How can we improve the accuracy with which evidence and statistics are reported?

The Well column of The New York Times recently suggested that too much exercise could be bad for your health based on the results of a study, the flaws of which Justin Wolfers eloquently explained in a subsequent issue of the Times. While this was a particularly embarrassing case of journalists giving more weight to a study than it deserved, it is hardly unique. We regularly hear or read that foodstuff X (coffee, butter, red wine) is bad for our health, only to read some months or years later that a new study shows the same food is in fact good for us. This cycle of excitement and disillusionment, whether it is about a new way to prevent cancer, reduce poverty, or improve children’s IQ, is not healthy for the public’s faith in science or journalism.

Recently the Department of Communication at Pontificia Universidad Católica de Chile (under the leadership of Gonzalo Saavedra, who opened the session) and J-PAL LAC brought together some of the leading journalists in Chile to discuss how journalists can improve their presentation of evidence and statistics. Paula Escobar-Chavarría moderated the session. My talk (slides available here) focused on the need for journalists, when reporting on a study, fact, or claim, to clearly distinguish between four different types of evidence: descriptive evidence, process evidence, correlations, and causal evidence. All four can be newsworthy. Francisco Gallego, in response to a question about whether correlations should never be considered newsworthy, gave the example of how important it would be to report that wages were lower for women than for men even if there were no way to identify whether the relationship is caused by discrimination. The mistake would be to report a correlation in a way that suggests causation.

I suggested the following checklist of questions journalists can ask when reporting on a claim:

1. What type of claim is this: descriptive, process, correlation, or causal impact?

2. Does the type of evidence match the claim? Is the claim causal but the evidence only a correlation?

3. If the claim is descriptive, is the sample large and representative?

4. If the claim is causal (i.e., about impact), what is the counterfactual: How do we know what would have happened otherwise?

5. Are there reasons to think that those who participated in the program/ate a certain type of food/went to a type of school are different in other ways from those who did not? (A short simulation after this list illustrates questions 4 and 5.)
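To make questions 4 and 5 concrete, here is a minimal sketch in Python. It is not from the talk: the job-training program, the "motivation" variable, and every number in it are invented purely for illustration. The sketch builds a world in which the program has zero true effect on wages, lets more motivated people select into it, and shows how a naive comparison of participants and non-participants manufactures a large "impact" that a randomized counterfactual does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical world: "motivation" raises wages AND makes people more
# likely to join a job-training program. The program's true effect on
# wages is set to zero, so any gap we "find" is selection, not impact.
true_effect = 0.0
motivation = rng.normal(0.0, 1.0, n)
joined = (motivation + rng.normal(0.0, 1.0, n)) > 0.5       # self-selection
wages = 30_000 + 5_000 * motivation + true_effect * joined + rng.normal(0, 2_000, n)

# Question 5: participants differ from non-participants in other ways,
# so the naive comparison attributes the motivation gap to the program.
naive_gap = wages[joined].mean() - wages[~joined].mean()

# Question 4: with a randomized counterfactual (coin-flip assignment),
# the same comparison recovers the true effect, which here is zero.
assigned = rng.random(n) < 0.5
wages_rct = 30_000 + 5_000 * motivation + true_effect * assigned + rng.normal(0, 2_000, n)
rct_gap = wages_rct[assigned].mean() - wages_rct[~assigned].mean()

print(f"Naive participant vs. non-participant gap: {naive_gap:>8,.0f}")  # large and spurious
print(f"Randomized comparison gap:                 {rct_gap:>8,.0f}")    # close to zero
```

Setting true_effect to a positive number makes the same point in reverse: the naive gap mixes the real effect with the selection effect, and only the randomized comparison isolates the former.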

How can a journalist make a story come alive and yet be true to the evidence? Telling individual stories is a powerful communication tool. Rather than interviewing a few (unrepresentative) people and drawing wider conclusions from those interviews, journalists can illustrate the findings of a representative and well-identified study by seeking out individuals whose stories reflect the study’s findings.

The precise use of words is something journalists care about. Everyday words like “impact,” “led to,” “cause,” or “the result of” imply causal evidence. They should only be used when we know what would have happened otherwise. When reporting on a study that describes a correlation, it can also be helpful to draw attention to this fact and discuss why the correlation may not be causal. (I spent a frustrating thirty minutes stuck in traffic listening to an admittedly hour-long NPR show on the higher-than-average rate of suicides among those taking antidepressants without once hearing the moderator ask, “Could this correlation be due to the fact that those on antidepressants are depressed and thus more prone to suicide?” The side benefit of this episode was that my six-year-old son in the back seat got an impromptu lecture on the difference between correlation and causation, similar to the one I received from my mother at about the same age.)
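To spell out that question in code rather than a car-ride lecture, here is a small simulation. Every number in it is invented, and it says nothing about antidepressants themselves; it only shows how a confounder (depression severity) that drives both who takes a medication and the underlying risk can, on its own, produce the correlation the moderator never questioned.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Purely illustrative numbers: depression "severity" is the confounder.
# It raises both the chance of being on medication and the underlying
# risk, while the medication itself is given ZERO causal effect.
severity = rng.normal(0.0, 1.0, n)
on_medication = rng.random(n) < 1.0 / (1.0 + np.exp(-(severity - 1.0)))
underlying_risk = 1.0 / (1.0 + np.exp(-(severity - 6.0)))    # rises with severity
event = rng.random(n) < underlying_risk                      # no drug effect added

rate_treated = event[on_medication].mean()
rate_untreated = event[~on_medication].mean()
print(f"Rate among those on the medication: {rate_treated:.3%}")
print(f"Rate among those not on it:         {rate_untreated:.3%}")
# The treated group shows a markedly higher rate even though the drug
# did nothing: the correlation is produced entirely by severity.
```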

Journalists have an important role in holding politicians, NGOs, and others to account for the claims they make. Rather than take these claims at face value, journalists are in a perfect position to probe and ask: How do you know that your policy caused that change?

As experts in communication, journalists also have an important role in communicating the results of studies that may be written in complex language. But communicating a complex argument both clearly and accurately usually requires investing time in understanding the study, and it may require asking for help in how to express the findings, either from the authors or from other experts, something that journalists are often reluctant to do. (I realize that journalists consider it against their standards to show a draft article to the interviewee or study author, but isn’t there a higher duty to accuracy?)

One of the participants at the event commented that journalists can be reluctant to say “we don’t know,” even though sometimes that is the only accurate reading of the evidence. But despite the many challenges to improving the accuracy with which evidence and statistics are reported in the press, I came away energized and optimistic. Throughout the discussion, journalists openly discussed the profession’s (and often their own individual) failings, happily engaged with difficult questions, and said they would welcome more advice and input to improve their reporting. This group was a very select sample of the most serious journalists in Chile, but I was left to wonder whether there would be the same appetite to engage with these issues in other countries. Are US journalists too proud to admit they need help?