
3 tips for understanding science journalism

March 20, 2015


Is Alcoholics Anonymous the most effective way to treat addiction? Some science and health journalists said yes this week. Other journalists said no. This puts readers in an awkward position: How do you make sense of it when good journalists writing for reputable publications appear to claim opposite truths, in both cases citing scientific evidence?

A beautifully written story by Gabrielle Glaser in the April 2015 issue of The Atlantic, “The Irrationality of Alcoholics Anonymous,” takes aim at the 12-step program that has become synonymous with addiction recovery since its 1935 debut. She points out that AA is inherently difficult to study, given its anonymity, and so its effectiveness has never been proven scientifically. Much of our faith in the program is a legacy of a time when we knew less about the biology of drinking, and anecdotally, many people struggling with alcoholism have found AA off-putting, particularly if they don’t subscribe to the “higher power” that is a backbone of the program. Given the ubiquity of 12-step rehab programs, many felt they had no other options. This is a problem, Glaser argues, because it causes needless suffering. She advocates for a serious look at other treatments for addiction that have shown tangible evidence of effectiveness.

But on the same day that Glaser’s article was published online, New York Magazine printed a thoughtful piece by Jesse Singal on “Why Alcoholics Anonymous Works.” The piece directly challenged The Atlantic’s, arguing that Glaser left out key information. Her article leans heavily on a 2006 Cochrane Collaboration review of four decades of research, which found that no studies “unequivocally” showed the effectiveness of AA or similar treatments. But, Singal writes, in the decade since the Cochrane report came out, new research suggests that AA actually is effective. He points to promising data on “12-step facilitation” (TSF) programs, designed as clinical interventions to guide people into AA. Data on them will be integrated into an update of the Cochrane report this year by its original lead author. “In other words,” Singal writes, “the most comprehensive piece of research Glaser is using to support her argument will, once it takes into account the latest findings, likely reverse itself.”

Glaser, for her part, says via email that TSF programs, which are based on one-on-one counseling, are notably different from AA and 12-step rehabilitation, so the evidence on them can’t be easily transferred. She also notes that her piece “doesn’t say AA and 12-step programs don’t work. It says there’s no conclusive data on how well they work.”

The debate on 12-step programs is not new. But the articles by Glaser and Singal point to a larger challenge. One of the most frustrating things about reading (or listening to) science journalism is trying to resolve contradictory claims. Coffee is good for you; no, it’s bad for you. Wearable technology, like the new Apple Watch, can harm your health; wait, no it can’t. Eating meat is good for you; no, don’t eat meat! Some people, like many of those who oppose vaccines, ease the tension by deciding that science is all relative—just a matter of personal opinion. But of course, it is more than that.

“When it’s at its best, science journalism works much like science,” said Wade Roush, acting director of the Knight Science Journalism program at the Massachusetts Institute of Technology. “Competing claims get tested and weighed against the available evidence. There’s often room for debate about what constitutes reliable evidence.”


Here are three tips for navigating that debate, so that those of us trying to understand conflicting science journalism can get closer to the real story.

Be wary of sweeping claims. (And journalists: Don’t make these claims.)

This isn’t just the territory of advertisers anymore. In an era where every click counts, headlines and social media posts that make broad generalizations are more popular than ever. And it’s natural for people to look for a clean takeaway from the articles they read, especially on subjects like addiction recovery that profoundly impact lives.

Unfortunately, sweeping claims inevitably simplify the science. In almost every case, there is nuance and complexity behind the display copy.

“All things being equal,” said Singal, “I have more trust in journalists who include caveats in their reporting, and who talk about method limitations.”

The two AA articles illustrate the subtlety here. Despite their opposing frames (AA is irrational vs. AA works), Roush points out that the journalists aren’t wholly contradicting each other. “It’s a case where two writers are disagreeing about how much weight to give to different collections of evidence,” he said. As scientists continue to study these questions, we’ll get a clearer understanding of the terms of the debate.

Singal also mentioned that journalists go through the same learning process as readers when it comes to looking beyond the scientific hype. “It’s incredibly tricky for science reporters who are inundated with studies every day, and press releases that say things that are a little different than what the study says, or there’s statistical trickery going on. … I wish there was a more systematic way of doing it, but at a certain point, you have to make choices by going with the institutions we trust.”

Keep an eye on the evidence, not just the rhetoric or the narrative.

Does the journalist cite scientific experts with legitimate credentials? Were the research studies designed in a way that fairly evaluated the data? “The results of a randomized controlled study, for example, are almost always more believable than those of an observational or prospective study,” Roush said.

This “gold standard” of trustworthy evidence is sometimes carelessly implied in media reports. Glaser said that research is sometimes “reported as studies when in fact they are observational studies”: not randomized, not double-blind. That plainly misleads readers about how much weight they should give the evidence. Journalists should be clear on the difference themselves, and be transparent about it in their copy; readers should bring due skepticism to media sources that are lax about the distinction.

Glaser also pointed out that “sometimes reporters on deadline don’t have the time to research subjects in depth.” That might be understandable, but the consequences can be terribly unforgiving. “Imagine what would have happened if people had had the time to question Andrew Wakefield’s study of 12 children—12!—in the 1990s,” Glaser said, referring to the infamous study that claimed to show that measles, mumps, and rubella vaccinations cause autism. “Anthony Fauci (director of the National Institute of Allergy and Infectious Diseases) wouldn’t have to be on the offensive about a disease this country vanquished decades ago.”

It might not be possible to do a randomized controlled study on certain subjects—Alcoholics Anonymous, for example—but readers should be on the lookout for research studies that are as rigorous as possible given the circumstances, and that tailor their scope in a meaningful way. Because, however the story is framed, “the data itself, if it was properly collected and reviewed, is something you can’t argue with,” Roush said. “It’s not opinion: It’s a set of facts about something the scientists observed in the world.”

Be willing to change your mind. And then change it again.

Remember that science is a process, not a collection of facts. (Though facts are revealed in that process, of course.) Just as scientists revise their conclusions when new and better data comes to light, so readers (and journalists) should be prepared to do the same. Changing your mind in the face of new information doesn’t make the earlier data invalid, Roush points out. Nor does it suggest that the science is all a matter of opinion.

“Readers need to understand that science is not truth: It is a self-correcting search for truth,” Roush said. “The fact that scientists don’t always agree is not a sign of weakness; it’s a sign that the process is working correctly.”

It also is a sign that the process is genuinely interesting. There are surprises along the way, and scientists will consistently find results that are unexpected … or, to put it another way, results that are newsworthy.

Anna Clark is a journalist in Detroit. Her writing has appeared in ELLE Magazine, The New York Times, The Washington Post, Next City, and other publications. Anna edited A Detroit Anthology, a Michigan Notable Book, and she was a 2017 Knight-Wallace journalism fellow at the University of Michigan. She is the author of The Poisoned City: Flint’s Water and the American Urban Tragedy, published by Metropolitan Books, an imprint of Henry Holt. She is online at www.annaclark.net and on Twitter @annaleighclark.