My father, who spent two years in Plattsburgh, New York, as an Air Force doctor during the Vietnam War, used to say that the military had to be able to pick up clerks at a desk in the US, fly them to Saigon, and have them continue typing the same sentence as if they were still sitting in their US air base.
I imagine it must feel similar to end one day as a politics, business, or real estate reporter, then show up at work (from home, at the moment) and be told that you're on the health beat, covering the coronavirus pandemic. The language may be the same, but the jargon is completely different. What is a confidence interval? (It's sort of like a margin of error in a poll, but not exactly.) What is a P value? (Even scientists struggle to explain those, so I won't try here.)
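If numbers help, the margin-of-error analogy can be made concrete with a few lines of arithmetic. This is a purely illustrative sketch, with made-up poll numbers, using the normal approximation that underlies the familiar "plus or minus three points":

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a proportion,
    using the normal approximation -- the same math behind a
    poll's "margin of error"."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p - z * se, p + z * se

# A made-up poll of 1,000 people in which 500 pick one option:
lo, hi = proportion_ci(500, 1000)
# The interval runs from about 0.469 to 0.531, i.e. the familiar
# "50 percent, plus or minus 3 points."
```

A confidence interval in a study is read the same way: the paper reports an estimate, and the interval shows the range of values consistent with the data.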
I imagine that the dread a newly transferred coronavirus reporter feels when faced with a PDF filled with statistics is the same as I would feel if, as a career-long doctor-turned-medical journalist, I were suddenly assigned to cover the statehouse. I mean, what the hell is cloture? "How a Bill Becomes a Law" was great when I was a kid, but I'm reasonably sure it would serve me as well covering Congress as an English major's junior high biology class would serve them covering epidemiology.
That's particularly true when the science is moving at dangerous speeds, which, as Adam Marcus and I wrote for Wired in March, it currently is. Research findings are never vetted as carefully as many scientists, medical journals, and others would like us to think they are (hence one of the reasons for Retraction Watch, which Marcus and I created ten years ago), but now they are being posted online essentially without any review at all, testing, as the New York Times wrote in April, "science's need for speed limits."
Those findings are appearing in papers by the hundreds on what are known as preprint servers such as bioRxiv and medRxiv. (The awkward capitalization is an homage to the original preprint server, arXiv, created in 1991 mostly for physicists and mathematicians.) Researchers submit their work for posting on such servers before peer review, and often update manuscripts before they are "officially" published in journals. That means they are being disseminated to scientists more quickly than would be possible if they went through weeks or months of peer review, even if journals are speeding up that process dramatically during the pandemic.
Speed can be a good thing, if researchers understand the context. But it also means that reporters eager for scoops are seizing on what sound like important and impressive findings that are likely to soon be meaningless. For decades, the embargo has functioned as an artificial speed limiter on when a lot of research reached the public. In exchange for prepublication access (usually a matter of several days, to provide time for reading and reporting), journalists agree to a particular date and time for the release of news. Such embargoes (which are common to journalism more broadly) are ubiquitous in science and medical reporting.
For many years, prompted by Vincent Kiernan's Embargoed Science, I've thought that the downsides of embargoes are greater than their benefits. Many journalists essentially hand over control of what they will cover, and when, to journals, whose interests lie in drumming up publicity and recruiting splashy manuscripts from researchers. Journals have also typically scared researchers out of talking to reporters before the work appears in their pages, saying, or sometimes just strongly implying, that they wouldn't consider their manuscripts for publication, a risk in a publish-or-perish world.
One could argue that a pandemic like COVID-19 shifts the calculus on the benefits of embargoes, but that only holds if reporters use the additional time to digest findings and call experts unrelated to the study for comment. And the reality is that journalists don't have as much time as they need.
What is a newly transferred medical reporter to do? I would hope a veteran statehouse reporter would take me aside to give me a tour of the beat, so I'll try to do the same here, using a framework from a guest lecture I've given at Columbia Journalism School for some years at the request of adjunct Charles Ornstein, a veteran healthcare reporter and editor at ProPublica: "How Not to Get It Wrong." See you on the front lines.
Always read the entire paper
Press releases about studies, whether they are from universities, journals, or industry, use spin. Study abstracts aren't much better. Most publishers are more than happy to provide PDFs of paywalled papers, if need be, as are authors. And such access is a benefit of membership in the Association of Health Care Journalists, which offers a wealth of other resources, from a listserv to tip sheets to fellowships, that make for career-long learning. (Disclosure: I'm volunteer president of the board of directors.)
Ask "dumb" questions
Many science reporters, particularly those who are trained in science, are afraid that sources will judge them for asking what seem like basic questions, only to end up with a notebook full of jargon without really understanding the story. Remember that your primary loyalty is to your readers and viewers, who don't know the jargon, either.
Ask smart questions
All right, so you want to impress a source. Fine: use that instinct to dive deeper. Where (if at all) was the study published? Was it in humans, or in animals, where results often don't translate to clinical practice? Was the study designed to find what it purports to find? Did the authors move the goalposts (in scientific lingo, endpoints) midstream?
Read the discussion, look for the limitations
Good journals won't let authors get away with leaving out the limitations of their work. Was the sample size skewed? Could something that the study couldn't control for render the results less meaningful? The limitations are there, in all their glory: selection bias, measurement bias, recall bias, lack of a control group, and more. Look for them.
Figure out your angle
It's fine to write about a preliminary study because it's likely to lift the prospects of a local company, or because the science is fascinating. Just don't make it sound as though the findings are a cure for coronavirus infections.
Avoid disease-mongering
A treatment may work perfectly, but only for a small population. Or the FDA may approve a drug, but only for limited indications. Don't imply that millions suffer from a disease when only a handful do, nor that a condition is life-threatening when it's just a nuisance.
Quantify
Okay, so you went into journalism because you hate math. But your readers and viewers can't make better decisions if all you tell them is that "patients improved if they took the treatment." By how much? Would you choose a car because someone told you it was cheaper, but not how much cheaper?
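One way to see why the numbers matter: relative and absolute improvements can tell very different stories about the same trial. A hedged sketch with invented numbers (the function and its names are mine, not from any study):

```python
def risk_summary(events_treated, n_treated, events_control, n_control):
    """Compare a treatment arm with a control arm three ways."""
    risk_treated = events_treated / n_treated
    risk_control = events_control / n_control
    arr = risk_control - risk_treated   # absolute risk reduction
    rrr = arr / risk_control            # relative risk reduction
    nnt = 1 / arr                       # number needed to treat
    return arr, rrr, nnt

# Invented trial: 20 of 1,000 treated patients die, vs 30 of 1,000 controls.
arr, rrr, nnt = risk_summary(20, 1000, 30, 1000)
# "Deaths fell by a third" (rrr of about 0.33) sounds dramatic; the
# absolute drop is one percentage point, and about 100 patients must
# be treated to prevent one death.
```

A story that reports only the relative number leaves readers unable to judge whether the benefit is large or tiny.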
What are the side effects?
Every treatment has them, and you're unlikely to find them listed in a press release or abstract. Dig in to Table 3 or 4. Or ask another "dumb" question.
Who dropped out?
Very few studies end with the same number of people with which they started. After all, people move, or get bored of being in a study, and they're not lab mice, so they're free to do what they want. But sometimes more people drop out than average, and that could be a cause for concern. Find out if the authors have done what is known as an "intention-to-treat analysis" to keep themselves honest. (I even wrote a letter to a journal about a case where they didn't, after editing a story about a particular study. Yes, I'm a geek, and no, you don't have to go that far.)
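To make the idea concrete, here is a toy sketch (all numbers invented) of why intention-to-treat matters: it keeps every randomized patient in the denominator, while a completers-only analysis quietly flatters the treatment:

```python
def intention_to_treat_rate(assigned_ids, improved_ids):
    """Intention-to-treat: everyone assigned to the arm stays in the
    denominator, whether or not they finished the study."""
    return len(improved_ids & assigned_ids) / len(assigned_ids)

# Invented arm: 100 patients assigned, 20 dropped out,
# and 40 of the 80 who finished improved.
assigned = set(range(100))   # patient IDs randomized to this arm
improved = set(range(40))    # the 40 who improved

per_protocol = 40 / 80                              # 0.5: completers only
itt = intention_to_treat_rate(assigned, improved)   # 0.4: everyone counts
```

If the story you are editing quotes the flattering 50 percent figure, that is the moment to ask which analysis produced it.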
Are there alternatives?
The fact that a new drug worked wonders sounds great, until you learn that there was no control group, so you have no idea what would have happened if the participants hadn't had the drug. The same goes for observational studies that claim a link between a particular diet or lifestyle and health. As I often say to my students, it's difficult to smoke while you're on the treadmill. Healthy behaviors tend to go together.
Who has an interest?
Read those disclosures at the ends of papers, keeping in mind that while most clinical trials are funded by industry, such ties are linked to a higher rate of positive results. And these conflicts can become stories with impact themselves, as Ornstein and Katie Thomas of the New York Times have shown.
Don't rely only on study authors for the whole story
The same way you'd want to get outside comment on any other story, seek the thoughts of experts unrelated to a study or finding. I explain how I do that here, in a blog post for science writer extraordinaire Ed Yong, now at The Atlantic, in his early days at Discover.
Use anecdotes carefully
Narrative ledes, not to mention profiles, can be very powerful. But they can leave an impression that a treatment works, or injures, when it doesn't. As I tell my students, it's difficult to interview people buried in a cemetery. If you're only including success stories, you're not painting a full picture.
Watch your language
Correlation is not causation. (Or, as my father used to say, "True, true, unrelated.") Don't use words like "reduce" or "increase" when all you know is that some factor is correlated with another. "Linked," "tied," or "associated" is more accurate. For fun with spurious correlations, read Tyler Vigen.
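The point about "linked" versus "caused" is easy to demonstrate: any two quantities that simply trend together will correlate strongly, causal or not. A toy sketch with made-up numbers:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two made-up series that both just rise through the summer:
ice_cream_sales = [10, 20, 30, 40, 50]
drownings = [1, 2, 3, 4, 5]
r = pearson(ice_cream_sales, drownings)
# r is 1.0: perfectly "linked," yet neither causes the other;
# both track a hidden factor (warm weather).
```

That hidden third factor, a confounder, is exactly what a careful story should hunt for before using a causal verb.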
Some final words: Despite my years in medical school, I need frequent refreshers on how to read medical studies. For those, I recommend getting to know a biostatistician, particularly during a pandemic.