Look closely at the methodology behind last week’s study claiming that academic medical centers hype their research in their press releases, and it becomes hard to agree with its conclusion. The releases are, after all, fundamentally intended to attract reporters’ interest. But the study claims that medical centers go too far in trying to do so.
The researchers’ intent—presumably—was to warn the public of shortcomings in these med-center missives to the media. Instead, they may have muddied the waters by failing to fully understand the topic and by implying that press offices, rather than journalists, are responsible for poor reporting.
Their report, published in the respected Annals of Internal Medicine, concluded: “Press releases from academic medical centers often promote research that has uncertain relevance to human health and do not provide key facts or acknowledge important limitations.”
The researchers looked at press releases issued in 2005 by the ten highest-ranked and ten lowest-ranked academic medical centers, according to U.S. News & World Report. They analyzed the releases against nearly three dozen criteria, such as whether they included basic details and cautions about the research at hand. Follow-up interviews with information officers at the medical centers provided additional detail about the schools’ communications practices.
Predictably, some major news media just parroted the report’s findings without much question and fell into a familiar trap of repeating claims and fixing blame. The Wall Street Journal’s health blog and USA Today’s On Deadline blog seemed to see the report as a way to shift the responsibility for poor medical reporting from the journalists to the PR types. Perhaps they were influenced by Annals of Internal Medicine’s own press release about the study, which totaled only 141 words. Other news media followed suit.
I’ve reported on biomedical research for three decades at one of the country’s largest academic medical centers, so the problems I see with this study—and its conclusions—might be somewhat obvious.
The study argues that many of the centers’ press releases overstated the immediate or future clinical relevance of the work mentioned, frowning especially on the promotion of animal studies. But anyone vaguely familiar with biomedical research knows that animal research is a necessary precursor to human clinical studies. And while it is true that some releases need more warnings about extrapolation to human health, stating what kind of treatments the research could lead to is perfectly legitimate.
More importantly, research at academic medical centers often focuses on basic mechanisms at the cellular level, not on new clinical treatments. The logic is to understand the underlying biology in hopes that it will lead to new findings in the future.
Immediate and assured clinical relevance isn’t a criterion that most scientists would categorically embrace before beginning their research.
Drummond Rennie, former deputy editor at the New England Journal of Medicine, once told Esquire that about half of what was published in that prestigious journal was eventually proven wrong. That is the nature of science, and it doesn’t diminish the value of the journal’s offerings.
Other problems with last week’s press-release study abound.
The determination that a particular release was “hyped” was made by two research assistants, designated as “coders,” who evaluated the releases against a variety of criteria such as the inclusion of key study details (design, size, etc.) and overall clarity. Agreement between the coders was high, but, as the report readily admits, their judgments about whether something was exaggerated were subjective. Furthermore, gauging the accuracy of any given release requires ample experience in science and medical reporting, and the report offered no description of the coders’ qualifications for that task.
For the interview portion of the study, a team member interviewed “the person in charge of media relations” at each medical center. But in my experience, the senior PR official at most institutions has little, if any, hands-on experience covering research. The researchers would have gotten more accurate answers by discussing the process with the staff science or medical writers.
The study pointed out that few of the press releases “provided access to the full scientific report.” But it failed to add that, in many cases, copyright restrictions prohibit institutions from providing copies of such papers to the news media; the journals that publish these reports hold fast to their exclusive legal right to distribute them. Omitting that point casts undue blame on the medical centers.
The researchers also faulted some releases for lacking outside review, but outside review is rarely, if ever, used on such releases, regardless of the subject. Unlike the inherently slow process of journal publication, the rush of daily journalism can hardly accommodate the time it requires. At trustworthy communications offices, however, press releases undergo an editing process similar to the one employed in most newsrooms. The release is edited by one or two staff editors experienced in evaluating research for its news value, then sent to the lead researcher, who is asked to check it for technical accuracy alone, not for any editorial slant. No deans, directors, department chairs, or administrators review the story prior to dissemination, which keeps overt PR intent from shaping it. That’s how it works at our shop, and at a host of other big-time places.
Interestingly, the Annals of Internal Medicine study reported that at all twenty medical centers, it was the individual researchers who requested that their work be covered in a press release, and that at more than half of the centers, administrators made similar requests. There was no suggestion that qualified staff science or medical writers were functioning as reporters and seeking out newsworthy stories, the practice routinely followed at most major research universities; nor were there any questions about this practice in the script (pdf) the authors used to gather data.
The study team recommended that academic medical centers simply reduce the number of releases they issue; in particular, the report said, they should avoid reporting work presented at scientific meetings that has not been published.
But doing so would cloak an important part of the way science is done in academe. It would also impede the ability of knowledgeable journalists to cover research that the public has, in many instances, funded, and to determine where it lies on the continuum from conjecture to conclusion.
To be fair, some press releases are, in fact, as spotty as the study implies. But doing fewer press releases isn’t the answer. Doing better press releases is.
Equally important is halting the decline in staff jobs for well-trained science journalists, and improving science training for all journalists who want it (to its credit, the Annals of Internal Medicine report acknowledges this and even suggests a few workshops). For instance, reporters should know that human trials are more noteworthy than animal trials; that phase-3 clinical trials carry more weight than phase-1 trials; and that multicenter, randomized, double-blind trials with large samples are more reliable than others.
Medical centers should absolutely work to improve research communications and reduce instances of hype. But in the end, The Washington Post’s perspective on the press-release report seems most correct:
“Journalists, read the darned studies!”
If the release conflicts with the study’s findings, ignore the release. But either way, get reporting.