Last month, after Hurricane Sandy struck, I published a story about climate science. The divisive issues swirling around global warming tend to provoke readers, and in this case the story spawned 47 comments.
“One storm and all the global climate gloom and doomers come crawling out of the woodwork,” Josh Irons groaned. “There’s been no change in 15 years—proven.” In response, Abobo offered: “*yawn*.”
One inane riff, apparently, was irresistible: “Climate change schmimate change. A colossal hoax.” I winced at its power to arrest further discussion, but only until the ensuing responses grew into the longest and most substantive thread on the page, parsing the distinction between proof and evidence, between inductive and deductive reasoning.
For all the fuss over comment sections being either plagued by cliques—cited in Gawker’s temporary decision earlier this year to disable comments—or soured by vulgarity, the discussion following my piece was balanced and civil, even harmonious.
The recipe at The Atlantic and across major online news platforms has been simple: moderate and rank posts, vet commenters, and design the forum with threading and sharing features that streamline the user experience. By tucking comment sections under the editorial tent, outlets can redeem trashy discussion.
“Readers are part of the conversation, and they’re part of the content of the site,” said Bob Cohn, digital editor at The Atlantic. Sometimes, he added, “the comment thread is at least as illuminating as the underlying piece.”
Thoughtful readers deserve a decorous, accessible outlet to voice opinions, to debate, and to add reporting from their own vantage points, contributions that can even spur fresh coverage.
But readers aren’t journalists. Still, according to new research, the distinction may be blurring.
A study published last month in the Journal of Computer-Mediated Communication shows that the way certain readers see news stories can be distorted by the comments below them. Especially when someone cares deeply about the issue being covered, disagreeable commentary may stoke concern over media bias.
Eun-Ju Lee, the study’s author and a professor in the department of communication at Seoul National University, in South Korea, recruited 240 participants to read an online news article about corporal punishment in elementary schools. They answered a pre-test questionnaire that measured their emotional stake in the issue, their stance, and their communications with others who either agreed or disagreed. Then, at their leisure, they read the story, a neutral portrayal of Seoul’s education superintendent pushing to ban corporal punishment, balanced by equal arguments on both sides.
By random assignment, half the readers encountered a comment section largely in support of the ban, with posts like this one: “You keep talking about educational authority, but educational authority does not come from physical punishment.” The other half saw mostly opposition: “The superintendent has no idea what he’s doing. Gotta say, he’s lost touch with reality, and life is gonna be a living hell for teachers.”
As expected, a control group reading the article without comments found it impartial. So too did readers who, in the pre-test questionnaire, identified as being relatively unmoved by news concerning corporal punishment.
For those with a personal stake, however, comments clashing with their own opinions raised suspicion of media bias, a phenomenon known as the hostile media effect. The commentary colored their reading.
“User-generated commenting is a key characteristic of Web 2.0,” Lee said of the findings. “When coupled with the news article, it can bring changes to the readers’ interpretations of news and the reactions they exhibit.”
We know that when readers are impassioned by an issue, they have a nose for media influence—real or perceived. If their own views and those of commenters appear in discord, these readers become defensive, figuring that the coverage swayed the others and, therefore, must be biased.
But Lee also suspects that keeping news and comments in proximity, now standard practice, inclines readers to jumble the two, and to misremember authorship. “People might have difficulty distinguishing what they read from which source,” she said, noting another recent study in which participants exposed to dismal commentary judged the news outlet to be correspondingly poor.
A similar “assimilation bias” was uncovered in a pivotal 2008 study led by Joseph Walther, a professor at Michigan State who has written prolifically on cyber psychology. The findings showed that whether Facebook users were voted “hot or not” depended, in part, on their friends—how attractive they were and even what they posted. How participants judged the primary content (a Facebook user) was shaped by the context (Facebook friends and their posts).
“As Web sites become more interactive and participatory,” Walther and his co-authors concluded, “the question of textual authority becomes less clear.”
And some news sites are going even further than integrating the prose of readers and writers.
At The Atlantic, Ta-Nehisi Coates’s blog will experiment with awarding some commenters moderator privileges. The Huffington Post is launching a new commenting platform that, I’m told by Community Editor Annemarie Dooling, will “put the power in users’ hands.” And while The New York Times was slow to tweak its meticulous system—which involved reviewing every comment before posting—last year its articles and blogs began sharing their pages with comments, and select commenters received a status upgrade that exempted them from moderation.
“It was about ease of use,” Sasha Koren, deputy editor of interactive news at The Times, told me. “It was more about the experience of commenting than a sort of philosophical shift.”
The delicate balance between enforcing stringent editorial standards, even empowering commenters to uphold them, and keeping readers from encroaching upon the newsroom is familiar to Yoni Appelbaum, a correspondent for The Atlantic. A few years ago, his commentary impressed Cohn enough to earn him a job offer.
“[If] writers and editors are willing to engage with and police the comments,” Appelbaum said, “I think it can yield tremendous dividends.” But only “to the extent that it is treated as something external to the site, as a space the readers own, as opposed to as a forum of the publication,” he added.
In a recent Q&A at the Times, Koren explained that much moderation is done by hand because “[w]e see these comments as an extension of our journalism.” Speaking with me, she clarified that “We’re not considering [readers] as akin to reporters. I think readers get that distinction, that what readers are sharing is by nature subjective.”
Even if readers do grasp that distinction, subjective commentary still holds sway. Part of the reason seems psychological: Conflating specific content (news) with the wider context (comments) and defending against perceived media influence are two mental quirks that can incite distrust of the coverage. But outlets’ push to cure online incivility—to ratchet up screening and reward good behavior—may put users at greater risk of warped reading. Quality control, as Lee suggests, must also account for “the possibility that the comments can change the very perception of the article itself.”