Journalists, investigators, and researchers who rely on social media platforms in their work know how distressing and traumatic the content those platforms host can be. For the past decade, details from the world’s conflicts have been shared on platforms including Facebook, YouTube, Twitter, Instagram, Snapchat, and TikTok. The aftermath of air strikes, illegal killings, torture, decapitations, grieving parents and orphaned children: if you can imagine it, you can probably find it. Many say there are more hours of footage of the Syrian conflict on YouTube than there have been hours in the conflict itself.
That’s why a May settlement from Facebook, dealing directly with the impact of secondary or vicarious trauma on its moderators, felt like a vindication. The settlement resolved a case brought in California by a former content moderator, and it followed years of calls for recognition of the harm that viewing this content can do. Under the out-of-court agreement, more than eleven thousand of the platform’s former and current content moderators will receive a minimum of $1,000 as compensation for viewing distressing content as part of their job. (Moderators could receive more if they are diagnosed with post-traumatic stress disorder as a result of their work.) The total outlay for Facebook is in the region of $52 million.
Such social media content has at times been extremely consequential: the United Nations, for instance, has cited content posted to Facebook as evidence of crimes against humanity, and the International Criminal Court has used videos of extrajudicial executions to issue arrest warrants. Filtering this content is crucial, and although machine learning is starting to be used for the task, it still requires a human eye. That need comes with a cost: a toll on mental health.
Journalists largely began to cotton on to the value of content posted to social media platforms in 2010, at the start of the Arab Spring. Before then, in breaking-news situations, news organizations relied upon stories from news agencies or state broadcasters until they could deploy their own staff. And even when foreign correspondents parachuted in, the video sent back was highly filtered, with gory content often removed before it even hit the editing room. Videos posted to social media changed this. They offered a different side of the story: a perspective from the ground. Journalists could see the benefit in seeking out and embedding such video within their stories, whether on the death of Colonel Qaddafi, the siege of Homs, or the rise of Daesh. News desks began to value skills in sourcing and verifying social media content for publication.
But even as editors increasingly championed such skills, little thought was given to the toll that viewing this content could take on mental health. The problem had never been encountered at such a scale. Videos posted by amateurs showed, and often dwelled on, every gory detail. Images began to travel from the field to the heart of the newsroom without any editorial filter. Because this was all so new, the job of discovering content often fell to younger journalists: those who “got the internet,” were tech savvy, and had the initiative to go looking for it. These junior staff members didn’t always feel they had the power to speak up about the horrors they were seeing as they sat behind their screens, cut off from newsroom discussion by the headphones they used to listen to the footage. Many kept quiet, fearing that raising concerns might affect their careers. They were afraid of hearing their editors say, “If it’s so bad, maybe this job isn’t for you.”
In 2014, I worked with Claire Wardle, now of First Draft, and Pete Brown, of the Tow Center for Digital Journalism, on a project examining the role of user-generated content in broadcast news. A key finding of that report was that vicarious trauma was “beginning to receive recognition as a serious issue, and news organizations must strive to provide support and institute working practices that minimize risk.” This led to my 2015 report, “Making Secondary Trauma a Primary Issue: A Study of Eyewitness Media and Vicarious Trauma on the Digital Frontline,” which was cited by the plaintiffs in the case settled with Facebook.
The goal of this research was not just to find out what was happening; it was to send up a signal: “Vicarious trauma is a serious issue; we have to take it seriously.” In most of the organizations we spoke to, we saw a toxic situation in which higher-ups failed to notice the impact vicarious trauma was having on staff, and requests for even the most basic help were turned down, even after staff had left or been out sick for extended periods. In one organization, based in a large city widely perceived as safe, an interviewee had asked to sit by a window for some respite from the daily grind of death and destruction they were viewing from their office. The request was turned down. Yet there were signs of hope. Some organizations, and the managers they employed, had started to recognize vicarious trauma and take it seriously.
Our mapping of vicarious trauma in newsrooms and human rights organizations was a small contribution to this change. Sessions on vicarious trauma are now regularly included in flagship industry events, including the Online News Association conference and the International Journalism Festival. Regular training now takes place to show staff how to recognize and mitigate the worst effects of viewing such content. And, most importantly, senior management in many organizations now acknowledges vicarious trauma as an issue (although, of course, improvements could still be made).
More research is being done into mitigation techniques for viewing distressing content, such as recognizing the value of working in teams to unravel difficult topics. At the same time, as the practice of using content sourced from the public has gone mainstream (the work of the New York Times Visual Investigations Team was recognized with a Pulitzer Prize in 2020, for instance), the feeling that social media newsgathering is a job primarily for new journalists has dissipated.
New challenges have appeared, however, as journalists investigate disinformation online. Even away from graphic violence, researching platforms that host conspiracy theories, extremist forums, and hate speech can be distressing in its own way, and doing this work requires self-care as well. Gathering and reading the testimonies of covid-19 victims and their families adds further stress, especially for those now working from home.
This is why Facebook’s recognition through the settlement is important. The argument has never been that these jobs should not exist, or that they are not important. Journalists need social media platforms to find and tell stories; that work is part of the job, and it has never been more important than today, when travel has become harder with much of the world in lockdown. Content moderators are needed to keep gratuitous violence away from our screens, and machine learning tools cannot make the judgment calls about what should be taken down. That Facebook now recognizes the toll this work can take on the mental health of its content moderators should be a call to all organizations that work with social media to monitor conflict and violent events around the world: ensure that the well-being of those doing this work is placed front and center.