The Media Today

The final stretch of the election in a content-moderation void

September 5, 2024
Elon Musk's Twitter profile displayed on a screen and reflected Twitter logo displayed on a phone screen are seen in this illustration photo taken in Krakow, Poland on April 14, 2022. (Photo illustration by Jakub Porzycki/NurPhoto via AP)

In recent weeks, governments around the world have signaled a desire to clamp down on tech companies that refuse to moderate themselves. In Brazil, a supreme-court judge blocked X nationwide because the platform didn’t comply with court orders to suspend certain accounts and lacked legal representation in the country. (The refusal to comply suggested that the company “considered itself above the rule of law,” according to a judge asked to review the decision.) In France, meanwhile, Pavel Durov, the Russian-born CEO of the messaging app Telegram, was arrested for refusing to cooperate with legal authorities regarding criminal activity on the app, including the sharing of content relating to child sexual abuse and drug trafficking. Citing a free speech defense, Durov had previously resisted answering regulators’ inquiries and handing over documents about criminal activity on the app; that resistance, according to the Financial Times, forms part of the French prosecutors’ complaint.

Increasingly, tech companies are also resisting calls to moderate themselves more strongly in the US, albeit without facing a governmental sledgehammer in response. Last week, Mark Zuckerberg, the CEO of Meta, wrote in a letter to the House Judiciary Committee that he regrets being “pressured” by the government to “censor” certain content related to the COVID pandemic in 2021. Republicans, who have long insisted that tech companies’ content decisions are tilted by an anti-conservative bias, considered this a big win. (Steven Levy, at Wired, described Zuckerberg’s remarks as “a mea culpa where he seems to indicate that there was something to the GOP conspiracy theory.”) The letter said that Meta would “push back” if something similar were to occur again.

And Musk, who calls himself a free speech “absolutist,” has drastically backslid on content moderation at X since acquiring the platform in 2022. He has ditched an election integrity team and reduced the platform’s trust and safety staff by a third; he even renamed the trust and safety team simply the “safety” team. “Any organization that puts ‘Trust’ in their name cannot [be] trusted, as that is obviously a euphemism for censorship,” he tweeted.

Musk also reinstated Donald Trump, whose account had been suspended for inciting violence on January 6, 2021. Trump, who in the meantime had started his own social media platform, Truth Social, didn’t immediately jump back on X. But after several years of almost total hiatus, and with his third presidential campaign nearing its conclusion, he is now back and active on the platform, engaging in familiar posting habits “involving entirely capitalized words, insults and nicknames, and exaggerations,” as Newsweek put it. Recently, Musk even conducted a friendly two-hour interview with Trump without challenging him on inaccurate statements. X is a very different environment for Trump now than it was during the 2020 election and its aftermath.

As Yona TR Golding wrote for CJR earlier this summer, tech companies aren’t just rolling back content moderation efforts—they’re also making it harder for journalists, researchers, and election observers to study what’s happening on their platforms. Musk put X’s application programming interface, or API, behind a paywall, charging researchers up to tens of thousands of dollars for data that was previously free; as a result, over a hundred projects were canceled, halted, or pivoted to other platforms. Then, last month, Meta got rid of CrowdTangle, a social media monitoring tool used to track misinformation on Facebook and Instagram. Meta promised to give researchers a “better” replacement tool, though an investigation that Kaitlyn Dowling and I undertook for the Tow Center found that this new tool is far less comprehensive and transparent. 

On X, Musk has kept one content moderation effort in place: a crowdsourced feature called Community Notes, originally known as Birdwatch. Community Notes relies on anonymous volunteer contributors to identify misleading posts and propose corrections, or “notes.” A note is made visible to the public only if enough users with diverse perspectives rate it as helpful. Well-known figures including President Biden, the short-lived British prime minister Liz Truss, and Musk himself (to his frustration) have been “corrected” by notes.
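To make that mechanism concrete, here is a deliberately simplified sketch of the cross-perspective agreement idea, not X’s actual code. X’s open-source algorithm infers each rater’s viewpoint from their full rating history via matrix factorization; in this sketch, purely for illustration, perspective labels are assumed to be known in advance, and the names (`Rating`, `note_is_visible`) and thresholds are hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass

# Simplified sketch of cross-perspective agreement. X's real algorithm
# infers each rater's viewpoint from their rating history via matrix
# factorization; assuming known perspective labels, as we do here, is a
# major simplification made purely for illustration.

@dataclass
class Rating:
    rater_perspective: str  # e.g. "left" or "right" (assumed known here)
    helpful: bool           # did this rater find the note helpful?

def note_is_visible(
    ratings: list[Rating],
    min_ratings_per_group: int = 5,   # hypothetical threshold
    helpful_threshold: float = 0.7,   # hypothetical threshold
) -> bool:
    """Show a note only if raters from every perspective group
    independently find it helpful, rather than just one side."""
    by_group = defaultdict(list)
    for r in ratings:
        by_group[r.rater_perspective].append(r.helpful)

    # With only one perspective represented, there is no
    # cross-perspective agreement to measure.
    if len(by_group) < 2:
        return False

    for votes in by_group.values():
        if len(votes) < min_ratings_per_group:
            return False  # not enough signal from this group yet
        if sum(votes) / len(votes) < helpful_threshold:
            return False  # this group does not find the note helpful

    return True  # every group independently rates the note helpful
```

The design choice the sketch preserves is that one-sided enthusiasm is not enough: a note rated helpful by only one perspective group stays hidden, which is what distinguishes this approach from a simple majority vote.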

Musk has praised Community Notes as a “game changer,” emphasizing its ability to provide users with accurate information without political bias. The effort seems to have inspired YouTube, which announced in July that it is testing a similar crowdsourcing feature. Certain users on that platform can now add clarifying information to videos, such as indicating when a song is meant to be a parody or when older footage is mistakenly presented as a current event. Bluesky, an X competitor founded by that company’s former CEO, Jack Dorsey, has similarly said that it aims to integrate a Community Notes–like feature in the future, according to TechCrunch. 

Community Notes has had a mixed record so far. One study found that across the political spectrum, notes explaining why a post was misleading were perceived as significantly more trustworthy than simple, context-free misinformation labels. Another found that the appearance of a note roughly halves the number of reposts. But there is a catch: “The main issue is that Community Notes are relatively slow (or rather, slow compared to the speed of information dissemination on Twitter),” Thomas Renault, the coauthor of the latter study, wrote on X. While 50 percent of reposts happen within the first five hours, notes take roughly fifteen hours to become public. Thus the damage has often already been done before a correction is made. 

Indeed, a joint investigation by ProPublica and the Tow Center showed that Community Notes didn’t scale sufficiently during the first month of the Israel-Hamas crisis, when false claims based on out-of-context, outdated, or manipulated media proliferated on X. Of about two thousand debunked posts reviewed, 80 percent or so did not have a note appended. And when notes did appear, they typically accrued only a fraction of the original tweet’s views.  

Now the onus is on Community Notes to curb misinformation about the upcoming presidential election. But because users have to agree on notes before they become visible, many notes on politically divisive topics are never made public. In July, the New York Times reported that “Nearly 8,000 fact checks have been drafted about immigration on Community Notes, but only 471 of them have been approved by users and made public on X, according to MediaWise, a media literacy program at the Poynter Institute. Only 4 percent of Community Notes about abortion have been made visible.” 

Researchers seem to agree that while the Community Notes feature has potential, it should not be used in isolation. As ProPublica and Tow’s investigation pointed out, “Community Notes were initially meant to complement X’s various trust and safety initiatives, not replace them.” X, however, seems to have decided that the feature is sufficient to police misinformation about the election, according to the Times. This is, perhaps, also the most convenient approach for Musk, who himself is an active spreader of misinformation on the platform. The same can be said for Trump, the conspiracy theorist Alex Jones, and the misogynistic internet personality Andrew Tate, all three of whom Musk reinstated to the platform despite their prior violations of its rules. By investing in Community Notes, X appears to be putting some energy into content moderation. In reality, though, the feature is struggling to keep up with a mammoth task.


Other notable stories:

  • In July, the Times revealed details of a “secret legal battle” between Rupert Murdoch and his children over the future of his media empire; Murdoch is reportedly attempting to change the terms of a family trust to ensure that control passes to his son Lachlan, who is perceived as more conservative than his siblings. Now a coalition of major news organizations, including the Times and CNN, is urging a court in Nevada to open the proceedings to public scrutiny. CNN’s Hadas Gold has more details.
  • And an appeals court upheld a lower-court ruling that the Internet Archive, a nonprofit, breached the copyright of authors and publishers when it removed restrictions on the lending of digital copies of books via an online library during the pandemic. (Mathew Ingram wrote about the case for CJR last year.) Kate Knibbs writes, for Wired, that the appeals court’s decision “could have a significant impact on the future of internet history.”

ICYMI: ‘That’s how you run a debate!’ 9News’s Kyle Clark on holding politicians accountable

Sarah Grevy Gotfredsen is a computational investigative fellow at the Tow Center for Digital Journalism at Columbia University. She works on a range of computational projects on the digital media landscape, including influence operations conducted through news media and the information ecosystem. She graduated from Columbia University in 2022 with an MS degree in data journalism.