
How Politics Broke Content Moderation

First came Elon Musk, then the House of Representatives.

June 10, 2024

In 2017, just after Donald Trump was elected president, a research group called the Alliance for Securing Democracy—part of a think tank, the German Marshall Fund of the United States, based in DC and Brussels—released a tool called Hamilton 68. It used network analysis to identify themes in messages spreading from Russian state actors on Twitter through what appeared to be pro-Kremlin civilian profiles. (The name of the tool was derived from the article in the Federalist Papers in which Alexander Hamilton warns of foreign powers’ seeking influence in American politics.) “If you’re looking at the accounts individually, you could anecdotally get a sense of what the narrative thrusts were at any given time,” Bret Schafer, a senior fellow at the Alliance, said recently. “But you might miss the aggregate. This was an attempt to combine all of that into one place, translated into English.”

Schafer later upgraded Hamilton to include more platforms—Facebook, Instagram, Telegram—and state-run accounts from China and Iran. Around the same time, Meta acquired CrowdTangle, a research tool that allowed Schafer and his team to better analyze data from Facebook and Instagram. By 2018, consensus was building that there was a serious problem of foreign agents wreaking havoc on social media; the Senate Intelligence Committee commissioned a study. The resulting report—Renée DiResta, an expert in online manipulation, now at the Stanford Internet Observatory, was the lead author—detailed how Russian agents had posed as Americans while posting divisive content. Across the country, scholars of online disinformation and influence campaigns began to identify a recurring pattern: foreign nations try to sway political discourse online, stoking tensions and provoking rumors; Americans continue to take the bait. “Around the 2020 election,” Schafer said, “it felt like everything was sort of ramping up in terms of civil society—academics, the platforms, parts of the government attempting to work together to get a better understanding of what’s happening online.”

It wouldn’t last. Elon Musk, a self-identified “free-speech absolutist,” acquired Twitter in late 2022 and announced that he would paywall the company’s application programming interface, or API—which had been Hamilton’s “best source of data,” Schafer told me. (The cost of entry, tens of thousands of dollars a month, put the API out of reach for most academic institutions and nonprofits, including Schafer’s.) Musk, who had long been a critic of Twitter’s content-moderation practices, declared that he wanted to create “a common digital town square where a wide range of beliefs can be debated in a healthy manner without resorting to violence”—signaling that he would do less to address false claims on the site. He renamed Twitter X and proceeded to gut its Trust and Safety team, the internal group responsible for moderating content and investigating foreign influence campaigns. He also began releasing what came to be known as the Twitter Files—documents and emails that, Musk claimed, proved the company’s previous leadership had colluded with researchers and federal officials to censor the right.

Enter right-wing officials. Angered by what they viewed as overzealous and biased policing of online speech, and fueled by their unyielding contention that the 2020 election was stolen from Trump, they began alleging that researchers—Schafer, DiResta, and their peers—were cogs in a government program of repression, silencing conservative voices. In January 2023, Republicans in the House of Representatives established a subcommittee to investigate the “weaponization” of the federal government against the political right. One of their first moves was to send letters to research groups—including Schafer’s and DiResta’s—demanding documentation of any correspondence the organizations had with the federal government or with social media companies about content moderation. Meanwhile, a lawsuit filed by the attorneys general of Louisiana and Missouri reached the Supreme Court; the case alleged that the Biden administration had exerted undue influence on social media companies when it asked them to take down COVID-19-related falsehoods and election denialism. The attorneys general argued that this constituted a form of state-sanctioned suppression of speech.


As the outrage mounted, Meta and YouTube scaled down efforts to label, monitor, and remove potentially harmful content. YouTube rolled back election-disinformation policies that had been put in place in response to 2020-election denialism; both Meta and YouTube allowed Trump back on their platforms, despite his prior terms-of-use violations. (“We carefully evaluated the continued risk of real-world violence, while balancing the chance for voters to hear equally from major national candidates in the run up to an election,” YouTube tweeted, by way of explanation. Meta posted a statement that the company had decided not to extend a two-year suspension period recommended by its oversight board, saying the “risk has sufficiently receded.”) Meta also announced that it would shut down CrowdTangle; a replacement is forthcoming, but it will be accessible only to approved researchers. (Meta says that it has partnered with the Inter-university Consortium for Political and Social Research at the University of Michigan “to share public data from Meta’s platforms in a responsible, privacy-preserving way.”) Tim Harper—a former content-policy manager at Meta, now a senior policy analyst at a nonprofit called the Center for Democracy and Technology (where I once worked)—believes that X’s high-profile policy changes and the ensuing onslaught of incendiary speech on the platform “moved the goalposts on what social media platforms think that they can get away with.” Whatever the situation at Meta and YouTube, he surmised, executives at those companies understand that “Twitter is probably going to be worse.” 

Now researchers are facing a breakdown of the system they had relied on. “Our ability to track trends over time, to understand the volume of posts, has really been diminished,” Schafer told me. He expects that in August, when CrowdTangle comes down, he may need to stop running Hamilton altogether. 


For years, most researchers and trust and safety teams worked with a tacit understanding that it was reasonable for the government, academics, and tech companies to be in contact. They’d been communicating regularly about content moderation since 2015, when ISIS-related posts and videos began to proliferate online. “The government had some forms of insight into who was running accounts, where, and the platforms had some visibility into who was running the accounts, where,” DiResta told me. “It became clear that we were best served by having those channels of communication be open.” The recent political agita over the extent of that communication has not only stymied research but also distracted from an important debate over the proper balance between online freedom and regulation—and, crucially, which moderation practices actually work in curbing harm.

The partisan drama, Schafer said, has led many in Democratic circles to insist reflexively that the government should be able to communicate with social media companies about threats. And yet: “There’s a fair conversation to have about what the limitations should be,” he told me. “I think if you flipped the script a little bit in terms of which government was currently in office, if these cases had a different flavor to them, there would be a different reaction.”

Efforts to limit disinformation online have sometimes gone awry, only to validate Republican claims of bias. In February 2023, for instance, three former Twitter executives admitted before the House Oversight Committee that the company had been wrong to temporarily block a New York Post article about the contents of Hunter Biden’s laptop (though they denied that government officials had directed them to do so). The executives argued that Twitter believed the story was part of a coordinated attempt to influence the presidential election.

And Hamilton itself has been flagged for bias. Emails leaked as part of the Twitter Files show that Yoel Roth, Twitter’s former head of Trust and Safety, worried that the first-generation Hamilton 68 was contributing to “the bot media frenzy.” The problem, Roth wrote, was that the tool conflated Russian-backed actors with regular conservative-leaning Americans: “Virtually any conclusion drawn from it will take conversations in conservative circles on Twitter and accuse them of being Russian.” Schafer told me that he viewed Roth’s response as “a strawman argument” that stemmed from persistent misrepresentation of the tool in the media. “Internally, we landed on ‘Russian linked’ to describe the account list, which was an imperfect catchall,” he said. “In external reporting, the data was often attributed to ‘Russian bots and trolls,’ which was clearly more problematic.” In the end, “we couldn’t overcome this misperception.” (The current version of Hamilton tracks only verified state-run accounts.)

When I spoke to Roth, he said that he still believes in the importance of content moderation, and that there is legitimate communication to be had between companies and the government. “When it comes to cybersecurity threats, governments are going to have substantially more information and access than the private sector will,” he said. “I don’t think that means the private sector just becomes a conduit for intelligence services to dictate content moderation. Platforms should apply their own scrutiny and discretion.” (Roth resigned from X in 2022, after Musk took over.) “Trust and safety work is always about balancing different types of harms and choosing situationally what you think is the most harmful thing that you want to address,” Roth added. “Sometimes the persistent availability of dangerous and traumatizing content is worse than the potential censorship of speech.”

Researchers now have less access to the information they need to make those judgments; they say it’s more difficult to map foreign disinformation campaigns than it has been for years. Recently, DiResta discovered what Meta believes to be a state-run Iranian account; Meta and other social media platforms removed the account, but it is still operating elsewhere, including on X. In the past, she would have reached out to X and asked why the account had been left alone, but that channel of communication has dried up. (Whether the account—which mostly posts vaccine conspiracy theories and MAGA content—is part of an official propaganda network remains unclear.) Schafer, for his part, has grown increasingly concerned about China, which appears to have ramped up its information campaigns. But Chinese diplomats seeking to influence international audiences favor X, whose data is now largely out of researchers’ reach, so his colleagues have little to go on. “It really leaves our China analyst in the dark,” he said.

Nora Benavidez, a senior counsel at Free Press, a media-focused research group, published a report in December: “How Social-Media Rollbacks Endanger Democracy Ahead of the 2024 Elections.” I asked what she made of the recent developments. “Researchers’ reporting on what’s happening on social media has been one of the most crucial pieces to holding platforms accountable,” she told me. “Ultimately, what we see is that democracy is a much lower priority for these companies than making sure they keep costs in line.”

Editor’s Note: This piece has been updated for clarity about the Alliance for Securing Democracy and the documentation it was asked to provide Congress.

Yona TR Golding is a CJR fellow.