In 2017, just after Donald Trump was elected president, a research group called the Alliance for Securing Democracy (part of the German Marshall Fund of the United States, a think tank based in DC and Brussels) released a tool called Hamilton 68. It used network analysis to identify themes in messages spreading from Russian state actors on Twitter through what appeared to be pro-Kremlin civilian profiles. (The name of the tool was derived from the article in the Federalist Papers in which Alexander Hamilton warns of foreign powers' seeking influence in American politics.) "If you're looking at the accounts individually, you could anecdotally get a sense of what the narrative thrusts were at any given time," Bret Schafer, a senior fellow at the Alliance, said recently. "But you might miss the aggregate. This was an attempt to combine all of that into one place, translated into English."
Schafer later upgraded Hamilton to include more platforms (Facebook, Instagram, Telegram) and state-run accounts from China and Iran. Around the same time, Meta acquired its own research tool, CrowdTangle, which allowed Schafer and his team to better analyze data from Facebook and Instagram. By 2018, consensus was building that there was a serious problem of foreign agents wreaking havoc on social media; the Senate Intelligence Committee commissioned a study. The resulting report, whose lead author was Renée DiResta, an expert in online manipulation now at the Stanford Internet Observatory, detailed how Russian agents had posed as Americans while posting divisive content. Across the country, scholars of online disinformation and influence campaigns began to identify a recurring pattern: foreign nations try to sway political discourse online, stoking tensions and provoking rumors; Americans continue to take the bait. "Around the 2020 election," Schafer said, "it felt like everything was sort of ramping up in terms of civil society: academics, the platforms, parts of the government attempting to work together to get a better understanding of what's happening online."
It wouldn't last. Elon Musk, a self-identified "free-speech absolutist," acquired Twitter in late 2022 and announced that he would paywall the company's application programming interface, or API, which had been Hamilton's "best source of data," Schafer told me. (The cost of entry, tens of thousands of dollars a month, put the API out of reach for most academic institutions and nonprofits, including Schafer's.) Musk, who had long been a critic of Twitter's content-moderation practices, declared that he wanted to create "a common digital town square where a wide range of beliefs can be debated in a healthy manner without resorting to violence," signaling that he would do less to address false claims on the site. He changed Twitter's name to X and proceeded to gut its Trust and Safety team, the internal group responsible for moderating content and investigating foreign influence campaigns. He also began releasing what came to be known as the Twitter Files: documents and emails that Musk claimed proved the company's previous leadership had colluded with researchers and federal officials to censor the right.
Enter right-wing officials. Angered by what they viewed as overzealous and biased policing of online speech, and fueled by their unyielding contention that the 2020 election was stolen from Trump, they began alleging that researchers (Schafer, DiResta, and their peers) were cogs in a government program of repression, silencing conservative voices. In January, Republicans in the House of Representatives established a subcommittee to investigate the "weaponization" of the federal government against the political right. One of their first moves was to send letters to research groups, including Schafer's and DiResta's, demanding documentation of any correspondence the organizations had with the federal government or with social media companies about content moderation. Around the same time, a lawsuit filed by the attorneys general of Louisiana and Missouri reached the Supreme Court; the case alleged that the Biden administration had exerted undue influence on social media companies when it asked them to take down COVID-19-related falsehoods and election denialism. The attorneys general argued that this constituted a form of state-sanctioned suppression of speech.
"Sometimes the persistent availability of dangerous and traumatizing content is worse than the potential censorship of speech."
As the outrage mounted, Meta and YouTube scaled down efforts to label, monitor, and remove potentially harmful content. YouTube rolled back election-disinformation policies that had been put in place in response to 2020-election denialism; both Meta and YouTube allowed Trump back on their platforms, despite his prior terms-of-use violations. ("We carefully evaluated the continued risk of real-world violence, while balancing the chance for voters to hear equally from major national candidates in the run up to an election," YouTube tweeted, by way of explanation. Meta posted a statement that the company had decided not to extend a two-year suspension period recommended by its oversight board, saying the "risk has sufficiently receded.") Meta also announced that it would shut down CrowdTangle; a replacement is forthcoming, but it will be accessible only to approved researchers. (Meta says that it has partnered with the Inter-university Consortium for Political and Social Research at the University of Michigan "to share public data from Meta's platforms in a responsible, privacy-preserving way.") Tim Harper, a former content-policy manager at Meta who is now a senior policy analyst at a nonprofit called the Center for Democracy and Technology (where I once worked), believes that X's high-profile policy changes and the ensuing onslaught of incendiary speech on the platform "moved the goalposts on what social media platforms think that they can get away with." Whatever the situation at Meta and YouTube, he surmised, executives at those companies understand that "Twitter is probably going to be worse."
Now researchers are facing a breakdown of the system they had relied on. "Our ability to track trends over time, to understand the volume of posts, has really been diminished," Schafer told me. He expects that in August, when CrowdTangle comes down, he may need to stop running Hamilton altogether.
For years, most researchers and trust and safety teams worked with a tacit understanding that it was reasonable for the government, academics, and tech companies to be in contact. They'd been communicating regularly about content moderation since 2015, when ISIS-related posts and videos began to proliferate online. "The government had some forms of insight into who was running accounts, where, and the platforms had some visibility into who was running the accounts, where," DiResta told me. "It became clear that we were best served by having those channels of communication be open." The recent political agita over the extent of that communication has not only stymied research but also distracted from an important debate over the proper balance between online freedom and regulation, and, crucially, over which moderation practices actually work in curbing harm.
The partisan drama, Schafer said, has led many in Democratic circles to insist reflexively that the government should be able to communicate with social media companies about threats. And yet: "There's a fair conversation to have about what the limitations should be," he told me. "I think if you flipped the script a little bit in terms of which government was currently in office, if these cases had a different flavor to them, there would be a different reaction."
Efforts to limit disinformation online have sometimes gone awry, only to validate Republican claims of bias. In February, for instance, three former Twitter executives admitted, before the House subcommittee investigating collusion, that the company had been wrong to temporarily block a New York Post article about the contents of Hunter Biden's laptop (though they denied that government officials had directed them to do so). The executives argued that Twitter believed the story was part of a coordinated attempt to influence the presidential election.
And Hamilton itself has been flagged for bias. Emails leaked as part of the Twitter Files show that Yoel Roth, Twitter's former head of Trust and Safety, worried that the first-generation Hamilton 68 was contributing to "the bot media frenzy." The problem, Roth wrote, was that the tool conflated Russian-backed actors with regular conservative-leaning Americans: "Virtually any conclusion drawn from it will take conversations in conservative circles on Twitter and accuse them of being Russian." Schafer told me that he viewed Roth's response as "a strawman argument" that followed from inveterate media misrepresentation. "Internally, we landed on 'Russian linked' to describe the account list, which was an imperfect catchall," he said. "In external reporting, the data was often attributed to 'Russian bots and trolls,' which was clearly more problematic." In the end, "we couldn't overcome this misperception." (The current version of Hamilton tracks only verified state-run accounts.)
When I spoke to Roth, he said that he still believes in the importance of content moderation, and that there is legitimate communication to be had between companies and the government. "When it comes to cybersecurity threats, governments are going to have substantially more information and access than the private sector will," he said. "I don't think that means the private sector just becomes a conduit for intelligence services to dictate content moderation. Platforms should apply their own scrutiny and discretion." (Roth resigned from X in 2022, after Musk took over.) "Trust and safety work is always about balancing different types of harms and choosing situationally what you think is the most harmful thing that you want to address," Roth added. "Sometimes the persistent availability of dangerous and traumatizing content is worse than the potential censorship of speech."
Researchers now have less access to the information they need to make those judgments; they say it's more difficult to map foreign disinformation campaigns than it has been for years. Recently, DiResta discovered what Meta believes to be a state-run Iranian account; Meta and other social media platforms removed it, but it is still operating elsewhere, including on X. In the past, she would have reached out to X and asked why the account had been left alone, but that channel of communication has dried up. (Whether the account, which mostly posts vaccine conspiracy theories and MAGA content, is part of an official propaganda network remains unclear.) Schafer, for his part, has grown increasingly concerned about China, which appears to have ramped up its information campaigns. But because X is the platform of choice for Chinese diplomats seeking to influence international audiences, his colleagues have little to go on. "It really leaves our China analyst in the dark," he said.
Nora Benavidez, a senior counsel at Free Press, a media-focused research group, published a report in December: "How Social-Media Rollbacks Endanger Democracy Ahead of the 2024 Elections." I asked what she made of the recent developments. "Researchers' reporting on what's happening on social media has been one of the most crucial pieces to holding platforms accountable," she told me. "Ultimately, what we see is that democracy is a much lower priority for these companies than making sure they keep costs in line."
Editor’s Note: This piece has been updated for clarity about the Alliance for Securing Democracy and the documentation it was asked to provide Congress.