The Media Today

Meta let researchers study whether it stokes polarization. The results were polarizing.

August 3, 2023
 


For much of the past decade, academic researchers have been trying to persuade Meta, the company formerly known as Facebook, to share internal data about the behavior of users on its platforms, so that they might understand how—if at all—the sites’ algorithms influence people’s political views and behavior. The company suggested that it might offer such access; back in 2018, it even launched a project designed to share data. But the amount of usable information it ended up offering to researchers was minuscule and, in some cases, significantly flawed. As I reported for CJR two years ago this month, Meta also thwarted attempts by social scientists to collect their own data through scraping, and even disabled the accounts of some researchers. All this left the impression that the company had no interest in facilitating academic scrutiny.

It was more than a little surprising, then, when social scientists last week published not one but four new studies based on user data that Meta had shared with them, part of a research project that the company launched in 2020 to analyze users’ behavior both during and immediately after that year’s presidential election. Meta provided twenty million dollars in funding (the company did not pay the researchers involved directly), and the project was coordinated by the University of Chicago’s National Opinion Research Center, a nonpartisan organization that also helped to collect and distribute some of the data. The research was initially scheduled to be released in the summer of 2021, but was delayed a number of times; the lead researchers said that the job of sorting and analyzing all the data was “significantly more time-consuming” than they had expected. The January 6 riot at the Capitol also extended the project’s timeline. 

According to several of the researchers involved and an independent observer of the process—Michael W. Wagner, a professor of journalism and communication at the University of Wisconsin–Madison—Meta provided virtually all the data that they requested, and did not restrict or try to influence the research. A number of Meta staffers are named as coauthors of the papers. And the project isn’t done yet—another twelve research projects are set to drop soon.

In three of the four new studies, which were published in the journals Science and Nature, researchers from institutions including the University of Texas, New York University, and Princeton modified how Facebook functions in a number of different ways in an attempt to determine whether the site and its algorithms influenced users’ political beliefs or behavior. One study, which was based on data from more than twenty thousand Facebook users and a similar number of Instagram users, replaced the normal algorithm used to sort the news feed and instead showed users a reverse-chronological feed (one in which more recent posts appear first). As Fast Company notes, this was among the reforms endorsed by Frances Haugen, a former Facebook staffer turned high-profile whistleblower. At the time, the idea also seemed to appeal to a number of members of Congress.

A different paper examined whether limiting a user’s ability to re-share another’s post could lead to changes in their political beliefs, since this type of behavior often involves viral content that is more likely to be misleading. The researchers behind another paper tried limiting the amount of content to which users were exposed by friends or by Facebook pages and groups with which they sympathized, the idea being that such content can entrench beliefs and behavior in a way that exposure to differing political views might not. In the fourth paper, researchers analyzed which news stories made it into the feeds of Facebook users in the US and correlated this with how liberal or conservative the users were.

So did all this research show that Facebook’s algorithms changed people’s political behavior or beliefs? According to Meta, it emphatically did not. Nick Clegg, the company’s president of global affairs, wrote in a blog post that while questions about the impact of social media on political attitudes and behavior are not settled, the studies add to what he described as a “growing body of research showing there is little evidence that key features of Meta’s platforms alone cause harmful ‘affective’ polarization.” 


As The Atlantic’s Kaitlyn Tiffany noted, however, while the research comes with “legitimate vetting,” Clegg’s conclusion is fraught: “an immensely powerful company that has long been criticized for pulling at the seams of American democracy—and for shutting out external researchers—is now backing research that suggests, Hey, maybe social media’s effects are not so bad.” And, according to a report in the Wall Street Journal, some of the researchers involved in the project disagreed strenuously with Clegg’s characterization, as did Wagner, the impartial observer; along with officials from the journal Science, they stated (in the Journal’s words) that Meta was “overstating or mischaracterizing some of the findings.” Science headlined its print package about the research “Wired to Split.” Meta reportedly took issue with this, asking that a question mark be added to imply that the question was not settled, but Science told the company (also per the Journal) that it considered “its presentation of the research to be fair.” 

This disagreement was fueled in part by the complexity of the results. In the study that replaced an algorithm-powered feed with a chronological one, for example, users spent less time on Facebook. As a result, they were exposed to less content that reinforced their existing beliefs, which could be seen as a positive, and less polarizing, experience. At the same time, these users also saw a substantially higher number of posts from untrustworthy sources, which is a somewhat less desirable outcome. And the researchers found that neither of these changes had a perceptible impact on polarization, political awareness, or political participation.

The study in which users’ ability to re-share content was limited showed that those users saw a dramatically smaller number of posts from untrustworthy sources—but it also reduced the amount of political news that they saw, which led to lower levels of political knowledge, an outcome that might also be seen as negative. As in the chronological-feed study, limiting re-sharing seemed to have no perceptible impact on polarization. And the study that reduced the amount of content that users saw from like-minded accounts also showed no effect on polarization or the extremity of people’s views. As Fast Company noted, when users did see posts from like-minded sources, they were even more likely to engage with them, “as if being deprived of that content made them even more hungry for it.”

In addition to this complexity, the studies have been criticized on methodological and other grounds. Various critics of Meta noted that the findings only apply to a limited time period around the 2020 election, and that Meta’s content policies have since evolved. David Garcia, a professor at the University of Konstanz in Germany, wrote in Nature that, as significant and broad-reaching as the studies may have been from a research point of view, they do not foreclose the possibility that Facebook’s algorithms do contribute to political polarization; Garcia told Tiffany that the experiments were conducted at the individual level, whereas polarization is “a collective phenomenon.” To prove that algorithms do not play a role would be much harder, Garcia said—if it’s even possible at all. And Casey Newton wrote, in his Platformer newsletter, that the studies are consistent with “the idea that Facebook represents only one facet of the broader media ecosystem.” 

For me, this is the most compelling takeaway from the studies: it’s difficult, if not impossible, to show that Facebook did or didn’t change users’ political attitudes, because it is impossible to separate what happens on Facebook from what happens beyond it. As Newton points out, Facebook may have removed election and other strands of disinformation in 2020, but “election lies still ran rampant on Fox News, Newsmax, and other sources.” In the end, as Newton writes, “the rot in our democracy runs much deeper than what you find on Facebook.” 


Other notable stories:

  • Donald Trump’s latest indictment, on charges stemming from his efforts to overturn the 2020 election, continues to reverberate across the media landscape. As we noted in yesterday’s newsletter, a media race was on to identify the sixth unnamed coconspirator in the indictment; later, the New York Times reported that it appears to be Boris Epshteyn. The Times also reported that, shortly after being indicted, Trump dined with top executives at Fox News who urged him to participate in the presidential debate that the network is hosting later this month. Writing for the Washington Post, Philip Bump used the dinner as a jumping-off point to illustrate fresh complexities in the Trump-Fox relationship. And Lorraine Ali, of the LA Times, assessed the broader complexities of TV coverage of Trump’s case.
  • Recently, Yanping Chen—a Chinese American scientist who was investigated, but never charged, as part of a federal counterintelligence probe—moved to force Catherine Herridge, who reported on the probe for Fox News (and is now at CBS), to reveal her sources as part of a privacy claim that Chen is bringing against the FBI. This week, a federal judge sided with Chen and ordered Herridge to participate in a deposition on the matter. As CNN’s Oliver Darcy reports, the ruling “has alarmed press advocates, who worry that it might set a chilling precedent impacting the entire news media.” Ted Boutrous, a First Amendment lawyer, said that the judge struck the wrong balance.
  • Also this week, the White House press office finished the process of renewing reporters’ “hard passes”—press credentials that grant broad access to holders—for the first time since tightening the eligibility criteria. According to Politico’s West Wing Playbook newsletter, the White House denied that the new criteria were written so as to deny a hard pass to Simon Ateba—the White House correspondent for (and, apparently, sole employee of) Today News Africa, whose disruptive conduct has attracted attention of late—but he no longer has one. (He will still be able to attend briefings.)
  • In yesterday’s newsletter, we noted a report in Semafor suggesting that Jeffrey Goldberg, the top editor at The Atlantic, was poised to become the moderator of Washington Week, on PBS, succeeding Yamiche Alcindor, who stepped down as host earlier this year. Later yesterday, PBS confirmed both Goldberg’s appointment and that The Atlantic will now coproduce the show, which will be renamed to reflect the magazine’s involvement. Benjamin Mullin has more details for the Times.
  • And Block Club Chicago, a nonprofit news organization, closed a bilingual coronavirus hotline that it launched in 2020 to help connect residents to needed resources, citing the state of the pandemic and dwindling demand for the service, which has fielded five questions this year, down from a peak of over six hundred in 2021. Block Club insisted, however, that the closure doesn’t mean COVID is gone, and pledged to continue helping residents.

ICYMI: Trump whistleblower Miles Taylor on being caught in the media machine


Mathew Ingram was CJR’s longtime chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.