Misinformation and disinformation have arguably never been as prominent or widely distributed as they are now, thanks to smartphones, the social Web, and apps such as Facebook, X (formerly Twitter), TikTok, and YouTube. Unfortunately, as the US draws closer to a pivotal election in which trustworthy information is likely to be more important than ever, various researchers and academic institutions are scaling back or even canceling their misinformation programs, due to legal threats and government pressure. At the same time, a number of large digital platforms have laid off hundreds or even thousands of the employees who specialized in finding and removing hoaxes and fakes, in some cases leaving only a skeleton staff to handle the problem. And all of this is happening as the quantity of fakes and conspiracy theories is expanding rapidly, thanks to cheap tools powered by artificial intelligence that can generate misinformation at the click of a button. In other words, a perfect storm could be brewing.
Over the weekend, Naomi Nix, Cat Zakrzewski, and Joseph Menn described, in the Washington Post, how academics, universities, and government agencies are paring back or even shutting down research programs designed to help counter the spread of online misinformation, because of what the Post calls a “legal campaign from conservative politicians and activists, who accuse them of colluding with tech companies to censor right-wing views.” This campaign, which the paper says is being led by Jim Jordan, the Republican congressman from Ohio who chairs the House Judiciary Committee, and his co-partisans, has “cast a pall over” programs that study misinformation online, the Post says. Jordan and his colleagues have issued subpoenas demanding that researchers turn over their communications with the government and social media platforms as part of a congressional probe into alleged collusion between the White House and the platforms.
The potential casualties of this campaign include a project called the Election Integrity Partnership, a consortium of universities and other agencies, led by Stanford and the University of Washington, that has focused on tracking conspiracy theories and hoaxes about voting irregularities. According to the Post, Stanford is questioning whether it can continue participating because of ongoing litigation. (“Since this investigation has cost the university now approaching seven [figure] legal fees, it’s been pretty successful, I think, in discouraging us from making it worthwhile for us to do a study in 2024,” Alex Stamos, a former Facebook official who founded the Stanford Internet Observatory, said.) Meanwhile, the National Institutes of Health shelved a hundred-and-fifty-million-dollar program aimed at correcting medical misinformation because of legal threats. In July, NIH officials reportedly sent a memo to employees warning them not to flag misleading social media posts to tech companies.
As I wrote for CJR last week, contacts between government agencies and the platforms are also at the heart of a lawsuit that is currently working its way through the court system. The case began last year, when the attorneys general of Louisiana and Missouri sued the Biden administration, alleging that the administration’s discussions with Meta, X, and YouTube violated the First Amendment by coercing those platforms into removing speech. In July, a federal district court judge in Louisiana ruled in favor of the states and ordered the administration to stop talking with the platforms; he also ordered government agencies to stop working with academics who specialize in disinformation. That order was amended by the Fifth Circuit Court of Appeals, and the Biden administration has asked the Supreme Court to hear the case, but the brouhaha appears to have contributed to an atmosphere of fear about the repercussions of misinformation research.
In addition to this case and the House investigation, Stephen Miller, a former Trump adviser who runs a conservative organization called the America First Legal Foundation, is representing the founder of Gateway Pundit, a right-wing website, in a lawsuit alleging that researchers at Stanford and other institutions conspired with the government to restrict speech. And Elon Musk, the owner of X, is suing the Center for Countering Digital Hate, a nonprofit advocacy group that Musk alleges has scraped large amounts of data from the platform without proper permission, as part of what Musk calls a conspiracy to persuade advertisers not to spend money there. A researcher who asked not to be named told the Post that as a result of such attacks, the whole area of misinformation research “has become radioactive.”
While lawsuits and investigations are chilling research into misinformation, the platforms are simultaneously devoting fewer resources to finding or removing fakes and hoaxes. Earlier this month, Nix and Sarah Ellison wrote in the Post that tech companies including Meta and YouTube are “receding from their role as watchdogs” aimed at protecting users from conspiracy theories in advance of the 2024 presidential election, in part because layoffs have “gutted the teams dedicated to promoting accurate information” on such platforms. Peer pressure may have played a role, too: according to the Post, Meta last year considered implementing a ban on all political advertising on Facebook, but the idea was killed after Musk said he wanted X, a key Meta rival, to become a bastion of free speech. As Casey Newton wrote in his Platformer newsletter in June, one function that Musk seems to have served in the tech ecosystem is to “give cover to other companies seeking to make unpalatable decisions.”
Emily Bell, the director of the Tow Center for Digital Journalism at Columbia University, told the Post that Musk “has taken the bar and put it on the floor” when it comes to trust and safety. Not to be outdone, Meta has reportedly started offering users the ability to opt out of Facebook’s fact-checking program, which means false content would no longer have a warning label. And YouTube announced in June that it would no longer remove videos claiming that the 2020 presidential election was stolen. The Google-owned video platform wrote in a blog post that while it wants to protect users, it also has a mandate to provide “a home for open discussion and debate.” While removing election-denying content might curb the spread of misinformation, the company said, it could also “curtail political speech without meaningfully reducing the risk of real-world harm.” Citing similar reasons, Meta and other platforms have reinstated Trump’s accounts after banning him following the January 6 insurrection.
In a report released last week, the Center for Democracy and Technology, a DC-based nonprofit, wrote that the platforms have become less communicative since the 2020 election, especially after the widespread layoffs, and in some cases have loosened safeguards against election misinformation to such an extent that they have “essentially capitulate[d] on the issue.” At Meta, for example, the Center said interviews with researchers indicated that Mark Zuckerberg, the CEO, at some point “stopped considering election integrity a top priority and stopped meeting with the elections team.” The New York Times reported in early 2023 that cuts of more than twelve thousand staff at Alphabet, Google’s parent company, meant that only a single person at YouTube was in charge of misinformation policy worldwide.
While all this has been going on, researchers who specialize in artificial intelligence say that the ubiquity of such tools threatens to increase the supply of misinformation dramatically. At least half a dozen online services using variations on software from OpenAI or open-source equivalents can produce convincing fake text, audio, and even video in a matter of minutes, including so-called “deepfakes” that mimic well-known public figures. And this kind of content is cheap to produce: last month, Wired talked to an engineer who built an AI-powered disinformation engine for four hundred dollars.
Earlier this month, the BBC wrote about how YouTube channels that use AI to make videos containing fake content are being recommended to children as “educational content.” The broadcaster found more than fifty channels spreading disinformation, including claims around the existence of electricity-producing pyramids and aliens. Sara Morrison, of Vox, has written about how “unbelievably realistic fake images could take over the internet” because “AI image generators like DALL-E and Midjourney are getting better and better at fooling us.” When Trump was charged in New York earlier this year, fake photos showing his arrest went viral; Trump even shared an AI-generated image of his own. (Ironically, some of the fake pictures were created by Eliot Higgins, the founder of the investigative journalism outfit Bellingcat, as a warning that such images are easy to create.) Bell wrote for The Guardian that “ChatGPT could be disastrous for truth in journalism” and create a “fake news frenzy.” Sam Gregory, program director of Witness, a human rights group with expertise in deepfakes, told Fast Company of an emerging combined risk of “deepfakes, virtual avatars, and automated speech generation,” which could produce large quantities of fake information quickly. The list goes on.
It should be noted that not everyone is as concerned about misinformation (or AI, for that matter) as these comments might suggest; in January, researchers from Sciences Po, a university in Paris, published a study saying that the problem is often overstated. (“Falsehoods do not spread faster than the truth,” they wrote, adding that “sheer volume of engagement should not be conflated with belief.”) And content moderation, and the government’s role in it in particular, raises some legitimately thorny issues around freedom of speech. But misinformation is a real problem, even if its extent is debatable, and hard questions are no excuse for political intimidation. We don’t want a world in which the people best equipped to fight misinformation, and to answer those questions, have either lost their jobs or are too scared to speak out for fear of a lawsuit.
Other notable stories:
- Last night saw the second Republican primary debate of the 2024 presidential election cycle, hosted by Dana Perino, of Fox News, Stuart Varney, of Fox Business, and Ilia Calderón, of Univision, at the Ronald Reagan Presidential Library in California. Trump was again absent; Semafor’s Max Tani reported ahead of time that ad prices were down from the first debate, reflecting, as one buyer put it, that the primary has become a “snoozer.” Afterward, the moderators got some tough reviews; Vox wrote that it was as if they were “trying to conduct ‘gotcha’ interviews with seven people simultaneously.”
- Travis King, the US Army private who fled into North Korea in July, was on his way back to the US last night after North Korean officials decided to expel him and the Swedish government mediated his release, via China. As we wrote after King absconded, it seemed possible that North Korea would flaunt him as a propaganda asset (it cast past US military defectors in movies vilifying the West), but the AP’s Kim Tong-hyung writes that North Korean officials likely concluded that King’s propaganda value was limited.
- In the UK, the right-wing TV network GB News is in crisis after one of its hosts, Laurence Fox, made derogatory on-air comments about a female journalist. Dan Wootton, another host, smirked at Fox’s remarks but later apologized and condemned Fox, leading Fox to publicly turn on Wootton; both men have now been suspended. Wootton recently faced separate allegations of sexual misconduct. (He described them as a smear campaign.)
- Rodrigo Abd, a photographer with the Associated Press, toured Afghanistan and took pictures with a wooden box camera, a onetime fixture of the country’s streets that has become obsolete in the digital age. During the Taliban’s last stint in power, officials strictly banned photography of humans and animals, but they have allowed some photography recently, and Abd found that his old-fashioned device disarmed his subjects.
- And Linda Yaccarino, the CEO of X, sat for a tense interview with CNBC’s Julia Boorstin at the Code Conference in California. Questioned about earlier claims by Yoel Roth, a former X staffer, that the platform is failing to stop harassment, Yaccarino said that X had previously been “creeping” toward censorship. And when she asked attendees, “Who wouldn’t want Elon Musk sitting by their side running product?” many of them laughed.
ICYMI: Rick Perlstein on Hunter Biden and the echoes of Jimmy Carter’s brother