Can AI be sued for defamation? 

March 18, 2024
Around the time that ChatGPT began to hallucinate, Eugene Volokh had a moment of clarity. When the noted First Amendment scholar and UCLA Law professor ran some queries about newsworthy individuals in March 2023, ChatGPT generated answers that were both false and defamatory. Specifically, ChatGPT claimed that a public figure, whom Volokh identifies only as R.R., had pleaded guilty to wire fraud, a false allegation that it backed up with an invented Reuters quote. Under a second series of prompts, ChatGPT falsely claimed that several law professors had been accused of sexual harassment. “I started wondering: What are the legal consequences?” Volokh told me. 

Volokh convened a group of legal experts for a virtual symposium on artificial intelligence and free speech, and in August he published an article titled “Liability for AI Output.” His conclusion: ChatGPT, or any AI content provider, is legally liable for defamatory content if certain conditions are met. While the issues are far from settled, the prospect of creating civil liability for AI-generated content could have broad implications for how people get their news. 

The game changer is the emerging consensus, which Volokh shares, that Section 230 of the Communications Decency Act of 1996 does not apply to AI. Section 230 currently provides legal immunity for hosting content generated by others. Because of 230, you can’t sue Google for serving a link that falsely accuses you of murder, and you can’t sue Facebook or Twitter/X if that link circulates on their platforms. 

But AI-generated content is different, because it is produced by the programs themselves. Its answers are assembled from words, ideas, and concepts produced elsewhere, but the same is true of almost all writing. “If the software is making it up, that’s not protected,” Volokh says. In other words, if ChatGPT falsely accused you of murder, you might have a case. 

There are two possible frameworks for legal action, according to Volokh. The first is the actual-malice standard first articulated by the US Supreme Court in 1964 in New York Times v. Sullivan. Sullivan protects an enormous range of speech about public figures. But it does not protect statements published with “reckless disregard for the truth.” When a human being is generating the material, it’s sometimes possible to prove that they knew the information was false and published it anyway. That’s not the case with AI content, which is generated by a machine. However, if an AI company were alerted that its program was generating specific false and libelous content and took no action, then it would be acting with reckless disregard for the truth, Volokh believes. That would open the door for legal action. 

Separately, Volokh argues, AI companies may be liable for negligence if there are flaws in the product design that cause it to generate defamatory content. That would particularly be the case for private individuals who suffer demonstrable harm, such as loss of income or employment. 

The theories may be tested in court. Several libel claims have already been filed against AI companies, including one by technologist Jeffery Battle, who is suing Microsoft in Maryland because a Bing search using ChatGPT confused him with Jeffrey Battle, a convicted terrorist. 

RonNell Andersen Jones, a law professor at the University of Utah and a senior visiting research fellow at the Knight First Amendment Institute at Columbia, agrees that there is a path to creating legal liability for AI companies. But she sees the challenge as shifting the legal focus onto the mental state of AI creators. “We will surely find a way to impose liability for the reputational harms that come from AI-generated libel,” Andersen Jones told me. “The fault structure within our current defamation liability regime entirely presupposes a real human speaker with a real human state of mind. It just isn’t an immediately neat fit for this new technological reality. Judges and legislators have a big task ahead of them as we work to map old principles onto a new communications landscape.”

There is a risk, of course, that creating liability for AI content will promote a kind of automated self-censorship. For example, Gemini, the new Google AI product, won’t answer questions about elections or political candidates. (Its response: “I’m still learning how to answer this question. In the meantime, try Google Search.”) ChatGPT, meanwhile, caveats answers to political questions with qualifications and bland language, noting that such answers are subjective “and depend on individual perspectives and interpretations.”

But introducing greater liability for AI also presents the possibility of a fundamental recalibration. The goal of the Supreme Court in the 1964 Sullivan decision, articulated by Justice William Brennan, was to ensure that public debate is “uninhibited, robust, and wide-open,” but with legal consequences for those who deliberately publish false information. The protections offered by Section 230 have been essential to innovation and growth in the digital sphere, but the lack of legal guardrails has at times produced a kind of information chaos that is antithetical to informed public debate. 

Will mediation carried out by large language models with some level of legal accountability produce better outcomes? Like everything about AI, it’s impossible to know at this juncture. But I hope we can create liability for AI sooner rather than later. It’s not just a question of fairness. Legal liability for content moderators, with all the necessary carve-outs and qualifications, is essential for informed democratic debate. And it’s been missing for too long.

Joel Simon is the founding director of the Journalism Protection Initiative at the Craig Newmark Graduate School of Journalism.