Analysis

Section 230 heads to the Supreme Court

October 6, 2022
A general view of the U.S. Supreme Court, in Washington, D.C., on Wednesday, September 21, 2022. (Graeme Sloan/Sipa USA)(Sipa via AP Images)


For the past several years, critics across the political spectrum have argued that Section 230 of the Communications Decency Act of 1996 gives social media platforms such as Facebook, Twitter, and YouTube too much protection from legal liability for the content they host. Conservative critics argue, despite a lack of evidence, that Section 230 allows social media companies to censor conservative users and viewpoints without recourse, while liberal critics say the platforms use Section 230 as an excuse not to remove content they should be taking down, such as misinformation and hate speech. Before the 2020 election, Joe Biden said that he would abolish Section 230 if he became president, and that the clause "should be revoked immediately"; he has made similar statements since taking office.

This week, the Supreme Court announced that it would hear two cases that seek to chip away at Section 230's legal protections. At the core of one case is the claim that Google's YouTube service violated the federal Anti-Terrorism Act by recommending videos featuring the isis terrorist group, and that these videos helped lead to the death of Nohemi Gonzalez, a twenty-three-year-old US citizen who was killed in an isis attack in Paris in 2015. In the lawsuit, filed in 2016, Gonzalez's family claims that while Section 230 protects YouTube from liability for hosting such content, it doesn't protect the company from liability for promoting that content with its algorithms. The second case involves Twitter, which was also sued for violating the Anti-Terrorism Act; the family of Nawras Alassaf claimed isis-related content on Twitter contributed to his death in a terrorist attack in 2017.

In recent years, the Supreme Court has declined to hear similar cases, including, in March, a case in which a lower court found that Facebook was not liable for helping a man traffic a woman for sex. While Justice Clarence Thomas agreed with the decision not to hear that case, he also wrote that the court should consider the issue of "the proper scope of immunity" under Section 230. "Assuming Congress does not step in to clarify Section 230's scope, we should do so in an appropriate case," Thomas wrote. "It is hard to see why the protection that Section 230 grants publishers against being held strictly liable for third parties' content should protect Facebook from liability for its own 'acts and omissions.'"

Thomas has made similar comments in a number of other decisions. In 2020, the Supreme Court declined to hear a case in which Enigma Software argued that Malwarebytes, an internet security company, should be liable for calling Enigma's products malware. Although he agreed with that decision, Thomas wrote at length about what he described as courts construing Section 230 broadly to "confer sweeping immunity on some of the largest companies in the world." He also suggested he agreed with an opinion from a lower-court judge in a case in which Facebook was sued over terrorist content. The opinion said it "strains the English language to say that in targeting and recommending these writings to users…Facebook is acting as 'the publisher of information provided by another information content provider.'"

Jeff Kosseff, a cybersecurity law professor at the US Naval Academy and the author of a book on Section 230, told the Washington Post that, with the Supreme Court considering these questions, “the entire scope of Section 230 could be at stake.” The Post also noted that it will be the first time the court has considered whether there is a distinction between content that is hosted and content recommended by algorithms. Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, told the Post that such a division is actually a “false dichotomy,” and that the process of recommending content is one of the traditional editorial functions of a social media network. In that sense, he told the Post, “the question presented goes to the very heart of Section 230.”

While Section 230 gets most of the attention, it isn’t the only protection the platforms have. A feature on hate speech in the New York Times described Section 230 as the main reason why such speech exists online, but later added a correction clarifying that the First Amendment also protects online speech. Even if the Supreme Court decides Section 230 doesn’t protect the platforms when it comes to terrorist content, Facebook and Twitter could argue with some justification that the First Amendment does. “To the extent that people want to force social media companies to leave certain speech up, or to boost certain content, or ensure any individual’s continuing access to a platform, their problem isn’t Section 230,” Mary Anne Franks, a professor of law at the University of Miami, said during a discussion of Section 230 on CJR’s Galley platform last year. “It’s the First Amendment.”


This argument is at the heart of another case the Supreme Court was recently asked to hear, involving a Florida law designed to control how the platforms moderate content. The Eleventh Circuit Court of Appeals struck down the law in May as unconstitutional, ruling that moderation decisions are an exercise of the platforms' First Amendment rights. A similar law passed in Texas, however, was upheld by the Fifth Circuit last month in a decision that explicitly rejected the First Amendment defense. The Supreme Court now has the opportunity to decide the extent to which Section 230 and the First Amendment cover the platforms' moderation and content choices.

Here’s more on Section 230:

  • Free expression: In 2020, Jack Dorsey, then the CEO of Twitter, and Mark Zuckerberg, the CEO of Facebook, warned the Senate that curtailing the protections of Section 230 could harm free expression on the internet. Dorsey said it could “collapse how we communicate on the internet” and leave “only a small number of giant and well-funded” tech firms, while Zuckerberg said that, “without Section 230, platforms could potentially be held liable for everything people say” and could “face liability for doing even basic moderation, such as removing hate speech and harassment.”
  • Out of date: Michael Smith, professor of information technology at Carnegie Mellon University, and Marshall Van Alstyne, a business professor at Boston University, wrote in an essay for the Harvard Business Review last year that Section 230 needs to be updated because it was originally drafted “a quarter century ago during a long-gone age of naïve technological optimism and primitive technological capabilities,” and its protections are now “desperately out of date.” When you grant platforms complete immunity for the content that their users post, Smith and Van Alstyne argue, “you also reduce their incentives to proactively remove content causing social harm.”
  • Narrow path: Daphne Keller, a former associate counsel at Google who directs the Program on Platform Regulation at Stanford's Cyber Policy Center, wrote in a paper published by the Knight First Amendment Institute at Columbia University that the desire to regulate recommendation or amplification algorithms is understandable, but that workable rules are a long way off. "Some versions of amplification law would be flatly unconstitutional in the US," she writes. "Others might have a narrow path to constitutionality, but would require a lot more work than anyone has put into them so far."
  • Sowing seeds: In 2019, Eric Goldman argued that while Section 230 protects giant platforms such as Facebook and Twitter, it also sows the seeds of their eventual destruction by making it easier for startups to compete. “Due to Section 230’s immunity, online republishers of third-party content do not have to deploy industrial-grade content filtering or moderation systems, or hire lots of content moderation employees, before launching new startups,” Goldman says. “This lowers startup costs generally; in particular, it helps these new market entrants avoid making potentially wasted investments in content moderation before they understand their audience’s needs.”


Other notable stories:

  • Jean Damascène Mutuyimana, Niyodusenga Schadrack, and Jean Baptiste Nshimiyimana, three Rwandan journalists arrested by state authorities in October 2018 and held in pretrial detention for four years, have been acquitted. The journalists, who all worked for Iwacu TV, had been charged with “spreading false information with the intention of inciting violence and tarnishing the country’s image,” Al Jazeera reported. Yesterday, a court ruled that “there is no evidence to prove that their publication incited violence.” 
  • Earlier this week, the Taliban shut down two news websites in Afghanistan, reports the Committee to Protect Journalists. Both websites are operated by Afghan journalists living in exile. Hasht-e Subh has since resumed operations under a different name, while Zawia News has shifted its content to its parent company, Zawia Media.  
  • Facebook is ending Bulletin, its newsletter subscription service, after fifteen months in operation. According to the New York Times, executives told staff in July that they would shift resources away from the newsletter. The service aimed to compete with Substack by attracting both emerging and high-profile writers and helping them to build a following with Facebook’s publishing and legal support. Last year, Facebook said it committed $5 million to Bulletin’s local news writers and offered writers contracts extending into 2024.
  • A new report from the Government Accountability Office concludes that Latinos are underrepresented in the media industry and are more likely to work in service roles than in management. The GAO found that Latinos make up 12 percent of the US media workforce and only 4 percent of media management. 
  • In The New Yorker, Kevin Lozano writes about Mark Bergen’s Like, Comment, Subscribe, a book about the history and evolution of YouTube. While the platform has over two billion users and is one of the most popular sites among teenagers in particular, Lozano writes that the Google-owned service is often forgotten when reporters look at the flaws of social media giants such as Facebook and Twitter. Lozano argues that, despite its scandals, YouTube survives because it has become “too useful and too ubiquitous to fail.”
  • And LaFontaine Oliver, the current president and CEO of Baltimore’s NPR station, WYPR, has been named the next CEO of New York Public Radio, Gothamist reports. Oliver succeeds Goli Sheikholeslami, NYPR’s former CEO, who left nearly a year ago to lead Politico Media Group. Oliver will be the first Black person to serve as CEO of New York Public Radio.


Mathew Ingram was CJR’s longtime chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.