The Media Today

Section 230 gets its day in court

February 23, 2023
AP Photo/J. Scott Applewhite


For a law whose central clause contains just twenty-six words, Section 230 of the Communications Decency Act of 1996 has generated vast amounts of debate over the past few years. Conservative politicians say the law—which shields online services from liability for the content they host—allows social networks like Twitter and Facebook to censor right-wing voices, while liberals say Section 230 gives the social platforms an excuse not to remove offensive speech and disinformation. Donald Trump and Joe Biden have both spoken out against the law and promised to change it. This week, the Supreme Court is hearing oral arguments in two cases that could alter or even dismantle Section 230. It’s the first time the nation’s highest court has considered the fate of a law often credited with creating the modern internet.

On Tuesday, the court’s nine justices heard arguments in the first case, Gonzalez v. Google. The family of Nohemi Gonzalez, a US citizen who was killed in an ISIS attack in Paris in 2015, claims that YouTube violated the federal Anti-Terrorism Act by recommending videos featuring terrorist groups, and thereby helped cause Gonzalez’s death. On Wednesday, the court heard arguments in the second case, which also involves a terrorism-related death: in that case, the family of Nawras Alassaf, who was killed in a terrorist attack in 2017, claims that Twitter, Facebook, and YouTube recommended terrorism-related content and thus contributed to his death. After a lower court ruled that the companies could be liable, Twitter asked the Supreme Court to decide whether Section 230 applies to it.

The clause at the heart of Section 230 states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In practice, this has meant that services such as Twitter, Facebook, and YouTube are not held to be legally liable for things their users post, whether it’s links or videos or any other content (unless the content is illegal). The question before the Supreme Court is whether that protection extends to content that these services recommend, or promote to users via their algorithms. Section 230, the plaintiffs argue in Gonzalez, “does not contain specific language regarding recommendations, and does not provide a distinct legal standard governing recommendations.”

During their questions on Tuesday, the justices began to grapple with whether there is a way to hold the platforms accountable for recommended content, when the same kinds of algorithms are used to rank search results and other responses to user input. “From what I understand, it’s based upon what the algorithm suggests the user is interested in,” Justice Clarence Thomas said at one point. “Say you get interested in rice pilaf from Uzbekistan. You don’t want pilaf from some other place, say, Louisiana.” Recommendation algorithms, he suggested, are at the heart of how a search engine operates. So how does one make Google liable for one use but not the other?

John Bergmayer, legal director of Public Knowledge, told the Berkman Klein Center that algorithmic recommendations “fit the common law understanding of publication. There is no principled way to distinguish them from other platform activities that most people agree should be covered by 230. The attempt to distinguish search results from recommendations is legally and factually wrong.” If the Supreme Court comes down against the platforms, Bergmayer said, it could limit the usefulness or even viability of many services. The internet, he argued, “might become more of a broadcast medium, rather than a venue where people can make their views known and communicate with each other freely. And useful features of platforms may be shut down.”

Julia Angwin, a journalist and co-founder of The Markup, disagrees. She wrote in her inaugural column for the New York Times that while tech companies claim any limitation to Section 230 could “break the internet and crush free speech,” this isn’t necessarily true. “What’s needed is a law drawing a distinction between speech and conduct,” she said. Based on his comments about previous cases involving Section 230, Justice Thomas appears to be itching to try his hand at finding such a distinction. In a decision last March, he said that “assuming Congress does not step in to clarify Section 230’s scope, we should do so,” adding that he found it hard to see why the law should protect Facebook from liability for its own “acts and omissions.”


In a podcast discussion of the two cases being heard by the Supreme Court, Evelyn Douek—a professor of law at Stanford who specializes in online content—suggested that both cases seem like a stretch, because neither complaint identifies any specific content recommended by YouTube, Facebook, or Twitter that allegedly caused the deaths in question. Her guest, Daphne Keller, the director of platform regulation at the Stanford Cyber Policy Center, agreed. “I don’t even have a good theory about why they would choose such exceedingly convoluted cases,” Keller said. “Maybe it’s just that Justice Thomas had been champing at the bit for so long they finally felt they had to take something, and they didn’t realize what a mess of a case they were taking.”

Even if the Supreme Court decides that Section 230 doesn’t protect the platforms when it comes to terrorist content, that doesn’t mean platforms like Facebook and Twitter are out of options. Online speech experts say they could argue with some justification that the First Amendment protects them against legal liability for the work of their recommendation algorithms. “To the extent that people want to force social media companies to leave certain speech up, or to boost certain content, or ensure any individual’s continuing access to a platform, their problem isn’t Section 230, it’s the First Amendment,” Mary Anne Franks, a professor of law at the University of Miami, said during a conversation on CJR’s Galley discussion platform in 2021.

One problem with that theory, however, is that online platforms might not bother fighting such cases at all, given the difficulty of proving that their behavior is protected by the First Amendment. Instead, they may simply remove content indiscriminately, for fear that a court will find them liable. The consequences of this “could be catastrophic,” the Washington Post argues. “Platforms would likely abandon systems that suggest or prioritize information altogether, or just sanitize their services to avoid carrying anything close to objectionable.” The result, the Post editorial says, could be “a wasteland.”


Some news from the home front:
CJR is holding a forum to answer questions about the recent series by reporter Jeff Gerth on Russia and Trump. Here are the details:

A CJR Forum: The president and the press

It’s been a few weeks since CJR published a series by reporter Jeff Gerth critiquing the coverage of Russian attempts to intervene in the 2016 election and the subsequent Trump presidency. We knew at the time that the articles would elicit strong responses. But we also believe that CJR’s role is to air a range of views about the strengths, challenges, and failings of contemporary media. It is in that spirit that we are organizing this town hall. We will answer questions, respond to criticism, and explain our approach to these stories, applying to ourselves the same transparency and accountability that we seek from the institutions CJR covers. The content of the discussion will be guided entirely by the event’s outside, independent moderator. For more than 60 years, the Columbia Journalism Review has stood for clarity and integrity in news. We continue that tradition and invite you to participate in this discussion. —Jelani Cobb, dean, Columbia Journalism School

Who:

* Reporter Jeff Gerth

* CJR Editor and Publisher Kyle Pope

* Columbia Journalism School Dean Jelani Cobb

Moderated by Geeta Anand, dean, Berkeley Graduate School of Journalism

When:

Monday, February 27, 12:45 pm – 2 pm ET via link

RSVPs required. Questions for the moderator can be submitted in advance via this link.


Other notable stories:

  • A group of New York Times journalists sent a letter on Tuesday to the NewsGuild of New York criticizing a statement from Guild president Susan DeCarava, according to a report in Vanity Fair. In a previous letter, DeCarava defended the right of Times journalists to criticize the paper’s coverage of trans issues. “Factual, accurate journalism that is written, edited, and published in accordance with Times standards does not create a hostile workplace,” the letter from Times journalists reportedly said.
  • NPR announced that it will lay off about ten percent of its current workforce, or about 100 people, and eliminate most vacant positions, David Folkenflik of NPR reported Wednesday. John Lansing, NPR’s CEO, cited the erosion of advertising dollars, particularly for podcasts, and the tough financial outlook for the media industry.
  • Knight Foundation President Alberto Ibargüen announced a $5 million investment in Signal Akron, a new nonprofit news source, to help strengthen local news and civic information in northeast Ohio. Signal Akron will be part of Signal Ohio, which recently launched its first newsroom in Cleveland.
  • There are over 200 e-books in Amazon’s Kindle store listing the artificial intelligence software ChatGPT as an author or co-author, according to a report from Reuters, including “How to Write and Create Content Using ChatGPT,” and a poetry collection called “Echoes of the Universe.” Because many authors fail to disclose their use of the software, however, Reuters said it is nearly impossible to get a full accounting of how many e-books have been written by AI.
  • And the BBC removed a story about actor Will Ferrell from its website after discovering that the story was based on a tweet from a parody account pretending to be the actor, Deadline reported. The Twitter account had a blue check mark, which in the past would have indicated that Twitter had verified the account as belonging to Ferrell. But under Twitter’s new policies, anyone can pay a monthly fee and get a verified account.

ICYMI: How journalists do their work in Iran


Mathew Ingram was CJR’s longtime chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.