Analysis

How the law protects hate speech on social media

November 2, 2018
 


Robert Bowers, the man accused of killing 11 people at a Pittsburgh synagogue, apparently used his account on Gab, a social-media platform favored by white supremacists and neo-Nazis, to post about his hatred of Jews in the months before the attack. When that part of the story broke, Gab’s tech infrastructure began to collapse. PayPal banned the platform; Joyent, Gab’s hosting service, dropped it. Medium suspended Gab’s account. GoDaddy gave Gab 24 hours to move its domain to another provider, saying in a statement, “[We] investigated and discovered numerous instances of content on the site that both promotes and encourages violence against people.”

Gab went offline as its founder, the inglorious Andrew Torba, asserted the platform’s blamelessness while trashing journalists on Twitter:

https://twitter.com/getongab/status/1057039629457809408

As of this writing, Gab remains offline. Visitors are greeted by a message that says, in part:

No-platform us all you want. Ban us all you want. Smear us all you want. You can’t stop an idea. As we transition to a new hosting provider Gab will be inaccessible for a period of time. We are working around the clock to get Gab.com back online.

While the Gab story might seem exceptional because of the tragedy and drama surrounding it, it’s typical of the challenges that social-media companies are confronting as they police the content they host, particularly when that content involves hate speech. Facebook and Twitter have struggled, as have YouTube and Instagram.



As nongovernmental entities, the platforms are generally unconstrained by constitutional limits, including those imposed by the First Amendment. They are mostly free to develop and enforce their content rules and community guidelines as they please, and they are free to decide how to display and prioritize content using algorithms. Their terms of use, which effectively operate as a contract with users, empower the platforms to remove forbidden content, to suspend or deactivate user accounts, and otherwise to address content problems.

In these ways, social-media platforms act as arbiters of free expression, conducting a form of “private worldwide speech regulation” and developing a de facto jurisprudence. As the legal scholar Jeffrey Rosen put it, “[The] lawyers at Facebook and Google and Microsoft have more power over the future of…free expression than any king or president or Supreme Court justice.” Rebecca MacKinnon, the researcher and Internet freedom advocate, once wrote that big Internet companies are the “sovereigns of cyberspace.”

Still, the content policies and practices of social-media platforms are not completely untouched by the law. What does the law say about hate speech online?


The First Amendment provides broad protection to speech that demeans a person or group on the basis of race, ethnicity, gender, religion, age, disability, or similar grounds. At the same time, tort law can be used to redress defamation, intentional infliction of emotional distress, and privacy invasions; and criminal law can be used to redress threats, harassment, stalking, impersonation, extortion, solicitation, incitement, and computer crimes.

For hate speech to be punished, it generally must involve something more than the mere expression of hateful ideas. Coupled with threats or harassment, for instance, hate speech is punishable not because of its naked hatefulness but because it triggers a discrete legal claim. It’s important to recognize, too, that not all speech enjoys First Amendment protection. Certain types—threats, for example—can be regulated because they produce serious harms and make negligible contributions to the public discourse.

Section 230 of the Communications Decency Act, passed in 1996, offers social-media platforms significant protections. The law states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” It also states that intermediaries don’t lose their protection even if they moderate content. That means platforms such as Facebook, Twitter, and Gab are not liable for most of their user-generated content. (There are exceptions for certain criminal and intellectual property claims.)

The takeaway? Pure hate speech is constitutionally protected, and Facebook, Twitter, et al., are not legally responsible for user-posted content that creates liability for defamation, intentional infliction of emotional distress, invasion of privacy, and the like, even if the platform moderated some of it. The rationale is that it would be infeasible for the big platforms to review or screen all of the content they host. (Facebook alone has more than 2 billion monthly active users.) But these protections can reduce incentives for platforms to police hateful and other extreme or offensive content, and they can backstop a bad actor’s business model. Sites that traffic in salacious user-generated content rely on Section 230 to escape liability for that content while making money from it.

While platforms want to retain these protections, a serious reconsideration of their responsibility is underway. As Tarleton Gillespie wrote in June for Wired:

In the US, growing concerns about extremist content, harassment, cyberbullying, and the distribution of nonconsensual pornography (commonly known as “revenge porn”) have tested this commitment to Section 230. Many users, particularly women and racial minorities, are so fed up with the toxic culture of harassment and abuse that they believe platforms should be obligated to intervene. … These calls to hold platforms liable for specific kinds of abhorrent content or behavior are undercutting the once-sturdy safe harbor of Section 230.

In that respect, Gab certainly doesn’t help the platforms’ cause.



Jonathan Peters is CJR’s press freedom correspondent. He is a media law professor at the University of Georgia, with posts in the Grady College of Journalism and Mass Communication and the School of Law. Peters has blogged on free expression for the Harvard Law & Policy Review, and he has written for Esquire, The Atlantic, Sports Illustrated, Slate, The Nation, Wired, and PBS. Follow him on Twitter @jonathanwpeters.