Facebook is often blamed for helping to put Donald Trump in the White House, either by giving Russian trolls a platform for disinformation and social engineering during the 2016 election, or by allowing data-harvesting firm Cambridge Analytica to use illicitly acquired information to target American voters. But as serious as those allegations are, they pale in comparison to the kind of damage Facebook has wreaked in dozens of other countries.
In Myanmar, the company gave the military junta and anti-Muslim organizations a platform to spread hate and prejudice against the Rohingya ethnic minority, helping to fuel a genocidal campaign that began in 2017 and has left hundreds of thousands dead or homeless. In India, Facebook-owned WhatsApp has been implicated in a wave of violence, including multiple incidents in which mobs of attackers have beaten innocent men and women to death.
The company has a scale, reach, and influence greater than any other corporation, and certainly any media entity, has ever had. Yet Facebook CEO Mark Zuckerberg has admitted in interviews that for the first 10 years of the company’s life, no one thought about the bad things that might arise from connecting billions of people.
In countries like Cambodia and the Philippines, the social network has effectively become the internet, having signed deals with telecom companies to have its app installed by default and offering access to a range of online services through its “Free Basics” program. “They dominate public space in countries where they operate, and people who engage in these spaces are like subjects in the Facebook empire,” says David Kaye, a law professor at the University of California, Irvine, and the UN’s special rapporteur on free expression. (Facebook, which previously operated Free Basics in Myanmar, ended the program there after the violence.)
So what’s to be done? Thus far, Facebook’s response to criticism about disinformation or hate speech has been to promise better programming or artificial intelligence, as though such tools could account for, or correct, hardwired human tendencies. The social network says it is taking a number of steps to try to solve the disinformation problem, including kicking members of Myanmar’s military off its platform and throttling WhatsApp by restricting the number of chats to which a message can be forwarded. The company has also said it is committed to removing offensive content as quickly as possible, and it routinely brags about the number of accounts and pages it has taken down for showing signs of what is euphemistically called “inauthentic behavior.”
“We are committed to ensuring the accuracy of information on Facebook, our community wants this and so do we,” Anjali Kapoor, director of news partnerships for the Asia Pacific region, wrote in an email to CJR. The company now has a policy that allows it to remove misinformation that it believes “could cause or contribute to offline harm,” and says it is working with a wide range of partners in countries where misinformation and hate speech have become significant problems, including Myanmar and the Philippines.
Would a kind of Facebook “Supreme Court” help fix some of the issues the social network has had? In a blog post in November of last year, Zuckerberg said he was considering creating exactly that: an independent entity made up of representatives from multiple fields, who would hear appeals and rule on decisions around removing content.
“The past two years have shown that without sufficient safeguards, people will misuse [our] tools to interfere in elections, spread misinformation, and incite violence,” the Facebook CEO wrote. “One of the most painful lessons I’ve learned is that when you connect two billion people, you will see all the beauty and ugliness of humanity.” So how does a platform like Facebook give everyone a voice but still keep people safe from harm? “What should be the limits to what people can express?” he asked. “Who should decide these policies and make enforcement decisions?” Over the next year, Zuckerberg said, the company would create an independent body, one whose decisions would be “transparent and binding.”
Many critics expressed skepticism about the idea, since the social network has a history of promising to change its behavior only to backslide. But some were guardedly optimistic: in an essay for The New York Times, law professors Kate Klonick and Thomas Kadri wrote that the idea was promising but warned that Facebook would have to ensure the new body had access to the information it would need to make decisions—something the platform has had trouble providing. Klonick and Kadri also cautioned that the entity would have to be truly independent.
Some have proposed that Facebook must be regulated so that it can’t do as much damage. Even Zuckerberg admitted in the November blog post that the company can’t solve hate speech and disinformation by itself and asked for regulators to help, saying he doesn’t believe “individual companies can or should be handling so many of these issues.” (Some believe Facebook secretly wants regulation as a way of cementing its market dominance, since it would create barriers that smaller or less wealthy companies might not be able to clear.)
But who would do the regulating? Would it be the US, since Facebook is an American company? That would be difficult, and not just because the First Amendment protects speech of almost all kinds. Tech platforms in particular are protected from liability for the content they host by Section 230 of the Communications Decency Act. In some ways, this clause—which was introduced in 1996, years before Facebook or Twitter even existed—provides more protection than does the First Amendment; it protects the platforms from responsibility for any kind of speech by their users, even speech the First Amendment doesn’t protect.
Section 230 came into being as a result of two defamation cases involving online networks in the early days of the internet, one against CompuServe and one against Prodigy. Some US legislators—and early tech companies—were concerned that if anyone could sue over anything that was posted to an online forum, many such services would go out of business and the US would fall behind in the development of the Web. So Ron Wyden, then a member of the House of Representatives and now a senator, cosponsored the provision as a way of protecting them.
In the wake of the recent criticism aimed at Facebook, some lawmakers—including Nancy Pelosi, the Speaker of the House—have talked publicly about modifying Section 230 to reduce some of the protections it provides. But that would require rewriting the legislation, which would almost certainly provoke a bitter fight with supporters of the clause, including the major tech platforms themselves. It would be even harder to change the First Amendment, of course. “The protections of the American Constitution and the demands of countries and consumers around the world are on a collision course,” Jeffrey Rosen of the National Constitution Center told the Times recently.
Into this regulatory vacuum have stepped a number of foreign governments, many of which have passed laws in an attempt to control the spread of misinformation and hate speech in their countries via platforms like Facebook. One of the first was Germany, which introduced a law known as the Netzwerkdurchsetzungsgesetz, or Network Enforcement Act, in 2017. It applies to commercial social networks with more than 2 million users and requires them to delete clearly illegal content, including hate speech and neo-Nazi propaganda, within 24 hours of being notified. Platforms can be fined as much as 50 million euros for failing to comply.
The first applications of the German law didn’t exactly inspire confidence about the prospects of such legislation—in one case, Twitter removed the account of a satirical magazine, and Facebook took down posts by a street artist whose work challenges the Far Right. Nevertheless, the UK is reportedly considering a similar measure, as are a number of other EU countries. Farther afield, Singapore recently introduced a law aimed at criminalizing “fake news,” which critics say is likely to be used primarily to target enemies of the government there, and Russia has introduced legislation that makes it a crime to distribute fake news—or to insult the state. Similar laws exist in Cambodia, Egypt, and Thailand.
In most cases, the countries that have witnessed the most pernicious effects of online hate speech—such as Myanmar—do not have this kind of legislation, perhaps in part because their media is already tightly controlled by the government. When there are outbreaks of violence, the authorities usually either shut down the internet completely or turn off access to social networks like Facebook—as they did in Sri Lanka in April, when terrorist bombings killed more than 300. (Critics argue that doing so can actually wind up exacerbating the problem, because it also removes sources of factual information.)
Free-speech advocates warn that strict laws like Germany’s encourage the platforms to err on the side of removal, and therefore threaten a lot of legitimate, non-hateful speech, since companies face significant penalties for failing to remove content but suffer no repercussions at all for taking down too much. “We must assume that a lot of content is being removed that could be a freedom of speech violation,” Alexander Fanta, of the Berlin-based internet freedom group Netzpolitik, told BuzzFeed about the German law. Organizations such as Reporters Without Borders and Human Rights Watch have also criticized this kind of approach, with the latter saying such laws “turn private companies into overzealous censors.”
Is there another way out of this dilemma, a way to regulate content on platforms like Facebook without infringing too much on freedom of speech? Kaye points to a group called Article 19 (the name refers to the article of the Universal Declaration of Human Rights that protects freedom of expression), which has been working on the idea of a social-media council—an independent entity that could be empowered to make decisions about hate speech and other content.
The idea, Kaye says, is “a coregulatory framework that involves private industry and civil society and can be chartered by government but not controlled by government. You could imagine an industry-wide regulatory framework, where companies and users appoint people to evaluate the hard questions, in concert with human rights law and global norms. It would be a mechanism to create oversight that doesn’t involve government deciding the rules in terms of content, since [policing] fake news can very easily veer into censoring speech.”
But again, the question is, Who would sit on this council and make the rules? Would it be a United Nations–style body? In that case, would Russia and China be allowed to have representatives on the panel? What about Turkey? Or Syria? That could become a hornet’s nest for regulators, given the approach many of these countries take to controlling information in their own jurisdictions.
If the problem of Facebook serving as a public platform for global violence goes unsolved, the company may become even more inclined to turn inward. In an announcement in March, Zuckerberg said he already sees the future of Facebook as centering on private, encrypted, ephemeral communication, much like what WhatsApp offers.
The risk in taking Facebook communication private, according to experts like Renee DiResta, a disinformation researcher who cowrote the Senate report on Russian trolling activity during the run-up to the 2016 election, is that hate speech will become significantly harder to detect. Facebook could still tell which messages appear to be going viral by looking at metadata (how many people are sharing what, and where), but it would be unable to see the content of the messages themselves. DiResta warns that in many cases, misinformation contained in private messages can be even more persuasive because it comes directly from friends and family.
The size and reach of Facebook, with more than 1.5 billion daily active users, make it more like a nation than a company. And that suggests it will take the efforts of multiple countries to find a way to regulate the kind of behavior the social network says it is committed to curbing, but is effectively incentivizing. The alternative is too depressing to contemplate: letting the company continue to do whatever it wants, safe in the knowledge that all it has to do is apologize profusely after something terrible happens.
Mathew Ingram was CJR’s longtime chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.