The Media Today

Facebook pulls back the curtain on what kinds of speech it tolerates

April 25, 2018
 

Last year, The Guardian published leaked documents believed to be Facebook’s internal rules on how and when to moderate inappropriate content. The list of permitted content caused significant controversy because it included threats of violence toward women, children, and various ethnic groups, which Facebook said should be allowed to remain as long as the threats were not too specific. Harassment of white men, however, was not tolerated, because they were defined as a “protected group.” The guidelines sparked an ongoing debate over how Facebook decides which kinds of speech it will censor and which it won’t.

On Tuesday, the giant social network finally gave in to pressure from critics and published the community standards guidelines it says it uses to make most of its content decisions, with categories ranging from “violence and criminal behavior” to “integrity and authenticity.” The company said in a post introducing the rules that it generally errs on the side of allowing content, even when some find it objectionable, “unless removing that content can prevent a specific harm.” Facebook also said it often allows content that technically violates its standards “if we feel that it is newsworthy, significant, or important to the public interest.”

Some of the company’s rules are fairly straightforward, such as not allowing people to sell drugs or firearms. But much of what the social network is trying to do amounts to nailing Jell-O to the wall, especially when it comes to censoring speech around violence. The blog post says that Facebook considers “the language, context and details” to determine when content represents a “credible threat to public or personal safety.” But drawing those kinds of sharp lines is incredibly difficult, especially given the billions of posts Facebook receives every day, which helps explain why the company draws so much criticism from users.

In an attempt to address some of those complaints, Facebook also announced an official appeals process that will allow users to protest the removal of content or the blocking of accounts. Until now, anyone who had content removed had to try to reach a support person via email to a general Facebook account or through posts on social media. Facebook says the new process will let users request a review of the decision and get a response within 24 hours. Appeals will initially be available for content involving nudity, hate speech, and graphic violence, with other content types added later.

Facebook’s new transparency around such issues is admirable, but troubling questions remain about how much power the social network has over the speech and online behavior of billions of people. The First Amendment technically applies only to government action, but when an entity of Facebook’s size and influence decides to ban or censor content, the decision has nearly as much impact.

Here are some links to more information on Facebook’s latest moves:

  • Facebook has experts: Monika Bickert, Facebook’s VP of Global Policy Management, describes how community standards decisions are made: “We have people in 11 offices around the world, including subject matter experts on issues such as hate speech, child safety and terrorism. Many of us have worked on the issues of expression and safety long before coming to Facebook.” Bickert says that, as a criminal prosecutor, she worked on everything from child safety to counterterrorism, and other members of the team include a former rape crisis counselor, a human-rights lawyer, and a hate-speech expert.
  • Not enough: Malkia Cyril, a Black Lives Matter activist and executive director of the Center for Media Justice, was part of a group of civil rights organizations that pushed Facebook to make its moderation system less racially biased. She tells The Washington Post that the company’s latest moves don’t go far enough in dealing with white supremacy and hate. “This is just a drop in the bucket,” she says. “What’s needed now is an independent audit to ensure that the basic civil rights of users are protected.”
  • Protected but still sensitive: As Wired magazine points out, Facebook doesn’t have to remove any of the offensive content on its network if it doesn’t want to, thanks to Section 230 of the Communications Decency Act, which shields online services such as Google, Twitter, and Facebook from most legal liability for content posted by their users. But all of the major platforms have been trying to boost their efforts at removing the worst of the material they host, partly in an effort to stave off potential regulation.
  • The advisory team: As part of Facebook’s attempts to be more transparent, the company allowed a number of journalists to sit in on one of the social network’s weekly community standards meetings on April 17, in which the team of advisers decides what content meets the guidelines. HuffPost wrote that the attendees included people “who specialize in public policy, legal matters, product development and communication,” and said there was little mention of what other large platforms such as Google do when it comes to removing offensive or disturbing content.

Other notable stories:

  • After a number of anti-gay posts were found on the blog she mothballed last year following similar allegations, MSNBC host Joy Reid claimed the posts were the result of hackers infiltrating the Internet Archive, the only place her blog is still available (the Archive is an ongoing effort to preserve copies of as many websites as possible). The Archive, however, says that after investigating the claims, it could find no evidence the blog was tampered with.
  • CJR’s Alexandria Neason writes about a group of high school students who were frustrated by the limitations of the Freedom of Information Act and decided to write their own bill, known as the Civil Rights Cold Case Records Collection Act, to make it easier to get documents related to civil rights-era crimes from the FBI and other agencies without having them tied up in red tape or redacted to the point where they’re unusable.
  • Google is rolling out its new subscription tool, which it calls “Subscribe with Google,” and its first launch partner is the McClatchy newspaper chain. The search giant says the tool lets people subscribe to newspapers and other online publications with just two clicks, after which Google highlights content from those publications in search results for subscribers. McClatchy plans to implement the tool on all 30 of its local newspaper sites, according to Digiday.
  • In a fundraising email sent to his supporters, Donald Trump said he won’t be attending the annual White House Correspondents’ Dinner because he doesn’t want to be “stuck in a room with a bunch of fake news liberals who hate me.” Instead, the president said he will be holding a rally in Michigan “to spend my evening with my favorite deplorables who love our movement and love America.”
  • In a Rolling Stone feature, Ben Wofford writes about how Sinclair Broadcast Group is trying to build what amounts to a national network of hundreds of conservative-leaning, Fox News-style TV stations in small and medium-sized towns across the country, and how the Trump administration is making it easier for the company to do so. “Everything the FCC has done is custom-built for the business plan of one company, and that’s Sinclair,” one FCC commissioner told the magazine.

ICYMI: Sean Hannity in the spotlight

Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection of media and technology since the earliest days of the commercial internet. His writing has been published in The Washington Post and the Financial Times, as well as by Reuters and Bloomberg.