The Media Today

Algorithm accountability is easier said than done

December 3, 2021
Frances Haugen, a former Facebook employee, arrives for the House Energy and Commerce Subcommittee on Communications and Technology hearing titled “Holding Big Tech Accountable: Targeted Reforms to Tech’s Legal Immunity,” in the Rayburn Building on Wednesday, December 1, 2021. (Photo by Tom Williams/CQ Roll Call)


Over the past several years, Congress has held a seemingly never-ending series of hearings concerning “Big Tech,” the handful of companies that shape much of our online lives: Facebook, Twitter, and Google. Congressional committees have looked into whether the platforms allowed foreign agents to influence the 2016 election, whether their algorithms suppress certain kinds of speech, and whether their products harm teenage girls; in many cases, the hearings have also been a forum for grandstanding. This week saw the latest in the series, a hearing held by the House Energy and Commerce Committee’s Subcommittee on Communications and Technology, called “Holding Big Tech Accountable: Targeted Reforms to Tech’s Legal Immunity.” The subject of the hearing was a piece of legislation that has been an ace in the hole for the platforms in all of their other congressional appearances: Section 230 of the Communications Decency Act.

Section 230 protects electronic service providers from liability for the content posted by their users—even if that content is harmful, hateful, or misleading. For the past few years, pressure has built within Washington for lawmakers to somehow find a way around it. That pressure came to a head in 2020, when then-president Donald Trump, who had expressed concerns over alleged censorship of conservative speech on social media, signed an executive order asking the Federal Communications Commission to do something about Section 230 (even though the agency has no clear legal authority to do so). Before he became president, Joe Biden said that he believed Section 230 “needs to be revoked, immediately”; since he took office, legislators have put forward a number of proposals in an attempt to do that. A recent proposal from Democratic Senator Amy Klobuchar would carve out an exception for medical misinformation during a health crisis, making the platforms liable for distributing anything the government defines as untrue.

Republican members of Congress have introduced their own proposals for a host of other Section 230 carve-outs, aimed at forcing platforms to keep certain kinds of content up (mostly conservative speech) while requiring them to remove other kinds, such as cyberbullying. This week’s hearing was held to consider a number of other pieces of legislation aimed at weakening or even dismantling Section 230. They include one supported by four of the top Democratic members of the Energy and Commerce Committee, called the “Protecting Americans From Dangerous Algorithms Act,” which would open the platforms to lawsuits for making personalized recommendations that cause users harm. At least some of the hearing was taken up—as many previous ones have been—with statements from Republican members about how platforms like Facebook and Twitter allegedly censor conservative content, a claim that studies have shown is not true.

Frances Haugen, the former Facebook staffer turned whistleblower who leaked thousands of documents to the Wall Street Journal and then to a consortium of other media outlets, has helped fuel the desire to hold the platforms to account. During her testimony this week, she took time to remind the committee that well-meaning efforts to do so can have unintended side effects. The 2018 law known as FOSTA-SESTA, for example, was designed to prevent sex trafficking, but Haugen noted that it also made things more difficult for sex workers and other vulnerable people. “I encourage you to talk to human rights advocates who can help provide context on how the last reform of 230 had dramatic impacts on the safety of some of the most vulnerable people in our society but has been rarely used for its original purpose,” she said, according to Mashable.

This message was echoed by others who testified at the hearing (the first of two; the second is scheduled for next week). “It’s irresponsible and unconscionable for lawmakers to rush toward further changes to Section 230 while actively ignoring human rights experts and the communities that were most impacted by the last major change to Section 230,” Evan Greer, director of Fight for the Future, told the committee. “The last misguided legislation that changed Section 230 got people killed. Congress needs to do its due diligence and legislate responsibly. Lives are at stake.” According to a recent review of the legislation by human-rights experts, FOSTA-SESTA has had “a chilling effect on free speech, has created dangerous working conditions for sex-workers, and has made it more difficult for police to find trafficked individuals.”

A number of critics of the more recent legislative attempts to do an end-run around Section 230 have also pointed to the difficulty of targeting the things that algorithms do, since platforms use a multitude of algorithms for different purposes—to recommend content to users, for instance, but also to sort and filter it—and defining which ones are bad, and why, is not easy. “I agree in principle that there should be liability, but I don’t think we’ve found the right set of terms to describe the processes we’re concerned about,” Jonathan Stray, a visiting scholar at the Berkeley Center for Human-Compatible AI, told the subcommittee. “What’s amplification, what’s enhancement, what’s personalization, what’s recommendation?” If scientists and tech scholars have difficulty answering these questions, it seems unlikely that Congress will.
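Stray’s question is easier to appreciate with a concrete illustration. Below is a minimal, hypothetical sketch (written in Python for this piece, not drawn from any platform’s actual systems; the Post fields and build_feed function are invented) showing how the line between “sorting,” “personalization,” and “amplification” can come down to a couple of flags on the same scoring routine:

```python
# Illustrative only: the same few lines act as "sorting," "amplification,"
# or "personalized recommendation" depending on which flags are enabled.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float    # seconds since epoch
    engagement: int     # likes and shares so far
    topic_match: float  # 0-1 similarity to this user's interests

def build_feed(posts, personalize=False, boost_engagement=False, limit=10):
    """Return a ranked feed of up to `limit` posts."""
    def score(p: Post) -> float:
        s = p.timestamp              # chronology alone: plain "sorting"
        if boost_engagement:
            s += p.engagement * 60   # popular posts rise: "amplification"?
        if personalize:
            s += p.topic_match * 3600  # user-specific boost: "recommendation"?
        return s
    return sorted(posts, key=score, reverse=True)[:limit]
```

A statute that attaches liability to “personalized recommendations” would have to decide whether flipping either flag, or both, crosses the legal line, which is precisely the definitional problem Stray describes.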


Here’s more on Section 230 and the platforms:

  • Who’s on First: Using CJR’s Galley platform, I held a series of discussions about Section 230 earlier this year with a group of experts in law and technology, including Jeff Kosseff, a law professor at the Naval Academy and author of a history of Section 230; Mike Masnick, who runs Techdirt and co-founded the Copia Institute, a technology think tank; Mary Anne Franks, a law professor at the University of Miami; and Eric Goldman, a law professor at Santa Clara University. “To the extent that people want to force social media companies to leave certain speech up, or to boost certain content,” said Franks, “their problem isn’t Section 230, it’s the First Amendment.”
  • Do no harm: When it comes to Section 230 reform, “first, policymakers should do no harm,” Cameron Kerry, a former Obama administration official, said in his remarks to an April workshop held by the National Academies of Sciences, Engineering, and Medicine’s Committee on Science, Technology, and Law. “Ill-conceived changes to Section 230 actually could break the internet,” he said. “Many proposed solutions—such as mandating content moderation, imposing common carrier obligations, or outright repeal—present potential unintended consequences, including diminishing freedom of expression.”
  • The road to nuance: Daphne Keller, a former associate counsel at Google who directs the Program on Platform Regulation at Stanford’s Cyber Policy Center, wrote in a paper published by the Knight First Amendment Institute that the desire to regulate recommendation or amplification algorithms is understandable, but that sound laws doing so are still a long way off. “Some versions of amplification law would be flatly unconstitutional in the US,” she writes. “Others might have a narrow path to constitutionality, but would require a lot more work than anyone has put into them so far. Perhaps after doing that work, we will arrive at wise and nuanced laws regulating amplification. For now, I am largely a skeptic.”

 

Other notable stories:

  • The St. Louis Post-Dispatch reports that, prior to blaming one of its reporters for “hacking” its website by decoding some HTML, Missouri’s Department of Elementary and Secondary Education “was preparing to thank the newspaper for discovering a significant data vulnerability, according to records obtained by the Post-Dispatch through a Sunshine Law request.” A press release expressing gratitude toward the newspaper was prepared, the paper reported, but the next day, “the Office of Administration issued a news release calling the Post-Dispatch journalist a ‘hacker.’” State police later launched a criminal investigation into the incident, which the paper said is still ongoing.
  • On Thursday, two Georgia election workers who were targeted by a right-wing campaign claiming they manipulated ballots filed a defamation lawsuit against The Gateway Pundit, a right-wing news site, the New York Times reports. “The suit was filed by Ruby Freeman and her daughter, Shaye Moss, both of whom processed ballots in Atlanta during the 2020 election for the Fulton County elections board,” the Times said. “It follows a series of defamation claims filed by elections equipment operators against conservative television operators such as Fox News, Newsmax and One America News.”
  • The union representing 61 members of BuzzFeed’s newsroom held a 24-hour virtual walkout on Thursday, as the company prepares to go public by merging with a special purpose acquisition company. “There is no future of BuzzFeed without the workers, no product for them to take public,” said Addy Baird, chair of BuzzFeed News’ union, according to a report in New York magazine. Meanwhile, BuzzFeed’s merger has not attracted as much investor interest as the company hoped, and is expected to raise less money than originally planned, according to Alex Weprin of the Hollywood Reporter.
  • Meta, formerly known as Facebook, published a year-end Adversarial Threat report in which the company describes how it found and removed six networks of accounts for what it calls “coordinated inauthentic behavior,” including operations in China, Palestine, Poland, and Belarus. The company also said that it is expanding a beta project in which it shares data from its CrowdTangle data-tracking unit with security researchers, which it hopes will make it easier to find similar behavior. Facebook has been criticized by researchers and journalists for not sharing enough data with outside experts.
  • The New York Post reports that Meredith laid off the staff of Shape magazine but then asked them to continue working as freelancers on a new magazine called Sweet July, run by Food Network star Ayesha Curry. “Employees of Shape magazine—for which Meredith shuttered print operations last month—had been pulling double duty working overtime on Curry’s lifestyle mag, but without receiving additional pay,” the paper says. A source told the Post: “They were laid off without notice, but it seemed the company forgot the team that had been fired was also working on Curry’s magazine.”
  • In a report for the Tow Center at Columbia’s School of Journalism, Jacob Nelson looked at the impact of social-media policies in newsrooms. “Journalists have learned that engaging with their audiences via social media platforms carries personal and professional risks—namely accusations of political bias that can lead to termination from their jobs, as well as trolling, doxing, and threats of physical violence,” Nelson wrote. “This report examines the extent to which newsroom managers help—or hinder—their journalists when it comes to navigating the risks and challenges of audience engagement via social media platforms.”
  • Meghan Markle won the latest round of her long-running lawsuit against Associated Newspapers Limited, the publishers of the Daily Mail, the Mail on Sunday, and Mail Online in Britain, when a judge dismissed an appeal by the company, the Daily Beast reported. “Meghan was suing ANL for invasion of privacy and violating her copyright after ANL published extensive sections of a ‘deeply personal’ hand-written letter she sent to her estranged father shortly after her wedding to Harry,” the Daily Beast said, adding that a judge earlier this year granted Markle a summary judgment, which meant “he had unilaterally decided there was absolutely no prospect of ANL succeeding.”
  • Canadian media companies expect Google and Facebook to start paying them as much as $100 million a year once the government passes legislation requiring the platforms to strike deals with publishers, the Press Gazette reported. The news bargaining code is expected to be similar to one that Australia passed, which forced the technology companies to license content or face compulsory arbitration and financial penalties. “Senior industry sources spoken to by Press Gazette expect the legislation to come into force by the summer or early autumn,” the magazine reported.

 


Mathew Ingram was CJR’s longtime chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.