The Media Today

Facebook, abortion, and the future of data privacy

August 12, 2022
Meta Mirrors: International Journalism Festival, Hana Joy, 2022

Sign up for The Media Today, CJR’s daily newsletter.

This week, Meta, the parent company of Facebook, was widely criticized for providing police in Norfolk, Nebraska, with private messages between a mother and her seventeen-year-old daughter in which they discussed ending the girl’s pregnancy. According to the Lincoln Journal Star, police in Nebraska received an anonymous tip in April that the girl had suffered a miscarriage and then buried the remains. They charged the girl and her mother in early June with a felony—for “removing, concealing or abandoning a dead human body,” the paper reported—and two misdemeanors, for concealing evidence and making a false report. After those charges were filed, the police officer investigating the case obtained a court order that forced Facebook to produce the private message history between the mother and her daughter, and found evidence that they had ended the pregnancy using abortifacient pills. The mother was charged with two additional felonies: one for performing or attempting an abortion on a pregnancy at more than twenty weeks, which is illegal in Nebraska, and one for performing an abortion without a medical license.

Although the ending of the pregnancy and the court order both took place before Roe v. Wade was overturned, many argue the incident is still a sign of what might happen now that abortion is fully or partly banned in some states and may become illegal in others. The incident “shows in shocking detail how abortion could and will be prosecuted in the United States, and how tech companies will be enlisted by law enforcement to help prosecute their cases,” Vice wrote, in a story detailing the text messages. Meta responded to Vice with a statement reading, “Nothing in the valid warrants we received from local law enforcement in early June, prior to the Supreme Court decision, mentioned abortion. The warrants concerned charges related to a criminal investigation and court documents indicate that police at the time were investigating the case of a stillborn baby who was burned and buried, not a decision to have an abortion.” The statement went on to note that the warrants also included nondisclosure orders, “which prevented us from sharing any information about them,” adding that the orders “have now been lifted.”

At Platformer, his technology-focused newsletter, Casey Newton wrote that the consensus he saw emerging on Twitter and elsewhere following the incident was that Facebook was wrong for turning the private messages over to police. “But of course Facebook complied with law enforcement’s request,” he wrote. “All the company would have known at the time is that police were investigating a stillborn fetus, and on what basis could the company credibly reject that request?” Both Google and Facebook receive tens of thousands of requests every year from government bodies and law enforcement, and it seems unlikely that either platform would resist them all; as Newton writes, “there are costs to continuously flouting the government [and] you can bet that somewhere a Republican attorney general is salivating over a court battle that would put Facebook, abortion, and his name in the headlines.”

Related: What toilets say about the latest Trump news cycle

The larger question is what Facebook (or any of the other major social platforms) will do if there is a similar court order that does explicitly relate to an abortion. As Corynne McSherry, legal director at the Electronic Frontier Foundation, told CNBC in June, “If you create huge databases of information, what you’re also creating is sort of a honeypot for law enforcement.” Andy Stone, a Meta spokesperson, shared the company’s response to Vice’s reporting on Twitter but didn’t elaborate on how Meta might handle future requests in his thread; most of the major platforms have been similarly close-mouthed about the issue. After the Supreme Court overturned Roe v. Wade, CNN asked a range of social networks and platforms—including Meta, Amazon, Apple, Google, and Twitter, among others—how they plan to handle such requests for personal data, and received either no response, a “no comment,” or a simple restatement of company policy.

Newton, in an earlier edition of his newsletter, called abortion “tech’s next big reputational risk. If Google and its peers aren’t going to stop cooperating with law enforcement, they need to start collecting less data.” Albert Fox Cahn, a lawyer who is also executive director of the Surveillance Technology Oversight Project, told Bobby Allyn, a reporter for NPR, that Google “is increasingly the cornerstone of American policing.” Since 2017, Google has complied with between 81 and 83 percent of the more than fifty thousand law enforcement requests it receives every year. In its most recent transparency report, Meta said that it received close to sixty thousand requests for information in the US last year, and provided some information in more than 87 percent of them.


The US has no comprehensive national privacy law covering all personal data posted to social networks such as Facebook and Twitter. Rather, as the New York Times explained in a feature last year, privacy laws are a hodgepodge of different rules at the state and federal levels covering different sectors. Amie Stepanovich, an expert in cybersecurity and privacy law, told the Times that such laws “either look at specific types of data, like credit data or health information, or look at specific populations like children, and regulate within those realms.” A number of legislators have proposed that the US adopt a digital data code much like the European Union’s General Data Protection Regulation, but nothing has made it into law yet.

Here’s more on Facebook:

  • My body, my data: For the New York Times, Natasha Singer and Brian X. Chen recently detailed a few pieces of proposed legislation concerning the sharing of private information. The My Body, My Data Act “would prohibit companies and nonprofits from collecting, keeping, using or sharing a person’s reproductive or sexual health details without the person’s written consent,” Singer and Chen write. “Another bill, the Fourth Amendment Is Not for Sale Act, would prevent law enforcement and intelligence agencies from buying a person’s location records and other personal details from data brokers.”
  • My phone, my location: After Roe v. Wade was overturned, Google announced that it would voluntarily begin removing personal location data that might show someone visited sites such as counseling centers, domestic violence shelters, and abortion clinics. Other platforms have made no such commitments. And as Newton points out, “telecom companies like AT&T, Verizon, and T-Mobile collect and even sell sensitive user data, including the locations of the cellular towers that your smartphone pings as you move about the world. And they all (shamefully!) declined to comment about how their data collection will intersect with abortion prosecutions.”
  • My data, the FTC’s rules: On Thursday, the Federal Trade Commission started a process known as “rulemaking” around privacy and data sharing—the first step toward actual regulations on what companies regulated by the FTC can and can’t share. These measures are needed “to stop corporations and other commercial actors from abusing people’s personal data,” the FTC said in a news release. In June, the agency announced that it was considering the rulemaking process in order to “safeguard privacy and create protections against algorithmic decision-making that results in unlawful discrimination.”
  • My chat, my secrets: Meta has talked in the past about offering users the ability to encrypt Facebook Messenger content, so that neither the company nor law enforcement could see what is written there, even with a warrant. (Messages sent via WhatsApp, which is also owned by Meta, are already encrypted by default.) On Thursday, Meta said that it will begin testing end-to-end encryption as the default option for some users of Facebook Messenger on Android and iOS. A Facebook spokesperson told The Guardian the test is limited to a couple of hundred users for now, and that the decision to start rolling it out was “not a response to any law enforcement requests.”


Other notable stories:

  • ValueAct Capital Management, known as an active investor that likes to force change at the companies it invests in, has acquired a 7 percent stake in the New York Times Co., Bloomberg reported Thursday. The fund said it believed “the current valuation doesn’t reflect the company’s long-term growth prospects in almost any potential economic environment and that management has several opportunities to offset the macroeconomic headwinds that face the industry.”
  • Twitter posted an update on what it plans to do in order to curb misinformation during the US midterm elections. The company said that its civic integrity policy “covers the most common types of harmful misleading information about elections and civic events, such as: claims about how to participate in a civic process like how to vote, misleading content intended to intimidate or dissuade people from participating in the election, and misleading claims intended to undermine public confidence in an election—including false information about the outcome of the election.”
  • Pakistan’s interior minister, Rana Sanaullah, said he is filing sedition charges against Shahbaz Gill, an aide to Pakistan’s former prime minister Imran Khan, and ARY TV, a local media company, Reuters reported. Sanaullah said comments made by Gill and aired on ARY TV could incite mutiny among the country’s military. Police officials told Reuters that both Gill and Ammad Yousaf, head of news at ARY TV, have been arrested, and Pakistan’s media regulator said in a statement that it had ordered ARY News to be taken off air for airing “false, hateful, and seditious” content.
  • Ben Thompson, a technology and media analyst who writes a newsletter called Stratechery, interviewed Meredith Kopit Levien, the CEO of the New York Times, about the newspaper’s strategy in buying Wordle and The Athletic, and about its new goal of fifteen million subscribers by 2027. Asked what fears or concerns about the paper’s growth keep her up at night, Kopit Levien said, “I’m fifty-one years old, I run a public company, and I’m a mom. Everything keeps me up at night.”
  • For the Nieman Journalism Lab, James Anderson profiles the Kansas City Defender, a nonprofit news site for young Black audiences across the Midwest. “There is a lot about traditional journalism that we are strongly against,” Ryan Sorrell, the site’s only full-time staff member, said. “Learning how to balance our radical rejection of many journalistic norms with our commitment to truth, accuracy, and accountability to our community is something that will be a continual learning process for us.”
  • A new Pew Research Center survey of American teenagers ages thirteen to seventeen finds TikTok has rocketed in popularity since its North American debut several years ago, and is now a top social media platform for teens. Some 67 percent of teens say they use TikTok, with 16 percent saying they use it almost constantly. The share of teens who say they use Facebook, a dominant platform among teens in the Center’s 2014–15 survey, has plummeted from 71 percent then to 32 percent today.
  • Tim Starks writes for the Washington Post about how “ransomware” hackers, who cripple software and then ask to be paid to fix it, can use journalists to advance their goals. “A problem that a lot of reporters have privately wrestled with is, how do you report this which is important, without acting as a PR person for the ransomware groups [sic]?” Allan Liska, director of threat intelligence at cybersecurity firm Recorded Future, told Starks. Separating truth from fiction when it comes to the boasts of ransomware gangs is not an easy task, Starks writes, “as they’re prone to bravado, even as they have rung up high-profile victims and raked in billions.”

ICYMI: The margins of Alex Jones


Mathew Ingram was CJR’s longtime chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.