The Media Today

The platforms and the challenges of the next election

August 18, 2022
Seen on the screen of a device in Sausalito, Calif., Facebook CEO Mark Zuckerberg announces their new name, Meta, during a virtual event on Thursday, Oct. 28, 2021. Zuckerberg talked up his latest passion -- creating a virtual reality "metaverse" for business, entertainment and meaningful social interactions. (AP Photo/Eric Risberg)


“It’s mid-August of an election year in America, which can only mean one thing,” wrote Sarah Roach, Nat Rubio-Licht, and Issie Lapowsky in Protocol’s “Source Code” newsletter yesterday. “It’s time for every social media company to announce how it plans to combat whatever fresh hell November has in store.” In recent weeks, the major social platforms have all released new statements about how they plan to handle any misinformation and abuse that might arrive between now and the November 8 midterms. Based on what Meta (the parent company of Facebook), Twitter, Google, and TikTok have said about those plans so far, the order of the day appears to be “stay the course.” None of the platforms appear to be making any dramatic departures from the way they handled the last election and its aftermath—and, depending on your perspective, that could be either a good thing or a bad thing.

Kurt Wagner and Alex Barinka wrote for Bloomberg that “after years of revising and updating its election strategy, Meta is pulling out a familiar playbook for the US midterms, sticking with many of the same tactics it used during the 2020 general election to handle political ads and fight misinformation. That largely means focusing on scrubbing misinformation about voting logistics and restricting any new political ads in the week prior to Election Day.” Nick Clegg, the head of global affairs at Meta and a former deputy prime minister of the United Kingdom, wrote on the company’s blog on Tuesday that its approach to the 2022 US midterms “is consistent with the policies and safeguards we had in place during the 2020 US presidential election,” and that Facebook has “hundreds of people” working to prevent misinformation and abuse. Clegg also told Politico that the company plans to stick to its plan to review Donald Trump’s ban in January 2023, even if Trump should declare his intention to run for president before then.

Unsurprisingly, not everyone is happy about Facebook’s decision to go forward with the same policies and practices it used in 2020. After Clegg posted the company’s plans on Twitter, NYU’s Center for Social Media and Politics responded, “By most accounts, Facebook’s 2020 election misinformation policy worked fairly well—until they disbanded the election integrity unit and slowed enforcement after Election Day. Let’s hope they don’t make the same mistake again.” Kayla Gogarty, deputy research director at Media Matters for America, said that she is “always skeptical of Facebook’s ad restrictions,” noting that, after 2020, “it banned ads about social issues, elections, and politics, but let the Daily Wire earn millions of impressions on ads that seemingly fit that criteria.”

ICYMI: Democracy and the Liz Cheney narrative

Facebook’s rules around political advertising are among the most controversial aspects of its policies related to the election. Few question the company’s decision to remove posts that mislead people about when or where to vote, or that call for election violence. Blocking political ads in the week prior to the election also seems fairly uncontroversial—although it has caused problems in the past, including when the Daily Wire, a right-wing site, was allowed to run ads despite the ban. But there are those who believe Facebook shouldn’t allow political advertising at all, and others who question why the company chooses not to fact-check political ads. Facebook says this policy is “grounded in Facebook’s fundamental belief in free expression,” but Yael Eisenstat, the former head of election integrity for Facebook, told PBS NewsHour that the company opted not to fact-check because “they needed to preserve their power with the incumbent, and so they put that priority over what many people in the company believed would actually protect our democracy.”

Unlike Facebook, neither Twitter nor TikTok allows political advertising, although for different reasons. TikTok, the Chinese-owned video app that has become one of the most popular social tools in the world, says that it bans political ads because its users love “the app’s lighthearted and irreverent feeling,” and political advertising doesn’t fit that experience. Last year, however, the Washington Post noted that partisan influencers were “evading TikTok’s political ad ban” and “flying under the radar on the social network, exposing a critical blind spot in the company’s rules.” A report from the Mozilla Foundation described more than a dozen examples of influencers on the platform with financial ties to political organizations who posted without disclosing that their messages were sponsored. TikTok has said it plans to crack down on that sort of thing, along with doing more fact-checking.


The New York Times recently reported that TikTok’s election-misinformation problems aren’t limited to the US. “In Germany, TikTok accounts impersonated prominent political figures during the country’s last national election,” Tiffany Hsu wrote for the Times. “In Colombia, misleading TikTok posts falsely attributed a quotation from one candidate to a cartoon villain, [and] in the Philippines, TikTok videos amplified sugarcoated myths about the country’s former dictator. Now, similar problems have arrived in the United States.” The Times said TikTok is “shaping up to be a primary incubator of baseless and misleading information, in many ways as problematic as Facebook and Twitter,” because “the same qualities that allow TikTok to fuel viral dance fads…can also make inaccurate claims difficult to contain.”

Twitter, meanwhile, banned political advertising of any kind in 2019. The company says on its site that it prohibits the promotion of political content “based on our belief that political message reach should be earned, not bought.” The plan for the upcoming elections, Twitter wrote on its company blog last week, is to label misinformation and then show users a prompt when they attempt to like or share those tweets. Unfortunately for Twitter, some research shows that its labels do very little to stop users from sharing the tweets in question—at least where Trump is concerned—and in some cases may even have made the information spread faster than it would have otherwise. In cases where there is potential for harm associated with a false claim, however, the company says such tweets “may not be liked or shared to prevent the spread of the misleading information.”

Here’s more on the platforms:

  • Engineering, pt. 1: Google announced its own plan to handle election misinformation, which, not surprisingly, involves algorithms. “By using our latest AI model, Multitask Unified Model (MUM), our systems can now understand the notion of consensus, which is when multiple high-quality sources on the web all agree on the same fact,” Pandu Nayak, vice president of search, wrote on the company’s blog. He went on to say that Google is also working on filling what researchers call “data voids” when there isn’t enough reliable information about a breaking news topic. The company plans to expand its use of content advisories in situations when a topic is evolving rapidly.
  • Engineering, pt. 2: When asked if he thought Trump was more or less of a risk to public safety now than when his account was banned, Clegg told Politico: “Look, I work for an engineering company. We’re an engineering company. We’re not going to start providing a running commentary on the politics of the United States.” Of Trump’s ban, he said the company “will look at the situation as best as we can understand it” but that “getting Silicon Valley companies to provide a running commentary on political developments in the meantime is not really going to…help illuminate that decision when we need to make it.”
  • Everybody hurts: Niam Yaraghi, a fellow at the Brookings Institution, argued that Twitter’s ban on political ads “hurts our democracy.” It is difficult to “untangle electioneering activities from issue-based advocacy. Healthcare, education, business, entertainment, and religion are all intertwined with politics,” Yaraghi wrote. The inherent difficulty in defining electoral advocacy and separating it from issue advocacy “makes it almost impossible to implement such a ban effectively,” he wrote, and even if social media companies could successfully define these terms, “the benefits of such a policy are unclear.”


Other notable stories:

  • After traveling home to Saudi Arabia, Salma al-Shehab, a student at Leeds University in the United Kingdom, was sentenced to thirty-four years in prison for having a Twitter account and for following and retweeting dissidents and activists, The Guardian reported. The case “marks the latest example of how the crown prince Mohammed bin Salman has targeted Twitter users in his campaign of repression,” the newspaper wrote. Shehab, thirty-four, a mother of two young children, was initially sentenced to three years in prison for allegedly using Twitter to “cause public unrest and destabilize civil and national security,” but the court handed down the longer sentence because she also allegedly “assisted those who seek to cause public unrest and destabilize civil and national security.”
  • The Financial Times reported that young adults in the UK spend more time on TikTok than watching broadcast television, according to a new report from Ofcom, the British media regulator. “In its annual survey of consumption trends, the media regulator found that those aged 16 to 24 spent an average of 53 minutes a day viewing traditional broadcast TV, just a third of the level a decade ago,” the FT wrote. “By contrast, people over the age of 65 spent seven times as long in front of channels such as BBC One or ITV, viewing almost six hours’ worth of broadcast TV a day—a figure that has risen since 2011.”
  • Davey Alba and Jack Gillum wrote for Bloomberg that Google Maps routinely misleads people looking for abortion providers. “When users type the words ‘abortion clinic’ into the Maps search bar, crisis pregnancy centers account for about a quarter of the top 10 search results on average across all 50 US states, plus Washington D.C.,” Alba and Gillum reported, based on data Bloomberg collected in July. “In 13 states, including Arkansas, South Carolina and Idaho where the procedure is newly limited, five or more of the top 10 results were for CPCs, not abortion clinics.”
  • Google has agreed to pay $60 million in penalties as a result of a battle with Australia’s competition regulator over allegations that Google misled users on how it collected their personal location data, The Guardian reported. “In April last year, the federal court found Google breached consumer laws by misleading some local users into thinking the company was not collecting personal data about their location via mobile devices with Android operating systems,” the paper reported.
  • Penn Entertainment, a casino operator, is acquiring the remaining shares of Barstool Sports that it doesn’t already own, giving it control of the sports-focused social media service, Bloomberg reported. “In a filing Wednesday, Penn said it exercised call rights and would complete the purchase of the remaining Barstool shares by February 2023,” the news service wrote. In 2020, Penn agreed to buy a 36 percent stake in Barstool for $161.2 million; under the terms of the latest deal, which is detailed in Penn’s second-quarter results, the company is to buy the rest of Barstool for $387 million.
  • Nieman Reports writes about four independent digital journalism outlets that represent “the vanguard of next-generation Turkish journalism.” They include Kapsül, a newsletter that started in early 2020 and now has 54,000 subscribers; Medyascope, which was founded in 2015 by Ruşen Çakır, a journalist who worked for some of Turkey’s biggest outlets; Sözcü, which has one of the highest circulations in the country, according to Reuters; and Podfresh, which hosts more than 350 independent podcasts, accounting for almost 20 percent of the shows produced regularly in Turkey.
  • Swizz Beatz and Timbaland have sued Triller, a video-sharing app, alleging the platform owes them more than $28 million after acquiring Verzuz, a livestreaming music series started by the two producers, Taylor Lorenz reported for the Washington Post. “Triller acquired Verzuz, a webcast series pitting musical acts against one another, in January 2021 for an undisclosed sum,” Lorenz wrote. The lawsuit alleges that Triller began missing payments in January of this year. Lorenz previously reported on allegations of “erratic” payments from Triller to Black content creators.
  • In media-jobs news, the Financial Times announced that it has made three new appointments to expand its financial coverage in the US. Jennifer Hughes, who is currently Asia Finance and Markets editor at Reuters’s Breakingviews in Hong Kong, becomes the FT’s new US markets editor; Eric Platt, currently the FT’s US markets editor based in New York, is the new senior corporate finance correspondent; and Tabby Kinder, currently the paper’s Asia financial correspondent, based in Hong Kong, has been appointed West Coast financial editor.

ICYMI: A tale of two bleak press-freedom anniversaries


Mathew Ingram was CJR’s longtime chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.