Join us
Politics

For Facebook, the political reckoning has begun

October 30, 2017


The US presidential election, now nearly one year in the past, marked a turning point for US media, and the beginning of what could be a dark new period for Facebook, which finds itself ensnared in a political morass that could tie its business in knots for years to come.

Following reports that Russian government operatives used Facebook ads in an attempt to subvert the outcome of the election, the social network will appear before the Senate intelligence committee on November 1 (Google and Twitter are also part of the Senate hearing).


Already, some members of Congress are arguing that Facebook and other digital platforms need to be more heavily regulated when it comes to political advertising. A proposed Senate bill would require internet companies like Facebook to provide information to the Federal Election Commission about who is buying political ads.

Senators Amy Klobuchar of Minnesota and Mark Warner of Virginia said in a statement the bill would “prevent foreign actors from influencing our elections by ensuring that political ads sold online are covered by the same rules as ads on TV, radio, and satellite.”

Among the questions Congress will have to confront: How much of this is Facebook’s fault? Did it knowingly permit Russian agents to influence American voters, or was it just an unfortunate outcome of how the network functions?


“Clearly Facebook doesn’t want to become the arbiter of what’s true and what’s not true,” said Adam Schiff, the ranking Democrat on the House Intelligence Committee. “But they do have a civil responsibility to do the best they can to inform their users of when they’re being manipulated by a foreign actor.”

The challenge for Facebook is that it has never been much for transparency, especially the kind recommended by some critics of its handling of the Russian ads. It has turned over the content of those ads to Congress, but some would like the company to go further: release the information publicly, and notify anyone who was targeted by the ads exactly how and when they were targeted.

If that becomes a reality, Facebook could find itself forced to be a lot more transparent about the workings of its ad machinery than it has been, and it could be required to be a lot more hands on in monitoring what it runs and where.

Facebook is caught in a classic Catch-22: It could argue that Russian ad buying didn’t influence voters or users, but it already tells advertisers and political parties that the exact opposite is true, and that using Facebook can influence behavior.

Even just disclosing who buys its ads would be a change of direction for Facebook. Broadcasters have had to keep records of who buys political advertising on their networks since 1938, and cable companies have also had to maintain records for some time. In 2012, the Federal Communications Commission started requiring that TV stations provide such information online, and it extended that requirement to radio and satellite-TV providers last year.

The last time the Federal Election Commission ruled on Internet advertising, however, was more than a decade ago. The ruling required that individuals and political committees who bought online political ads had to disclose their spending, but it didn’t say anything about disclosure by the websites or platforms running the ads.

Federal law in the US makes it a crime for foreign entities to spend money in ways that might influence the outcome of an election, including advertising. But the use of digital platforms like Facebook and Google for such purposes remains largely unregulated.

“The principle at stake is that you can’t have an opaque advertising machine built on surveillance capitalism that enables dark advertising and dark money to flow back and forth, and can be leveraged by foreign or domestic actors to manipulate public understanding or outright spread misinformation, without accountability,” says Alex Howard of the Sunlight Foundation, which has been pushing for greater accountability from internet companies.

“They built a system where they’re making billions and billions of dollars at internet scale, but they didn’t build in systems for transparency and accountability from the beginning,” says Howard. “They assumed great power without taking great responsibility.”

This controversy and the political machinations that stem from it threaten to complicate Facebook’s future. The Russian ads in question are not the result of some kind of misunderstanding by regulators or critics of how the social network operates, nor are they the result of a software bug that produced unintended consequences. Instead, the ads are an example of the company’s social machinery working exactly as it was meant to.


That means Facebook may have to change the way it handles advertising on the network, or at the very least will have to pay much closer attention to who is buying ads and why, as well as disclosing some or all of that information to regulators and the public.

As with Google, the fact that Facebook’s ad engine is almost completely automated isn’t just a nice feature, it’s a crucial part of how the machine functions. Without automation there would be no way for it to achieve the kind of scale necessary to reach more than two billion people a day.

The downside of this kind of automation extends beyond potential Russian involvement. In several cases, Facebook has accepted advertising that was directed at offensive categories such as “Jew haters.” The social network has apologized, and said it will add more human oversight to prevent such occurrences in the future.

In a statement earlier this month, the company said an estimated 10 million people in the US saw the ads in question, and that about half the impressions or ad views came before the election. (Facebook also pointed out that about 25 percent of the ads “were never shown to anyone” because they didn’t meet the platform’s test for relevance.)

Jonathan Albright of Columbia’s Tow Center, however, found that just six of the 470 fake account pages reached as many as 340 million people, drawing more than 19 million likes, shares, and comments. (In a classic Facebook move, the company has since removed the data Albright used, calling it a “bug” that it was publicly available in the first place.)

The company said it reviews millions of ads each week, and about eight million people report ads for one reason or another every day. But it said it has increased the number of employees reviewing ads, and that it is committed to “making advertising more transparent” and “increasing requirements for authenticity.” It’s not clear how the company will determine authenticity.


Facebook has also said it will add disclaimers to any future political ads. But it’s not clear whether this will apply only to traditional banner ads, or whether it will cover promoted posts and other forms of social advertising, which make up a significant chunk of Facebook’s business.

The fact that almost anything on the social network can function as an ad, whether it’s a photo or a video or a simple text post, is a crucial element of Facebook’s DNA, and of its business model. Restricting any disclosure or targeting requirements to just traditional ads would mean ignoring a large part of the problem, but expanding the definition could mean requiring Facebook to factor in the political repercussions of literally every piece of content on the network.


Facebook has also confessed that even if it tightens up its controls, there will always be room for people to use the network for nefarious purposes or to spread disinformation.

“Even when we have taken all steps to control abuse, there will be political and social content that will appear on our platform that people will find objectionable,” VP of communications Elliot Schrage said in a recent Facebook post. “We permit these messages because we share the values of free speech: that when the right to speech is censored or restricted for any of us, it diminishes the rights to speech for all of us.”

An often-repeated criticism of Facebook is that the company is far too secretive about its operations, and fights every attempt to get it to release any information about how the algorithm functions or the outcome of its behavior.

The Senate committee’s interest has forced Facebook to break down some of these walls, but if the bill proposed by Klobuchar and Warner goes ahead, the social network could be forced to do much more, and these requirements could cover not just traditional ads but “promoted posts” and other forms of advertising. In the process, the workings of Facebook’s ad machine could be dragged out of the darkness and into the light. Its critics may cheer if that happens, but Facebook itself is unlikely to be quite as enthusiastic.


Google and Twitter also face scrutiny

While most of the attention around Russia-linked ads designed to influence the 2016 election has focused on Facebook, Google and Twitter are also under the microscope for running similar ads. That reinforces the point that this isn’t a problem specific to Facebook, but one connected to something much broader: how internet platforms behave, and the way advertising and media work now.

In early September, when Facebook was starting to draw attention for links to Russian election-meddling, Google said it hadn’t found any sign of similar untoward advertising campaigns on its platforms. “We’re always monitoring for abuse or violations of our policies and we’ve seen no evidence this type of ad campaign was run on our platforms,” the company said in a statement to Reuters.

A little over a month later, the company was telling a different story. Although it hasn’t confirmed the reports publicly, sources told The Washington Post, The New York Times, and Reuters that Google had come across signs of advertising buys that appeared to be part of a Russia-backed misinformation campaign involving the election, although the ads don’t appear to be from the Internet Research Agency, the “troll factory” behind the Facebook campaign.

The campaign on Google represented less than $100,000 worth of advertising. About $5,000 worth of search and display ads was bought by accounts believed to be connected to the Russian government, according to The New York Times, while a further $53,000 or so was bought by accounts with Russian internet addresses or paying in Russian currency. It’s not clear whether those were related to Russian government entities.


Google doesn’t offer advertisers the same kind of granular targeting that Facebook does, in which individuals can be selected to receive a specific message based on their political views and other information. The company also has a policy that prevents targeting of ads based on race and religion. Ironically, Google found the Russian-linked ad buying by using data from Twitter, a source told The Washington Post.

As with Facebook, the fact that Russian government entities and other agents were able to buy and distribute ads and other information on Google and Twitter is not a bug or a flaw in the system but an example of it working exactly as intended. All three companies have built more or less automated advertising networks that allow companies and individuals to buy ads with virtually zero human input.

Twitter has been a bit more forthcoming with information than Google, but its efforts haven’t been universally well-received. The company told members of the Senate intelligence committee in a closed-door hearing in September that it had found and shut down about 200 accounts associated with the Internet Research Agency, and it said Russian news site RT, which many believe is tied to the government, spent about $275,000 on Twitter ads in 2016.

One of the top-ranking Democrats on the Senate intelligence committee wasn’t impressed by Twitter’s efforts, however. Mark Warner said that the company’s presentation was “inadequate” and “deeply disappointing,” primarily because Twitter only searched its databases for information related to the accounts that Facebook had already identified.

Not only that, but some of the data that Twitter relied on has since been deleted as a result of the company’s privacy policies around retention of information, according to security analysts. That could complicate the Senate and House investigations into how these platforms were used by Russian agents to try and influence the election.

Twitter said last week that it has banned Russia Today and Sputnik from buying Twitter ads, since both news outlets have been linked to the Kremlin. But even that step doesn’t go far enough, some believe, because it doesn’t address the use of “bots” or automated accounts which try to influence users through regular Twitter behavior rather than advertising.

According to research from the Alliance for Securing Democracy, a public-policy group in Washington, more than 600 Twitter accounts, run by both human users and suspected bots or automated accounts, have been linked to what appear to be Russian attempts to influence voter behavior around the election. Other research has also shown signs of a “bot army” that was mobilized by foreign agents during the election.

In a blog post in June, a senior Twitter executive said that the company believes that the network’s “open and real-time nature is a powerful antidote to the spreading of all types of false information.” This is important, he said, because “we cannot distinguish whether every single Tweet from every person is truthful or not. We, as a company, should not be the arbiter of truth.” Facebook CEO Mark Zuckerberg has made similar comments when pressed about the company’s responsibility for stopping “fake news” and other forms of misinformation.



Mathew Ingram was CJR’s longtime chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.