So-called “fake news” has become a hot-button topic of late, since Donald Trump and his followers have made it a mantra to describe any story they disagree with. After initially dismissing the problem, Facebook has promised to crack down on disinformation in a number of ways, and so has Google.
Experts, however, say what they call “computational propaganda” doesn’t just piggyback on social platforms; it is baked into the DNA and the business model of companies like Facebook, Google, and Twitter. And it’s going to take more than an algorithm tweak to get rid of it.
Dipayan Ghosh is a computer scientist who, after providing technical advice to the Obama administration, wound up working at Facebook on its privacy and public policy team. In 2016, he says, he and others started to notice a deluge of fake news articles and other disinformation, a problem that appeared to be driven by the News Feed algorithm. When Donald Trump was elected, Ghosh says he experienced a crisis of conscience, because he believed politically motivated misinformation had helped Trump win.
“I was sitting on the floor at the Javits Center watching and I was shaken to the core,” Ghosh says. “It was just such a shocker. I couldn’t understand it, given [Clinton’s] rise in the popular vote, and I thought there might be something else going on, a proactive campaign going on under the table that was manifesting itself in the election.”
Facebook later admitted before Congress that Russian trolls had promoted fake news and taken advantage of the platform to reach more than 125 million people. Special Counsel Robert Mueller has since indicted more than a dozen Russian individuals and several corporations for their role in those events, and Facebook figures prominently in the indictment.
As a result of his election-night disillusionment, Ghosh left Facebook and started working with the New America Foundation and Harvard’s Shorenstein Center on Media, Politics, and Public Policy, researching the impact of digital propaganda distributed by social platforms. In January, he published a report called “Digital Deceit: The Technologies Behind Precision Propaganda on the Internet” with Ben Scott, a former innovation adviser at the US State Department.
While most of the recent attention paid to fake news has focused on content from the Internet Research Agency, a Russian “troll factory,” the New America report notes that such content is just the tip of a monstrous digital iceberg. “These platform companies are at the center of a vast ecosystem of services that enable highly targeted political communications that reach millions of people with customized messages that are invisible to the broader public,” Ghosh and Scott write in the report.
In effect, they say, Russian trolls and others take advantage of how social platforms and ad networks are constructed. “Disinformation campaigns are functionally little different from any other advertising campaign, and the leading internet platforms are equipped with world class technology to help advertisers reach and influence audiences,” the report states.
What this means is that platforms like Facebook and Google, and the trolls running disinformation or propaganda campaigns, effectively share the same underlying goal.
“That fundamental goal is to get the user to stay as long as possible,” Ghosh said in an interview. “Their motivations are different: for platforms, it is to maximize ad space, to collect more information about the individual, and to rake in more dollars; and for the disinformation operator, the motive is the political persuasion of the individual to make a certain decision. But until we change that alignment, we are not going to solve the problem of disinformation on these platforms.”
After Mueller released his indictments, sociologist Zeynep Tufekci noted on Twitter that the indictment “shows [Russia] used social media just like any other advertiser/influencer. They used the platforms as they were designed to be used.”
Indictment shows RU used social media just like any other advertiser/"influencer". They used the platforms as they were designed to be used to make money. I watched this up to the election. Such deliberate, false incitement was not just done by foreigners. https://t.co/2b1v1tMqw9 pic.twitter.com/X9kEijqu92
— zeynep tufekci (@zeynep) February 16, 2018
Facebook and Google, says Ghosh, “have not necessarily encouraged the environment of disinformation but have enabled it through the mass collection of individual data, with as much granularity as possible within legal limits,” something Tufekci has described as “surveillance capitalism.” This kind of structure allows advertisers to target users based on a wide range of interests, but it also allows political parties and much more nefarious groups to do the same, and to fine-tune their propaganda to have the largest effect possible.
“It’s a very hard problem: how to distinguish between disinformation and authentic political speech,” Ghosh says. “Those that are clearly foreign agents can be blocked, but with domestic operators, there’s an obvious tension between preventing harm and impacting free speech, and I don’t think there’s a clear solution yet. But we are definitely going to see more domestic actors in 2018, and that is frightening.”
Although Facebook has gotten the most attention for the way it was manipulated, it is not alone. Guillaume Chaslot is a former Google engineer who helped develop the algorithms that decide which videos to recommend to YouTube users after they watch a video, and he says the platform has a very real issue with promoting fake news and disinformation.
Chaslot says that while studying the functionality of the recommendation algorithm, he noticed that, in many cases, the videos the software promoted were of questionable quality: factually inaccurate reports from dodgy websites. He tried to come up with ways to improve the quality of the recommendations, but says his superiors weren’t interested. All they wanted, he says, was for the team to come up with ways of getting people to spend more time on the platform.
“Total watch time was what we went for; there was very little effort put into quality,” Chaslot says. “All the things I proposed about ways to recommend quality were rejected.” Chaslot, who worked at Google for three years, says he spent so much of what the company calls “20-percent time” working on such projects that he was eventually let go for underperformance.
In a blog post in early 2017 entitled “How YouTube’s A.I. boosts alternative facts,” Chaslot described an experiment he conducted, in which he viewed YouTube videos and then catalogued the automated recommendations (he also made the software he wrote to do so publicly available). In a number of cases, the most-recommended videos involved conspiracy theories about the Earth being flat, the Pope being an agent of evil, Michelle Obama being a man, etc.
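Chaslot’s crawler itself isn’t reproduced here, but the cataloguing approach he describes is simple to sketch. The snippet below is a minimal illustration rather than his actual software: it walks a recommendation graph outward from a seed video and tallies how often each video is suggested. The fetch_recommendations helper and the toy graph are hypothetical stand-ins for whatever a real crawler would scrape from YouTube’s “Up next” panel.

```python
from collections import Counter, deque

# Toy stand-in for a real scraper of YouTube's "Up next" panel. In a
# real crawl this table would be replaced by fetching the watch page
# (or an API response) for each video and reading off its suggestions.
FAKE_RECOMMENDATIONS = {
    "seed": ["news_a", "conspiracy_x", "news_b"],
    "news_a": ["news_b", "conspiracy_x"],
    "news_b": ["conspiracy_x", "news_a"],
    "conspiracy_x": ["conspiracy_y", "conspiracy_z"],
    "conspiracy_y": ["conspiracy_x", "conspiracy_z"],
    "conspiracy_z": ["conspiracy_y", "conspiracy_x"],
}

def fetch_recommendations(video_id):
    """Hypothetical fetcher; swap in real scraping to reproduce the idea."""
    return FAKE_RECOMMENDATIONS.get(video_id, [])

def catalogue(seed, max_depth=3, per_video=2):
    """Breadth-first walk from a seed video, counting how often each
    video appears among the top `per_video` recommendations."""
    counts = Counter()
    seen = {seed}
    queue = deque([(seed, 0)])
    while queue:
        video, depth = queue.popleft()
        if depth >= max_depth:
            continue
        for rec in fetch_recommendations(video)[:per_video]:
            counts[rec] += 1
            if rec not in seen:
                seen.add(rec)
                queue.append((rec, depth + 1))
    return counts

if __name__ == "__main__":
    for video, n in catalogue("seed").most_common():
        print(f"{video}: recommended {n} times")
```

Run over many seed videos, a tally like this is what makes a pattern, such as conspiracy content dominating the most-recommended list, visible in the data.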
“I came to the conclusion that the powerful algorithm I helped build plays an active role in the propagation of false information,” Chaslot wrote. And it does so because YouTube wants to keep people using the service, and salacious or bizarre hoaxes and conspiracy theories keep people engaged.
In addition, as Chaslot describes, “once a conspiracy video is favored by the A.I., it gives an incentive to content creators to upload additional videos corroborating the conspiracy. In turn, those videos increase the retention statistics of the conspiracy. Next, the conspiracy gets recommended further. Eventually, the large amount of videos favoring a conspiracy makes it appear more credible.” In other words, the problem snowballs.
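That snowball dynamic can be illustrated with a toy simulation; this is not Chaslot’s model and not YouTube’s system, just a sketch of the incentive he describes. A recommender that ranks content by accumulated engagement, fed by content that retains viewers only slightly better, ends up devoting most of its recommendations to that content.

```python
import random

random.seed(0)

# Toy model of the feedback loop: the recommender favors whatever has
# accumulated the most engagement, and content that retains viewers a
# little better is therefore surfaced more, which earns it still more
# engagement. The retention numbers are invented for illustration.
RETENTION = {
    "straight_news": 0.50,  # chance a viewer sticks with the video
    "conspiracy": 0.65,     # slightly "stickier" content
}

engagement = {name: 1.0 for name in RETENTION}  # small seed values
recommended = {name: 0 for name in RETENTION}

for _ in range(10_000):
    # Recommend in proportion to accumulated engagement.
    pick = random.choices(list(engagement), weights=list(engagement.values()))[0]
    recommended[pick] += 1
    # If the viewer sticks around, that success feeds back into the ranking.
    if random.random() < RETENTION[pick]:
        engagement[pick] += 1

total = sum(recommended.values())
for name, count in recommended.items():
    print(f"{name}: {count / total:.0%} of recommendations")
```

A small edge in retention, compounded through the ranking, is enough to crowd out everything else; no one has to intend the outcome for it to emerge.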
It’s not just fake news or hoaxes that are involved in these organized propaganda campaigns, Tow Center researcher Jonathan Albright notes. Earlier this month, he looked at more than 200,000 tweets connected to Russian troll accounts (tweets that were provided to NBC by Twitter insiders before they were deleted) and analyzed them based on the content they linked to. Many distributed real news stories from traditional sources, but in a way that was designed to promote a specific pro-Trump agenda.
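Albright’s basic method, sorting a large batch of troll tweets by what they link to, can be approximated with standard-library Python. The sketch below assumes a CSV with the tweet body in a column named text, which is an assumption of convenience; the NBC dataset uses its own field names, and Twitter’s t.co short links would need to be expanded before the destination domains could be counted.

```python
import csv
import re
import sys
from collections import Counter
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")

def domain_counts(csv_path, text_column="text"):
    """Count the domains linked to by a set of tweets.

    Assumes one tweet per row with the tweet body in `text_column`.
    In practice, Twitter's t.co short links would have to be resolved
    to their final destinations before counting.
    """
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for url in URL_RE.findall(row.get(text_column, "")):
                host = urlparse(url).netloc.lower()
                if host.startswith("www."):
                    host = host[4:]
                if host:
                    counts[host] += 1
    return counts

if __name__ == "__main__":
    for host, n in domain_counts(sys.argv[1]).most_common(20):
        print(f"{n:6d}  {host}")
```

Counting domains rather than individual stories is what surfaces the pattern Albright describes: mainstream outlets can appear heavily in the tally even when the framing around the links is partisan.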
When The Guardian wrote about Chaslot’s research, he says, representatives from Google and YouTube criticized his methodology and tried to convince the news outlet not to run the story, promising to publish a blog post refuting his claims. No such post was ever published. Google said it “strongly disagreed” with the research, but after Senator Mark Warner raised concerns about YouTube promoting what he called “outrageous, salacious, and often fraudulent content,” Google thanked The Guardian for doing the story.
After The Wall Street Journal reproduced some of Chaslot’s findings, the head of YouTube’s recommendations team responded: “We recognize that this is our responsibility, and we have more to do.” Google has come under fire for similar problems in the past, including an incident in which a fake news story was one of the top recommended links related to the mass shooting in Las Vegas. Google says it is trying to surface “more authoritative” content when people look for hoaxes or conspiracy theories.
“They have made some changes to the search algorithm so it recommends more high-quality content,” says Chaslot, “but if you look at what is recommended, it is still very divisive politically.” In the US, this might not be as much of a problem due to the country's strong democracy and a culture of respect for the First Amendment, he says, “but in some countries, where you don’t have that culture, it could be a much worse problem. There is the same issue in France, where recommendations quickly get into conspiracy theories.”
Platforms like YouTube and Facebook “seem very democratic, because anyone can click the like button and have a vote on the content,” Chaslot says. “But if you know how the system works, if you’re a Russian troll or someone like that, you can figure out how to have a lot more impact, because you know how to organize your content, when to publish, and a lot of other things that increase the probability of your video being seen.”
Google and Facebook often say they don’t want to get into the business of deciding what is true and what isn’t, but Chaslot describes this argument as “total bullshit.” Both platforms could easily create the kinds of tools or processes that are used on a site like Wikipedia, he says, where a group of moderators decide what information to keep.
“There are lots of tools they could try, but they don’t really have any interest in doing it,” Chaslot says. “They have the money to do it, and there are people working there who want to do it, but they don’t bother to try because there is no incentive to do so.”
Lisa-Maria Neudert is part of a team of researchers that works on the Oxford Internet Institute’s computational propaganda project. In a recent report, the Institute looked at how and where fake news stories and related content were shared on Twitter and Facebook, and found that users who shared such posts tended to be Trump supporters or from the conservative end of the political spectrum.
Propaganda isn’t new, says Neudert. What is new is the ease with which it can be created and distributed, and the speed with which such campaigns can be generated, along with the fact that they can be targeted to specific individuals or groups, thanks to Facebook’s and Google’s ad technologies.
“This ability to have mass distribution at extremely low cost enables propaganda at an entirely different scale, one we’ve never seen before,” she says. “And it uses all of the information that we as users are consciously and unconsciously providing, to produce individualized propaganda.”
In a sense, just as Facebook and Google and Twitter have democratized social communication and media, they have also democratized propaganda. “Social media has shifted the capability of designing propaganda to regular users,” says Neudert. “So it’s no longer something that is created by big companies or governments; now the everyday lay person can make a propaganda campaign or a disinformation site or create a bot army.”
For example, critics say Twitter has made it easy for groups or even individuals to run what some call “astroturfing” campaigns, which are designed to create the impression of widespread grassroots support for certain views; the service allows users to create and distribute sponsored posts on behalf of entirely fictitious organizations.
A nonprofit group called The Alliance for Securing Democracy, which is funded by the German Marshall Fund, runs a site called Hamilton68 that tracks the activity of Russian troll accounts. The site has shown that such accounts exhibit organized behavior around specific news hashtags, including those that were used prior to the release of the Nunes memo, as well as hashtags used following the Parkland shootings.
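Hamilton68’s pipeline is not public in detail, but the general idea of spotting a coordinated push can be sketched: given hashtags harvested from a fixed watchlist of accounts, flag any hashtag whose daily volume jumps far above its trailing average. The data format, thresholds, and helper below are hypothetical.

```python
from collections import Counter, defaultdict
from datetime import date

def flag_spikes(observations, min_volume=50, spike_ratio=5.0):
    """Flag hashtags whose daily volume far exceeds the trailing week.

    `observations` is an iterable of (account, day, hashtag) triples
    harvested from a fixed watchlist of suspect accounts; the format
    and thresholds are illustrative, not Hamilton68's own.
    """
    daily = defaultdict(Counter)  # day -> hashtag -> count
    for _account, day, hashtag in observations:
        daily[day][hashtag.lower()] += 1

    days = sorted(daily)
    flagged = []
    for i, day in enumerate(days[1:], start=1):
        baseline_days = days[max(0, i - 7):i]  # trailing week
        for tag, count in daily[day].items():
            baseline = sum(daily[d][tag] for d in baseline_days) / len(baseline_days)
            if count >= min_volume and count > spike_ratio * max(baseline, 1.0):
                flagged.append((day, tag, count, round(baseline, 1)))
    return flagged

if __name__ == "__main__":
    sample = [("acct1", date(2018, 2, 1), "#news")] * 60
    sample += [("acct1", date(2018, 2, 2), "#news")] * 55
    sample += [("acct2", date(2018, 2, 2), "#releasethememo")] * 80
    for row in flag_spikes(sample):
        print(row)
```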
The social platforms have been slow to realize just how integral a role they play in this new form of disinformation, Neudert argues.
“I think [Facebook] has had a rude awakening, that the way they structure their platforms has contributed to this problem, but it has been a slow awakening,” she says. “It was only after months and months of pressure that we saw some of the data being shared, and they still haven’t shared even a small part of the massive amounts of data they have. If they shared more, I think maybe we could come up with better solutions.”
Some of Facebook’s proposed News Feed changes could actually make the disinformation problem worse, Neudert adds. “The content that is the most misleading or conspiratorial, that’s what’s generating the most discussion and the most engagement, and that’s what the algorithm is designed to respond to,” she says. “So it promotes these kinds of issues even more by exploiting the way human attention works. The environment maximizes for outrage. They say they want more meaningful conversation, but it’s not clear how they are going to define that.”