For many people, YouTube is a place to kill time watching sailing videos, or to pick up tips on how to train a dog or change a car headlight. But the Google-owned video service also has a darker side, according to a number of news articles, including one from the New York Times last year. Some users, these stories say, start out watching innocuous videos but get pushed toward radical, inflammatory, or even outright fake content. Those pushes come from YouTube’s recommendation algorithm, which some argue has turned the service into a “radicalization engine.” Do the network and the software that powers its video suggestions actually turn users into consumers of far-right conspiracy theories and other radical content, and, if so, what should be done about it?
Those are some of the questions we at CJR wanted to address, so we used our Galley discussion platform to convene a virtual panel of experts in what some call “automated propaganda,” including Dipayan Ghosh of Harvard’s Shorenstein Center; New York Times columnist Kevin Roose, who wrote last year’s Times piece on YouTube radicalization; Brazilian researcher Virgilio Almeida; Aviv Ovadya of the Thoughtful Technology Project; former YouTube programmer Guillaume Chaslot; and Harvard misinformation researcher Joan Donovan. One trigger for this discussion was a recently published research paper that not only said YouTube isn’t a radicalization engine but argued that its software actually does the opposite, suggesting videos that push users toward mainstream content. As part of our virtual panel, we spoke to a co-author of that paper, Mark Ledwich.
In Twitter posts and on Medium, Ledwich took direct aim at the New York Times and Roose for perpetuating what he called “the myth of YouTube algorithmic radicalization.” The persistence of that theory, he said, showed that “old media titans, presenting themselves as non-partisan and authoritative, are in fact trapped in echo chambers of their own creation, and are no more incentivized to report the truth than YouTube grifters.” One of the main criticisms of the paper, which came from others in the field such as Arvind Narayanan of Princeton, was that the research was based on anonymized data, meaning none of the recommendations were personalized, the way they were in the New York Times piece (which used personal account data provided by the subject of the story). In his Galley interview, Ledwich pointed out that much of the research others have used to support the radicalization theory is also based on anonymized data, in part because personalized data is so difficult to come by.
ICYMI: Does the media capital of the world have news deserts?
Although some argued that Ledwich’s study may not have found radicalization because of changes that have been made to the YouTube algorithm (as a result of critical coverage in the Times and elsewhere), another recent study by Virgilio Almeida and his team found significant evidence of radicalization. That research looked at more than 72 million comments across hundreds of channels, and found that “users consistently migrate from milder to more extreme content,” Almeida said in his Galley interview. Shorenstein Center fellow Dipayan Ghosh, who directs the Platform Accountability Project at Harvard’s Kennedy School and was a technology adviser in the Obama White House, told CJR that “given the ways YouTube appears to be implicating the democratic process, we need to renegotiate the terms of internet regulation so that we can redistribute power from corporations to citizens.”
YouTube has said that its efforts to reduce the radicalization effects of the recommendation algorithm have resulted in users interacting with 70 percent less “borderline content,” the service’s term for material that is problematic but doesn’t overtly break its rules (although it didn’t give exact numbers or define what “borderline content” consists of). As Ghosh, Almeida, and other researchers have pointed out, the biggest obstacle to determining whether YouTube pushes users toward more radical content is a lack of useful data. As with similar concerns about Facebook’s content, the only place to get the data required to answer such questions is inside the company itself, and despite promises to the contrary, very little of that data gets shared with outsiders, even those who are trying to help us understand how these new services are affecting us.
Here’s more on YouTube, radicalization, and “automated propaganda”:
- With or without: Stanford PhD student Becca Lewis, an affiliate at Data & Society, argues that YouTube could remove its recommendation algorithm entirely and still be one of the largest sources of far-right propaganda and radicalization online. “The actual dynamics of propaganda on the platform are messier and more complicated than a single headline or technological feature can convey,” she says, and they show “how the problems are baked deeply into YouTube’s entire platform and business model,” which are based on complex human behavior that revolves around celebrity culture and community.
- Radicalized Brazil: Virgilio Almeida’s research into the radicalization effects of YouTube’s recommendation algorithm formed part of the background for a New York Times piece on some of the cultural changes that led up to the election of Brazilian president Jair Bolsonaro, as well as research done by Harvard’s Berkman Klein Center. YouTube challenged the researchers’ methodology, and maintained that its internal data contradicted their findings, the Times said, “but the company declined the Times’ requests for that data, as well as requests for certain statistics that would reveal whether or not the researchers’ findings were accurate.”
- Algorithmic propaganda: Guillaume Chaslot was a programmer with YouTube who worked on the recommendation algorithm, and told CJR that he raised concerns about radicalization and disinformation at the time, but was told that the primary focus was to increase engagement time on the platform. “Total watch time was what we went for; there was very little effort put into quality,” Chaslot said. “All the things I proposed about ways to recommend quality were rejected.” He now runs a project called AlgoTransparency.org, which tracks YouTube’s recommendations, and he is also an advisor at the Center for Humane Technology.
Other notable stories:
- Most of media Twitter was obsessed on Wednesday with a mystery involving Facebook and sponsored content. An article appeared on the Teen Vogue site in the morning entitled “How Facebook Is Helping Ensure the Integrity of the 2020 Election,” which consisted of interviews with Facebook executives and a largely uncritical assessment of how the company was working to stop people from mucking around with the election. After comments about how uncritical it was, an editor’s note was appended that described it as sponsored content — but then just as quickly, the note disappeared. Then the entire article disappeared. Facebook at first denied it was sponsored content, then later admitted that it was. Teen Vogue apologized.
- Leo Schwartz writes for CJR about Jesús Cantú, a former journalist who acts as information chief for Mexico’s anti-press president, Andrés Manuel López Obrador. “Mexico is the most dangerous country in the world to be a journalist, with more than 150 journalists killed since 2000 and twelve since AMLO took office,” he writes. “Nevertheless, Cantú will argue, AMLO is an improvement over Enrique Peña Nieto, his predecessor, who held only two press conferences during his tenure and ran a surveillance program targeting reporters.”
- Twitter suspended an account impersonating a New York Post reporter after it sent out a series of fake stories pushing pro-Iranian regime propaganda and attacking adversaries of the Islamic Republic, according to a report by The Daily Beast. The account was linked, through retweets and shared articles, to another account impersonating an Israeli reporter, which was also taken down after sharing pro-Iranian regime propaganda.
- Kuwait’s state news agency, KUNA, said that its Twitter account was hacked and used to spread false information about US troops withdrawing from the country, according to a report from Reuters. The now-deleted report said that Kuwait’s defense minister had received a letter from the US saying American troops would leave a Kuwaiti camp within three days. The news agency said in follow-up tweets that it “categorically denies” the report that was published on its social media account and that Kuwait’s Ministry of Information is investigating the issue.
- The US Army issued a warning against “fraudulent” text messages that it says have been sent to various users claiming that the recipients have been selected for a military draft, according to a report from Business Insider. A spokesperson from US Army Recruiting Command (USAREC), the organization responsible for attracting prospective soldiers, told Insider the text messages were being sent “across the country from different brigades” this week.
- Sam Thielman writes for CJR about Peter Hegseth, the Fox News personality, Iraq War veteran, and one-time guard at the Guantanamo Bay detention camp. Hegseth often receives compliments from President Trump for his commentary on Fox & Friends, and Trump once considered making him the head of the Department of Veterans Affairs. Hegseth used his platform to lobby Trump to pardon three men who were accused or convicted of murder while deployed to Iraq or Afghanistan, Thielman writes, including former Navy SEAL Eddie Gallagher.
- Nieman Lab’s Josh Benton writes about a bill that both Republicans and Democrats are supporting, which would give media companies a get-out-of-collusion-free card so that they could theoretically negotiate with the big digital platforms. Unfortunately, Benton argues, news content “isn’t nearly as important to Google and Facebook as publishers think it is,” and therefore even if an antitrust exemption for news “is what ends up bringing all of Congress together for a few hours of Kumbaya, don’t expect it to make much of a difference.”
- Barry Diller’s media company IAC has sold its CollegeHumor-branded sites, according to a report in Variety, and more than 100 staffers will lose their jobs. The sites have been sold to CollegeHumor’s former chief creative officer, Sam Reich, who said on Twitter that the sites would need to “take on bold new creative directions in order to survive [and] you may not agree with all of them.” Reich asked users and fans to help support the site, which he said was losing money.
- Spotify is going to start using the data it has about the interests and locations of its users to insert ads into its music streams, according to a report from The Verge. With technology it’s calling Streaming Ad Insertion, Spotify says it will begin inserting ads into its shows in real time, based on what it knows about its users, like where they’re located, what type of device they use, and their age. Such systems are relatively well established on the web, but are still not in widespread use in the podcasting industry.
ICYMI: Sleepwalking into 2020