Tow Center

The bots beat: How not to get punked by automation

April 3, 2018
 


Bots are everywhere. A report by The New York Times recently outed about 3.5 million of them on Twitter. But they’re also on Facebook and Reddit, and weaving their way into government processes. You might even be following some (or they’re following you).

Bots are not all bad. Some of them contribute worthwhile information, critique, or new modes of storytelling. For instance, as CJR covered last week, the Fort Collins Coloradoan produced the Elexi Facebook chatbot for the 2016 elections to deliver information about local candidates and races in a more conversational format. But bots can just as easily be set to bully, intimidate, harass, pollute, and push political agendas online.

If humanity is going to retake social media and push back against the tide of automated attention-manipulation, journalists need to get smarter themselves so as not to fall prey to these electric demons—and start covering bots, and their changing strategies, as a beat.

 

I can think of at least four ways that bots co-opt public media and attention. Bots can manipulate the credibility of people or issues, they can amplify and spread propaganda and junk news, they can dampen or suppress opposition and debate, and they can intimidate or deny access to authentic people who want to participate. Let’s look at each of these in turn.

Bots manipulate credibility by influencing social signals like the number of aggregated likes or shares a post or user receives. People see a large number of retweets on a post and read it as a genuine signal of authentic traction in the marketplace of ideas. Do not fall for this. Trends are basically over—they’re too easy to manipulate. This goes for any information online that feeds off of public signals, including things like search autocomplete or content recommendation lists. Journalists can no longer rely on information sources that merely reflect some form of online “popularity.”


Instead, consider how to report on which trends are being manipulated and in what direction. The Hamilton 68 dashboard tracks about 600 Twitter accounts linked to Russian influence operations. Hashtags and topics trending within that constrained network are surfaced and ranked, drawing attention to those that may be subject to influence campaigns. If journalists were to do something similar, however, they would need to be more open and transparent about which accounts are being monitored.
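As a rough illustration of that approach, here is a minimal sketch in Python that ranks hashtags by how many accounts on a fixed watchlist have used them. The watchlist, tweet records, and threshold are hypothetical stand-ins, not Hamilton 68’s actual pipeline.

```python
from collections import Counter

# Hypothetical input: recent tweets already collected for a fixed watchlist
# of monitored accounts. Each record holds the author and the hashtags used.
recent_tweets = [
    {"author": "account_a", "hashtags": ["election", "fraud"]},
    {"author": "account_b", "hashtags": ["Election"]},
    {"author": "account_c", "hashtags": ["weather"]},
]

def rank_hashtags(tweets, min_accounts=2):
    """Rank hashtags by how many distinct monitored accounts used them,
    surfacing topics the constrained network is pushing in concert."""
    accounts_per_tag = {}
    for tweet in tweets:
        for tag in tweet["hashtags"]:
            accounts_per_tag.setdefault(tag.lower(), set()).add(tweet["author"])
    counts = Counter({tag: len(authors) for tag, authors in accounts_per_tag.items()})
    return [(tag, n) for tag, n in counts.most_common() if n >= min_accounts]

print(rank_hashtags(recent_tweets))  # [('election', 2)]
```

Publishing the watchlist alongside any such ranking is what would make the monitoring transparent in the way the dashboard’s critics have asked for.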

Information junk can overwhelm attention, too, distracting from what’s important or reframing a conversation to steer it in some deliberate direction. One type of attack involves targeting specific users, sometimes journalists, with unsolicited messages to get them to pay attention to the manipulator’s agenda. Research shows this strategy does, in fact, make claims more contagious. If multiple “people” are “independently” sending you a link to something, assume you are a target. Double check before amplifying or using that information in further reporting. If you are suspicious, look at the network of the accounts sending the information to you—if they’re connected, they might be colluding, or be part of a network of bots.
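One rough way to run that network check, assuming you have already pulled the follow lists of the suspicious senders, is sketched below; the account names and follow sets are invented for illustration. A tightly connected group of supposedly independent senders is a warning sign.

```python
from itertools import combinations

# Hypothetical input: who each suspicious sender follows.
follows = {
    "sender_1": {"sender_2", "sender_3", "news_outlet"},
    "sender_2": {"sender_1", "sender_3"},
    "sender_3": {"sender_1", "sender_2", "random_user"},
}

def connection_density(follows):
    """Share of possible sender-to-sender follow links that actually exist.
    A value near 1.0 suggests the 'independent' senders are tightly connected."""
    senders = list(follows)
    possible = len(list(combinations(senders, 2))) * 2  # directed pairs
    actual = sum(1 for a in senders for b in senders
                 if a != b and b in follows[a])
    return actual / possible if possible else 0.0

density = connection_density(follows)
print(f"sender network density: {density:.2f}")  # ~1.0 here: likely colluding
```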


Bots can also be tools of opposition and suppression, dampening particular ideas or positions and limiting their spread by influencing attention patterns. One botnet in Syria was used to direct attention away from the Syrian civil war to other Syria-related topics. Bots can also jam trends or hashtags with “noise,” making it difficult for communities to coalesce, or they can pump up the volume on preferred hashtags to make them more likely to trend. BuzzFeed News found a Russian hacker who offered to make German election-related hashtags trend on Twitter for 1,000 euros. All this extra noise makes it more difficult to see what authentic people are talking about. Again, given the ease with which trends can be manipulated, they’re not to be trusted.

Bots and their cousins—cyborg trolls—can also be bullies. They intimidate, harass, or otherwise make it difficult to use a social platform. Last summer, two fake Twitter profiles were created to intimidate staffers at the Digital Forensic Research Lab (DFRLab). Using genuine photos of two DFRLab employees, one fake account tweeted a piece of false information indicating that the second staffer was dead: “Our beloved friend and colleague Ben Nimmo passed away this morning. Ben, we will never forget you. May God give you eternal rest. RIP!  — @MaxCzuperski.” This was then retweeted more than 21,000 times by colluding bots trying to spread the false information and shock the target.

Another way bots attack individuals involves flooding the notifications a target receives from social platforms. This produces a sort of attentional denial of service attack (a type of cyber attack meant to make a resource unavailable to a target), making it difficult to pay attention to real messages. To cope with this type of attack, journalists might flip on Twitter’s “quality filter” in their notifications settings, which “filters lower-quality content from your notifications, for example, duplicate Tweets or content that appears to be automated.” Alternatively, bots may all follow an account at the same time, or report the account as spam, triggering an account suspension. Twitter offers various routes to unsuspend an account, but this still creates friction for legitimate users who have been targeted.
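A newsroom watching for that kind of coordinated follow could look for bursts of near-simultaneous new followers. The sketch below is a simple illustration of the idea; the time window, threshold, and timestamps are invented, not any platform’s actual rule.

```python
from datetime import datetime, timedelta

# Hypothetical input: timestamps of new followers, as reported by the platform.
follow_times = [
    datetime(2018, 4, 3, 12, 0, 1),
    datetime(2018, 4, 3, 12, 0, 2),
    datetime(2018, 4, 3, 12, 0, 3),
    datetime(2018, 4, 3, 18, 30, 0),
]

def follow_bursts(times, window=timedelta(minutes=5), threshold=3):
    """Flag windows in which an unusually large number of accounts followed
    at nearly the same time -- one signature of a coordinated attack."""
    times = sorted(times)
    bursts = []
    start = 0
    for end in range(len(times)):
        while times[end] - times[start] > window:
            start += 1
        if end - start + 1 >= threshold:
            bursts.append((times[start], times[end]))
    return bursts

print(follow_bursts(follow_times))  # one burst around 12:00
```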

 

It’s difficult for journalists to grapple with many of these issues on their own. Platforms must be involved in solutions here, particularly because they are sitting on the data that’s necessary to develop robust and timely responses.

An array of signals and cues, such as the use of a stolen image, when an account was created, and the geographic distribution of its followers, along with other visualizations of account behavior, can all be helpful in determining whether an account is a bot. Platforms should be building these forensic lenses into their tools (e.g., TweetDeck in Twitter’s case) so journalists can flag suspicious accounts, and funding fellowships so that more reporters can cover the bots beat. Those flags would, in turn, facilitate future automatic bot detection and removal efforts.
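To make the idea concrete, here is a toy sketch of how a few such signals might be folded into a single heuristic score. The cues, weights, and cutoffs are assumptions chosen for illustration, not any platform’s or researcher’s actual model.

```python
# Hypothetical account record assembled from platform data.
account = {
    "created_days_ago": 12,          # very young account
    "uses_default_image": True,      # no real profile photo
    "follower_countries": ["RU", "RU", "BD", "BD", "BD", "US"],
    "tweets_per_day": 310,           # far above typical human volume
}

def bot_likelihood(acct):
    """Toy heuristic: sum weighted cues into a 0-1 score.
    Weights and cutoffs are illustrative, not validated."""
    score = 0.0
    if acct["created_days_ago"] < 30:
        score += 0.25
    if acct["uses_default_image"]:
        score += 0.2
    # Followers heavily concentrated in one country can be another weak cue.
    countries = acct["follower_countries"]
    top_share = max(countries.count(c) for c in set(countries)) / len(countries)
    if top_share > 0.4:
        score += 0.25
    if acct["tweets_per_day"] > 100:
        score += 0.3
    return min(score, 1.0)

print(f"bot likelihood: {bot_likelihood(account):.2f}")  # 1.00 for this account
```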

With enough access to data, computational journalists could even contribute directly to the development of techniques to help identify bots at scale. Journalists who observe or are the target of a bot attack might also be able to identify patterns to translate into data mining techniques that make it possible to thwart this type of attack in the future. Those covering the bots beat will be the frontline observers, able to recognize automated information attacks and help develop effective countermeasures.

A lot can already be covered on the bots beat, however, using bot detectors that grade accounts on how bot-like they appear. For instance, Quartz’s @probabot searches for accounts talking about politics and flags the ones with high Botometer scores. In a Blade-Runner-esque display where one bot hunts others, it locates accounts likely to be bots and calls them out. Bot scores from Botometer, or an alternative like DeBot, can also benefit other forms of coverage on the beat. Surfacing the amount of automated support given to a politician’s or other influencer’s messages might discourage manipulation, for example. Swiss Radio and Television (SRF) investigated bots on Instagram and showed that almost a third of the followers of the influencers analyzed were fake. The degree to which hashtags are manipulated might be something for journalists to investigate. News organizations might develop coordinated and shared resources so that member journalists don’t make the mistake of reporting on an obviously manipulated hashtag.
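An audit along the lines of SRF’s can be approximated once per-follower bot scores are in hand. In the sketch below, the handles, scores, and the 0.8 cutoff are placeholders, with the scores themselves assumed to come from a service such as Botometer or DeBot.

```python
# Hypothetical per-follower bot scores (0 = human-like, 1 = bot-like),
# assumed to come from a scoring service such as Botometer or DeBot.
follower_scores = {
    "influencer_a": [0.1, 0.9, 0.95, 0.2, 0.85],
    "influencer_b": [0.05, 0.1, 0.3, 0.2, 0.15],
}

def automated_share(scores, cutoff=0.8):
    """Estimate the share of an account's sampled followers that look automated."""
    flagged = sum(1 for s in scores if s >= cutoff)
    return flagged / len(scores) if scores else 0.0

for influencer, scores in follower_scores.items():
    share = automated_share(scores)
    print(f"{influencer}: {share:.0%} of sampled followers look automated")
```

In practice the cutoff, and how followers are sampled, would shape the headline number, which is exactly the kind of methodological detail a story on the beat should disclose.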

More coverage of this beat can only make platforms better and safer information environments for public discussion, debate, and dialogue. Combining platform resources and data with journalists’ expertise looks like the most effective path forward.


