In 2016, the Oxford English Dictionary made ‘post-truth’ its word of the year, signaling a turn in Western politics towards not just competing narratives based on a shared understanding of the facts, but competing facts themselves. The ‘paranoid style in American politics’ that Richard Hofstadter wrote about in 1964—of a conspiratorial mindset baked into the body politic—has been supercharged by the internet and spread around the globe in recent years. From headlines about Covid’s origin to outright conspiracy theories that reptilians control the earth, most people have been exposed to false information online. Just this week, a survey by Savanta for King’s College London and the BBC found that almost a quarter of the UK population believes Covid-19 was probably or definitely a hoax.
Sander van der Linden, a social psychology professor at the University of Cambridge, studies how to “prebunk” or “inoculate” people against misinformation and conspiracy theories. In his new book, Foolproof, van der Linden compares misinformation to a virus that can infect people and spread within and between networks. The more it’s shared online, the more transmissible it becomes. Research shows that once someone is exposed to misinformation, it can latch onto the brain and dig deep into the unconscious, making it extremely difficult to undo.
Fact-checking people about their misbeliefs after exposure isn’t always the most effective way of correcting them. In some cases, it can even radicalize them further. That’s why van der Linden advocates for ‘prebunking.’ Prebunking exposes people to a weakened dose of the strategies used to manipulate them; actively engaging with that content generates mental antibodies that protect against false information. As van der Linden describes it in the book, it’s better to “play offense rather than defense.”
Van der Linden has also created the online game Bad News, in which the player is tasked with producing fake news content on a Twitter-like platform using common manipulation strategies—for instance, false dichotomies and impersonation—while gaining as many followers as possible. I recently talked with van der Linden about his new book, why news that is merely suggestive can sometimes be more harmful than entirely fabricated stories, and how innovations in AI can influence misinformation. Our conversation has been edited for length and clarity.
SG: First of all, what motivated you to write Foolproof?
VDL: One, as psychological scientists who are funded mostly by people’s tax money, I thought we needed to inform people about the findings that are relevant to their daily lives. Two, half the book is about the problem, but I also wanted it to have solutions and to give people practical tools and tips. Three, I think it’s useful for people to know what’s going on inside social media companies and what governments are doing. I think my experience working with these actors outside of academia would give some insight into what is happening and why it’s such a difficult problem to address.
The metaphor of misinformation as a virus runs throughout the book. What about that idea compelled you?
When I first started, I was very into modeling human behavior. One of the interesting things that I was studying was the propagation of information on the internet and social media. There are a lot of interesting papers in physics and other fields that use models from epidemiology to study how information spreads in networks. It turns out that you can use the exact same models that are used to study the spread of viruses to study the spread of misinformation.
If you think of news as a contagion, then there are nodes and links. Patient Zero gets infected with a fake news story, and they talk to somebody else, who then gets the virus. You can apply those models to understand the spread of misinformation on social media. Sometimes they need to be a bit more complex, adapted to have a few more parameters and things like that. But the main point is that the analogy is literal: the same models apply.
I was intrigued that, just as the body needs to see a lot of copies of a potential invader to mount an effective immune response, a lot of experimental research shows it works the same way with the human mind: if you want to build immunity, people need to know what manipulation looks like.
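To make the contagion analogy concrete, here is a minimal, illustrative sketch, not taken from van der Linden’s research, of an SIR-style simulation in which a fake news story spreads through a simulated social network. The network structure and every parameter (contacts per user, sharing rate, drop-off rate) are assumptions chosen purely for illustration.

```python
# Illustrative sketch: SIR-style spread of a fake news story on a random network.
# S = hasn't seen the story, I = actively sharing it, R = seen it, no longer sharing.
import random

random.seed(7)

N_USERS = 2000          # people in the network (assumed)
CONTACTS_PER_USER = 8   # links each user can pass the story along (assumed)
P_SHARE = 0.06          # chance a sharer passes the story to a given contact (assumed)
P_STOP = 0.15           # chance a sharer stops sharing each step (assumed)
STEPS = 60

# Random contact network: each user gets a fixed set of contacts.
contacts = {u: random.sample(range(N_USERS), CONTACTS_PER_USER) for u in range(N_USERS)}

status = ["S"] * N_USERS
status[random.randrange(N_USERS)] = "I"    # Patient Zero gets "infected" with the story

for _ in range(STEPS):
    sharers = [u for u in range(N_USERS) if status[u] == "I"]
    for u in sharers:
        for c in contacts[u]:
            if status[c] == "S" and random.random() < P_SHARE:
                status[c] = "I"            # the contact catches the story
        if random.random() < P_STOP:
            status[u] = "R"                # the sharer loses interest

exposed = sum(1 for s in status if s != "S")
print(f"{exposed} of {N_USERS} users were eventually exposed to the story")
```

Adding a few more parameters, for belief change, correction, or inoculation, is the kind of adaptation van der Linden mentions above.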
You mention that news that is suggestive without being completely false can be the most dangerous form of misinformation. Why is that?
Studies have tried to quantify how much fake news exists, and the results are somewhat misleading. Some claim only one percent of the overall news ecosystem is fake; others say it’s more like one to ten percent. But that’s based on a ridiculous definition of fake news: entirely fabricated stuff. While some people believe the earth is flat, that’s not the bulk of the problem. The real issue is biased or manipulated news that uses some of these techniques and often revolves around a grain of truth. That constitutes a much larger portion of people’s media diet.
I give an example in the book of a story from the Chicago Tribune titled “A ‘healthy’ doctor died two weeks after getting a Covid-19 vaccine; CDC is investigating why.” A healthy doctor did die two weeks after getting the Covid vaccine, so from a pure fact-checking perspective, the headline wasn’t false. The Chicago Tribune is also not on any list of untrustworthy websites, and the story doesn’t fit the mold of the usual fake news story. But it uses the manipulation technique of connecting two unrelated events and making them seem related in order to influence people’s feelings about the vaccine. These stories are more difficult to address because you can’t just say, ‘Oh, that’s totally crazy.’ You have to deal with the fact that there’s a grain of truth, even though the story is misleading.
The surge of new AI tools such as ChatGPT and the image generator DALL-E has spurred worry about a new type of misinformation and fake news. What are some ways to address that?
We actually used ChatGPT, before it was cool, to generate fake headlines. We were trying to solve an unrelated problem: how to expose people to fake news they’d never seen before, to rule out any memory-familiarity confound in our research design. So we had ChatGPT generate hundreds of misinformation headlines based on conspiracy theories. It was remarkable what it was spitting out: headlines that seemed real but were misleading or false.
We used that to validate a scale called the ‘misinformation susceptibility test,’ the first standardized psychometric test people can take to understand how susceptible they are to misinformation. So while AI is part of the problem, it can also become a solution by helping people understand their susceptibility.
On a more practical note, ChatGPT is efficient at generating prebunks, and it understands inoculation well. Some people have thought about using it for debunking and fact-checking. So I think we can leverage AI to mass-produce beneficial or accurate content. But it will always be a race, because some people will try to use it for deceptive purposes, such as generating misinformation. That’s why we need to think about prevention.
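As a closing illustration of that point, here is a minimal sketch, not van der Linden’s tooling, of how one might prompt a large language model to draft a prebunk for a known manipulation technique. It assumes the OpenAI Python SDK is installed and an API key is configured; the model name and prompt wording are illustrative choices.

```python
# Illustrative sketch: asking a chat model to draft a short "prebunk" that warns
# readers about a manipulation technique before they encounter it in the wild.
# Assumes the OpenAI Python SDK (openai>=1.0) and the OPENAI_API_KEY env variable.
from openai import OpenAI

client = OpenAI()

technique = "connecting two unrelated events to imply one caused the other"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice; any capable chat model would do
    messages=[
        {
            "role": "user",
            "content": (
                "Write a short, neutral prebunk (under 100 words) warning readers "
                f"about this manipulation technique: {technique}. Explain how to "
                "recognize it, without referencing any specific news story."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```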