The question of whether AI will change the news has kept journalists on their toes since the release of ChatGPT. Some hope it will be the next big thing that swoops in to "save" journalism, while others fear it will replace it. In the past month, several newsrooms have teamed up with technology companies to use their AI tools: Microsoft is collaborating with Semafor to create news stories using an AI chatbot, and Google is paying local newsrooms to test an unreleased generative AI platform, according to Adweek.
Tow fellow Felix Simon, a doctoral student at the Oxford Internet Institute, recently published a report examining the impact of AI on newsrooms. Despite the hype, he finds that many of the most beneficial applications of AI in news are relatively mundane. Drawing on over four years of research and 170 interviews with industry experts, Simon argues that, for now, we're witnessing more of a retooling of the news through AI than a fundamental change in the needs and goals of news organizations.
Simon's report also highlights the risk of newsrooms becoming increasingly reliant on third-party AI offerings as the gap between newsrooms and technology companies narrows. While well-resourced publishers like the New York Times can invest in in-house AI development, most newsrooms depend on technology companies due to the high associated costs. Simon calls these "lock-in" effects, which shift power toward technology companies and risk diminishing newsroom autonomy.
Ultimately, Simon concludes that the future remains uncertain. The impact of AI on the news will largely depend on decisions made by news organizations and managers. Read the full report here.
Tow interviewed Felix Simon about how AI will reshape journalism, the uneven benefits for newsrooms, and the implications of relying on technology companies for AI.
This conversation has been edited for length and clarity.
You began interviewing experts in July 2021, long before ChatGPT set off the hype around generative AI. What led you to dedicate your dissertation to AI in journalism?
I wanted to do this report because it tied in nicely with my Ph.D. work at the Oxford Internet Institute. I started that under the tutelage of Gina Neff and Ralph Schroeder in October 2019. In the beginning, I wasn't all that excited by AI. It was interesting enough to research, but it didn't necessarily strike me as a big deal. Over time, I realized there was something there, that this was a topic that could make a big difference to journalism and the information environment. So I gradually fell in love with my thesis topic, even though I was initially skeptical.
How stressful was it to adapt your research to the rapid developments in AI?
Not actually that stressful, because to me, generative AI is mostly an extension of things that have already happened. The way generative AI and large language models get applied in a journalistic context is not that different from previous forms of AI. Granted, it has capabilities that were beyond reach with previous forms of AI, and its accessibility is a big leap forward. But it is still sort of stacked on top of existing approaches. And that's also what I argued in the report. It's more of a retooling. It doesn't necessarily change the underlying needs and goals of news organizations; it changes the means.
You mention that the adoption of AI is pushing news organizations and the public sphere toward greater rationalization and calculability. Could you elaborate on the meanings of these terms?
Rationalization is one of the core themes running through the work of German sociologist Max Weber. It's the idea that through the application of scientific methods and cold, hard technology in everyday life, we become more rationalized and more driven by a logic that is all about organizing life according to principles of efficiency and predictability. It's the act of removing human autonomy and our own ability to freely decide and make choices based on, for example, our gut feelings. AI can be seen as a technology that does these things, too. And as these technologies become more pervasive, they form an "iron cage" around us from which it is harder to escape, so to speak.
With large language models or chatbots this might be harder to see, as they come across as something we can easily communicate with in human terms. As sociologist Elena Esposito says, it's a form of artificial communication, but underneath it is still a technology imbued with scientific methods. And as we can currently see, these systems are moving more strongly into news work and the digital infrastructures that make up the public arena, our information ecosystem. To varying degrees, that removes the human from the production of information. Only the future can tell how much that matters, but it strikes me as a significant development.
A significant portion of the report highlights the risks that emerge when news organizations become overly reliant on technology giants such as Google and Meta for AI. Could you provide some examples of why this heavy dependency poses challenges for journalism and public discourse?
The idea in journalism circles is that a core feature of journalism is its autonomy from other actors: the state, businesses, or parts of the public. In the last two decades, journalistic institutions have become dependent on technology companies and their digital infrastructures for distributing content and reaching audiences.
The argument I'm making is that AI is accelerating and exacerbating that existing dependency on the technological level. And the question, of course, is: Does this matter? I think it does to some extent. Suppose news organizations, at some point in the future, rely fairly strongly on these AI services and infrastructures for lots of their work but don't control them. In that case, if tech companies decide to raise prices or change the conditions of use, or if it's impossible for you to understand how consistent and reliable these systems are, then that's something that can hinder you as an organization from doing your work. And that has knock-on effects on the kind of news that ends up in the public arena, and could contribute to a situation where we have even less quality information than we do at the moment.
Using their AI might also weaken the news as an institution, because tasks that once were central to the news are taken over by platform companies. Take Google. Their stated aim is to "organize the world's information and make it universally accessible and useful"… and to make money off that. Large-scale AI systems for information processing and retrieval are key here. And it is vital to continually improve them. This can happen through more data, or through publishers using their systems and helping to train them.
You found that many of the innovations AI brings to news production are relatively mundane compared with the hype. Currently, the most effective use cases include dynamic paywalls, automated transcription, and data analysis tools. Will more exciting and groundbreaking tools emerge with time? Or is the field of journalism not as easily automated as some other, more technical domains?
The first thing I should say is that my sample is limited. I'm only looking at three countries and certain large organizations. There's a whole world outside of the US, UK, and Germany. Having said that, there is potential for this technology to be used in new and innovative ways beyond things such as transcription. I expect that we will see other uses becoming more streamlined and moving out of the experimentation phase into the implementation phase. For example, take Rappler in the Philippines and the Daily Maverick in South Africa, which introduced AI-generated summaries of longer news content. Go back just three years and that would have sounded magical, because the AI models weren't quite there yet. Now we just shrug at such things because they feel so normal.
I think that's a key thing to remember here: We constantly move the goalposts around what counts as exciting, groundbreaking, or innovative, and there is probably a lot more to come. Then again, some things will always resist automation. Building a human connection when reporting is something you cannot simply let a machine do.
Your report indicates that AI disproportionately benefits large and affluent newsrooms, while smaller news organizations, particularly those in the Global South, struggle to keep pace. What are the consequences of some newsrooms becoming AI-savvy while others remain in the dark?
The consequence is that if you're able to make use of the technology in a smart way for your organization early, you potentially have an advantage over others. And there's a chance you will emerge as what I call a "winner." If you can do that, say, a year earlier, that potentially allows you to pull ahead of competitors. Now, this is not a hard-and-fast rule. There will always be exceptions. There are lots of newsrooms in the "Global South" and elsewhere (national, regional, local) that are quite inventive and can do something with technology without having lots of financial resources. So it's not necessarily a foregone conclusion. But the likelihood that you can fully use this technology to your advantage, and also negotiate better terms with platforms, is greater if you have a big budget, time to experiment, and resources to spare.
You conducted an impressive number of interviews for this report: about 170, with journalists, academics, and other news industry experts. Is there something you took away from those interviews that wasn't necessarily covered in the report?
I think there's an overwhelming sense of… I don't want to say fear. Fear is probably too strong a word. But this sort of… dread, yes. That all this goes the same way as the World Wide Web and social media: seemingly limitless opportunities at the beginning, quickly followed by a rude awakening. Even among the people who were optimistic about the technology, I got this sense that they were still quite worried about the future of the news. This collective sense of "This could go awfully wrong if we don't take the right steps." And who can blame them? Of course, there was big disagreement on what the right steps are and who should take them.
The one thing I didn't find was any sort of discussion of whether [generative AI] is conscious, or whether it is an extinction risk if we keep developing artificial intelligence. I can't remember a single interview where someone seriously stood by that point. Which is interesting, because it's a strong juxtaposition with some of the media coverage on the topic. Most of them just saw it as an extension of knowledge work or as a tool, but not as this thing that's on a surefire trajectory to superintelligence that will replace us within three years.