You may have heard the alarming news that world-changing artificial intelligence is coming very soon. Last month, the New York Times columnist Ezra Klein published a conversation with Ben Buchanan, the Biden administration’s AI adviser, in which Klein said, “Person after person—from artificial intelligence labs, from government—has been coming to me saying: It’s really about to happen.” According to Klein’s sources, artificial general intelligence, or “AGI,” which is generally used to refer to an AI system that can match or exceed humans at any cognitive task, will arrive in the next two to three years.
Shortly afterward, the Times technology columnist Kevin Roose warned that journalists, institutions, and civilians—everyone, basically—are underestimating AGI. “I’ve come to believe that what’s happening in AI right now is bigger than most people understand,” he wrote. The prominent tech journalist Casey Newton had already argued that skeptics of AI downplay its capabilities at their own risk. Recently, The New Yorker’s Joshua Rothman took a similar view, urging those outside the tech industry to enter the discussion around AGI. All of these journalists are issuing a call for attention that is directed particularly at other progressive journalists and policymakers. AGI has become the biggest, most urgent, most consequential story taking place right now, they say. Ignore it at your peril.
How literally should we take these predictions? Max Read, a journalist and tech critic who publishes the newsletter Read Max, offers a stepped-back perspective. Read has connected the latest shift in mood to a cycle of AI hype and skepticism going back to 2022, when OpenAI publicly released ChatGPT, and insists that journalists be precise about what they mean when they publish predictions about AI. Recently, I talked to Read about who the AI skeptics really are, what “AGI” means, and who benefits when journalists lend credibility to industry hype. Our conversation has been edited for length and clarity.
CB: The latest pendulum swing of the AI hype cycle, as you’ve identified it, is a call from prominent journalists to expect the arrival of so-called AGI. Their argument is that AI is now the biggest, most urgent story of our time. Do you agree with that?
MR: I don’t fault anybody who wants to make that bet. But it’s also hard to read the news and feel confident that, looking back fifty or a hundred years from now, AI is going to be seen as the most important thing. I don’t want to downplay the advances that are being made, and I really don’t want to downplay how this technology is being integrated into politics, business, academia, and all kinds of important fields in ways that we won’t fully see for twenty-five years. But I’m a little skeptical that, in the era of a ramp-up to a second Cold War, ongoing genocide in Palestine, and the dismantling of the American state, we’re necessarily going to think it was all these chatbots that turned out to be the biggest thing.
What about the argument that although we can’t be sure what AI will bring, it would be a serious mistake to underestimate it?
I don’t think journalists have been underestimating AI. It doesn’t seem to me that there’s a homogeneous reaction. There are journalists who are skeptics on the technical level: This stuff can’t do what they say it’s going to do. There are journalists who are skeptical on the financial, business-model level: This stuff is impressive, but nobody’s ever going to make a buck off of it. There are journalists, as well, who are hyped on one or both of those levels. It’s worth pointing out that the most prominent tech and tech-related columnists at the biggest papers in the country are unquestionably non-skeptics. The Times, just to take the most obvious example, has been a huge driver of this most recent AI hype cycle. You look at any other publication on the same establishment level, and they have taken a similar attitude. There are a lot of people on Twitter and Bluesky who are deeply anti-AI, who are holding a hard line of skepticism, maybe less for empirical reasons than for political ones. If you are Kevin Roose, probably every time you tweet about AI, your mentions are filled with people saying this technology is a Ponzi scheme, it’s bullshit, it’s a scam, it doesn’t work. So I think it’s really easy to imagine that you’re facing down a particular kind of skepticism.
Where I see a ton of skepticism is from writers, especially novelists and literary writers, really challenging the idea that AI can replace human writing in any meaningful way. Journalists are writers, obviously, and we like to think that we surface novel information that doesn’t already exist and isn’t in the public record yet. So I wonder if we tend to have a reflexive skepticism about the capabilities of AI and its potential ability to replace us, because that’s undesirable.
That’s the other group of people that I see being vehemently anti-AI. Brian Merchant’s work is useful here because he frames this criticism in terms of Luddite movements, where you are taking strong anti-technological stances—not because the technology is fraudulent or doesn’t work but because it meaningfully affects your ability to make a living. I see artists, illustrators—especially those who work for commission—having the same general feeling about AI that writers do. I also see it to some extent in programmers. It’s hard for journalists to trust that the tech industry has our best interests in mind. You will find many tech people openly saying, This is going to put you out of business, and I’m glad. It’s good for the world. Also, if you have been working in this industry for the past twenty years, you’ve lived through two waves of disruption—through massive venture capital investments in digital media and through the so-called pivot to video, which have not transformed the practice of doing journalism even as they have worsened the business at every turn. All of us have in very close memory an understanding of what happens when software companies get involved, and it’s never a good thing.
Nicholas Thompson, the CEO of The Atlantic, has talked about doing a deal with OpenAI. During the pivot to video, he said, media companies made the mistake of adapting their content to Facebook’s agenda, thus leaving themselves vulnerable when Facebook’s priorities changed. Now he’s saying, let’s make a deal where media companies benefit from the content they are already producing and can actively negotiate how AI becomes part of the media business.
I understand why management would take that position; strategically, I think it is probably the smart move. We know our stuff is getting stolen. Not everybody has the resources of the New York Times [which is suing OpenAI, the maker of ChatGPT, for copyright infringement] to fight back. I hope the Times is successful, because it will set a precedent and help the rest of us. I would like to think The Atlantic would want to do the same thing; I also see that you don’t necessarily want to get caught in a five-year lawsuit. But what is good for management, what is good for the institution of The Atlantic, is not necessarily good for writers. Last year, I wrote a piece for New York magazine on AI slop. Vox Media, which owns New York, also has a deal with OpenAI to license articles. I asked to be excluded from that license. I had to get on the phone with a bunch of lawyers at Vox to basically be told that not only could I not be excluded from this license, but I also couldn’t be told what my work was being used for. I signed off on it because I needed the money, but I found it pretty troubling. It reminds me of the way freelancers used to keep the movie and TV rights to their work; increasingly, magazines are trying to grab that extra little bit of dough for themselves.
The other problem is the total lack of transparency. How much money is my individual article worth? OpenAI has apparently been offering one million to five million dollars to license archives, which, let’s be honest, is no money at all. I did some back-of-the-envelope math on what high-quality text is worth in a training corpus. The article that I wrote for New York is worth maybe five to fifty dollars to OpenAI. One reason writers don’t have power is that you have to have a big corpus of text, the way the Times or New York or The Atlantic does, for it to be worth any kind of money. It needs to be much clearer to writers, to readers, to subscribers, to editors: What exactly are the dollar figures involved? What models are being trained on corpuses that include my text? How is my text going to be identified? All that stuff. I can see why institutions feel they have to make this call. But as a writer, I feel frustrated. What it seems like to me is that more people are finding ways to make money off of my writing without me seeing any of it.
You have beef with the term “AGI.” Can you tell me why?
I don’t think it means anything. There is no consistent definition that is widely accepted and that has a measurable component where you can say, “We have reached AGI.” I have trouble with the way AGI is talked about as a threshold. People think of it like Dr. Frankenstein throwing the switch and waking up the monster. This is nonsensical; the idea that there’s a moment when everybody in the world is going to wake up and all agree, Oh my God, we’ve reached AGI—which is derived from a bunch of movies and stories we’ve read—is a fundamental misconception of what’s happening in the AI space. What’s worse is that it confuses the reader. We need to have a lot of clarity about what exactly we’re talking about. The process by which AI systems develop, are deployed, and become important factors in business, culture, and politics is ongoing. We should be paying attention to the ways it’s already happening.
My one hope is that we can leave the concept of AGI in the rearview mirror. I concede to Ezra and Kevin that this is how people in the AI industry talk about it. But part of our role as journalists should be to put some distance between ourselves and the people in the industry, not uncritically adopting the frameworks that they’re using to talk about their products. My final critique is that there are people who directly benefit from hyping up AGI. When you imply that we’re on the precipice of an economically transformative technology, you are helping their bottom line. You’re helping them raise money. OpenAI’s deal with Microsoft allows it to sever its relationship with Microsoft when the OpenAI board declares that AGI has been achieved. So to the extent that you use your platform to say that AGI is a real thing, you are doing Sam Altman [the CEO of OpenAI] a big fucking favor. Those are some good reasons to be thinking about the language we use. You can drop AGI as a framework without saying that AI doesn’t work, that it’s not powerful, that it’s never going to make money. You can say all that stuff—just don’t say, Are you feeling the AGI? Don’t say AGI is coming. You become part of the industry when you do that.
This technology is specifically geared toward powerful people and elites in a way that hasn’t been true of other internet software developments. It doesn’t really matter if the president is all in on Facebook or not; it’s gonna live or die based on whether or not users sign up. But there are a lot of bosses and politicians and CEOs and investors salivating over this technology because they are hearing that it’s going to allow them to cut labor costs. Whether or not AI is being used to replace a bunch of people’s jobs is going to be the result of decisions made by a small handful of people with whom journalists still have a fair amount of influence.
The way that I have seen AI described most convincingly and concretely—the way that makes sense to me—is as a very big and fast extension of mass media. It’s an information system that allows us to build on existing human knowledge. Seeing it that way, you avoid the common conflation between AI and intelligence—the belief that AI is a separate intelligence that is going to be better than human intelligence—which seems to amplify a lot of the problems that you’re pointing to.
There are two pieces that I think are really good on this. One is by a Meta engineer named Colin Fraser, and it’s called “Who are we talking to when we talk to these bots?” The other article is by Henry Farrell and some coauthors, called “Large AI models are cultural and social technologies.” Their point is that it’s useful to think about large language models as something more akin to the printing press; that what we have developed is a possibly transformative way of archiving, containing, compressing human knowledge in a queryable fashion. I like this metaphor. If you want to make a case for AI being the biggest story going, it’s not in the sense that we’ve invented a new intelligence. It’s that we built the next printing press, or the next internet.
What do you think is missing in the conversation around AI? What would you like to see someone report on?
I would love to see somebody prominent who thinks the AI industry is globally transformative do a piece about why this is true and crypto wasn’t. Why did crypto not work out the way everybody promised it would? And why should you trust the promise that AI will work out when we had an insane Web3 fever dream that just collapsed? I think somebody should take that on.