The Media Today

The potential and peril of AI in the newsroom

October 25, 2023
Art by Darrel Frost (composite of art generated in Midjourney)


Jesus is in midair, a basketball in his right hand, sneakers on his feet. It looks like a Renaissance painting, except the ball, the net, even the hordes of people looking up at him are at uncanny angles. That’s because this image, composed by Darrel Frost for CJR’s new Business Model Issue, was developed using Midjourney, a tool for generating images with AI. Indeed, all the art for the issue started life as an AI generation, a process that presented both opportunities and challenges. “Producing an image of Jesus dunking a basketball is easy,” Frost writes. “Getting the right image of Jesus dunking a basketball is much harder.”

The technical and ethical questions around the use of AI-generated art in journalism—What are its limitations? Even if there weren’t any limitations, wouldn’t it be wrong to cut humans out of the equation entirely?—are representative of much broader debates that have played out in the media industry, not to mention wider society, as the power of generative AI tools has grown rapidly in recent months. As Hamilton Nolan puts it, also in a piece for the issue, AI has gone from “sci-fi plot point to ubiquitous consumer product in less than a year.” The technology could help journalists with day-to-day tasks—indeed, it is already doing so in many newsrooms—but, as Nolan writes, “its role needs to be carefully negotiated, its deployment guided by an ethical code that has yet to be written.” Nolan, an elected council member of the Writers Guild of America, East, sees a key role for unions in this negotiation: rallying a broad coalition around the shared conviction that “journalism must always be by and for human beings.”

Recently, Frost and Nolan met virtually (how else?) to discuss the potential and peril of AI in the newsroom, how we should talk about the technology, and CJR’s own experiments with it. Their conversation, which was typed over Google Docs, has been edited for length and clarity. —Jon Allsop


DF: “AI” is an umbrella term that covers everything from large language models to image generators to algorithms powering myriad business tools. How should we think about this variety? Are there universal ethical values that we should apply, or do we need a more nuanced approach?

HN: There are definitely universal values governing ethical journalism itself—truth, accuracy, accountability; the things we’re all familiar with—and we’re always responsible for figuring out how any new technology can be used in accordance with those values. The journalism world has had to ask itself these questions with every new invention that changed how we work, from cameras to radio to television to the internet. “AI,” inclusive of all of the things you mentioned, feels like more of a challenge than most technologies of the past, in part because it’s evolving with so much speed—it went from more or less sci-fi to widespread public adoption in the space of a year or two. The media is very much making up standards around this stuff as we go, and it’s still a work in progress. But the speed of change and the broadening capabilities of these AI tools make establishing some actual guidelines very soon even more important. Because if we don’t do it now, all this stuff gets introduced into newsrooms with very little well-thought-out guidance, and it inevitably becomes a race to the bottom. The least scrupulous actors end up defining the boundaries of what should be done. 

I think you’re right about the pace—it’s hard to be thoughtful at 186,000 miles per second—but some combination of that speed and our fear leads many conversations to end at “AI is bad,” which is reductive. What are some good uses for AI in journalism that you’ve seen (or that you can imagine)?


I don’t have a crystal ball, but it’s clearly going to be a really powerful tool for doing all types of busywork, like sorting through and summarizing and organizing large amounts of data. It seems to already have reached the level of being able to do the work of a semi-competent research assistant. That’s fine! Streamlining the logistical, busywork, back-end side of reporting seems like a perfectly good use. But I would distinguish that from the front-end, creative, thoughtful side of journalism, which is something that we as journalists should always be able to explain and be accountable for. It seems to me that there is a qualitative ethical difference between using AI as a research assistant, or using it to come up with stuff that serves as some kind of creative inspiration, and using it to actually do creative work like writing or illustration. It’s a tool, and it should always be treated as a tool, not as a replacement for the human mind. Because the “creativity” of AI itself is really just an illusion based on stealing and remixing creative work that humans have already done. 

Last week, CJR’s Yona Roberts Golding reported on how media organizations are navigating their relationships with large language models, which are trained on vast bodies of text from the internet, and trying to figure out whether and how to police their copyright as AI companies crawl their content for training data. How do you think journalists should be weighing these intellectual-property questions?

The smart move is for journalists—and authors and artists and media companies more broadly—to try to enforce their copyright claims to the absolute max right now. That is a piece of leverage over the AI companies that is going to disappear if we don’t use it. Right now, the legitimacy of their practice of scraping all this creative output with no permission or compensation still seems like an open question, legally and otherwise. But the more time goes by, the more normalized the AI companies become, the more deeply they become embedded in daily use, and the harder it’s going to be to turn back that clock in any meaningful way. So it is absolutely important to push all of these legal claims now. A victory on this issue could really rewrite the future of AI, at least in terms of who benefits from it. 

One of the key questions, as with so much on the internet, is Who controls the tools? And, ultimately, Whom are the tools benefiting? I was struck by a quote in your piece from a media executive who said that it was his “fiduciary responsibility” to use AI, as if he doesn’t have a responsibility to employees or readers. 

That is the relevant question. The reason why unions are so critical when it comes to the real-world regulation of AI is that they are the only bodies with the power, right this minute, to ensure that the control and benefit of these tools don’t rest solely in the hands of company managers and investors. Responsible news outlets that take things like journalistic ethics very seriously will want to have a thoughtful process to integrate AI into newsrooms—but there are plenty of companies in this industry (hello, G/O Media) where the owners and CEOs will just rush the implementation of AI in whatever way they think will make them the most money the fastest. We’ve already seen that happening in places: bosses have overruled their own newsrooms to publish AI-generated content that has turned out to be embarrassing and full of errors, and have then had to walk it back. There are responsible owners in media, but, as a lot of union members in this industry can tell you, there are also a lot of owners who see journalism as just another business where maximization of profit is everything. If we allow those people to have unfettered control over how AI gets rolled out, it will be a race to the bottom for all of us. Union contracts are the wall against that.

We tried to integrate AI into the CJR newsroom for this issue, actually, at least with the illustrations for each article. We share the values you’ve discussed here, so we used generative-art tools to augment our work, not to replace human creativity. (I wrote about the process here.) In general it was… exciting! It was cool to see editorial ideas literally take shape at the click of a button, allowing us to hone our concepts before hiring artists to develop them. Sure, the machine is just remixing existing human-made art, as you noted earlier, but the upside of that is that using these tools is like brainstorming with a hive mind; it’s like having a hundred assistants as you pull together a mood board. That’s pretty nice!

The downside is that the outputs are ideal as visual concepts and mediocre as actual art. We occasionally had some brilliant images pop out—even a broken AI is right twice a day—and obviously the tools will improve over time, but my main takeaway from using them over the past few months was that, as I wrote in my piece, “to call these systems ‘artificial intelligence’ would be to mislabel them.” They’re just… not that smart? Less like wands and more like chisels (and blunt ones, at that). I guess I’m surprised that anyone would think these tools are ready for prime time, regardless of the ethical questions.

It’s a really interesting experiment, and probably highlights the fact that we all have to be a little humble on AI. Not only is it hard to predict what it will evolve into, but almost none of us can honestly say that we understand the processes it uses to produce what it does now. It’s effectively a magic black box that will grow increasingly sophisticated and refined in spitting out… something. That refinement may just be that these systems become better at deceiving us by giving the appearance of thought; I don’t know. The only guideline that really makes sense to me—whether AI is crude, as you found, or much more sophisticated—is that it’s a tool that needs to stay in our toolbox. It should never take the place of a human mind. 



Darrel Frost is a designer and writer living in New York City. Hamilton Nolan is a CJR contributor who also writes regularly for The Guardian, In These Times, and Defector.