The Media Today

Is AI software a partner for journalism, or a disaster?

February 9, 2023
A newspaper with the headline "Artificial Intelligence" | Photo: Adobe Stock


In November, OpenAI, a company that develops artificial-intelligence software, released ChatGPT, a program that allows users to ask conversational-style questions and receive essay-style answers. It soon became clear that, unlike with some earlier chat-software programs, this one could, in a matter of seconds, generate content that was both readable and reasonably intelligent. Unsurprisingly, this caused consternation among humans who get paid to generate content that is readable and intelligent. And their concerns are reasonable: companies that make money creating such content may well see AI-powered tools as an opportunity to cut costs and increase profits, two things that companies that make money from content like to do.

AI in the media is, more broadly, having a moment. Around the same time that ChatGPT launched, CNET, a technology news site, quietly started publishing articles that were written with the help of artificial intelligence, as Futurism reported last month. A disclaimer on the site assured readers that all of the articles were checked by human editors, but as Futurism later reported, many of the CNET pieces written by the AI software not only contained errors but in some cases were plagiarized. After these reports came out, Red Ventures, the private equity–backed company that owns CNET and a number of other online publications, including Lonely Planet and Healthline, told staff that it was pausing the use of the AI software, which it said had been developed in-house.

ICYMI: Journalists Remain on Twitter, but Tweet Slightly Less

As CNET pressed pause, other media companies announced plans to expand their use of AI. The Arena Group, which publishes Sports Illustrated among other magazines, is now using AI to generate articles and story ideas, according to the Wall Street Journal; Arena said that it doesn’t plan to replace journalists with AI but to use it to “support content workflows, video creation, newsletters, sponsored content and marketing campaigns,” according to Ross Levinsohn, its CEO and a former publisher of the Los Angeles Times. BuzzFeed, meanwhile, said that it plans to use OpenAI’s software to develop quizzes and personalize content for readers. After that news broke, BuzzFeed’s stock more than doubled in price, a move “reminiscent of the crypto and blockchain craze five years ago when shares of a company would surge when it announced a potential partnership or entry into the popular sector,” Bloomberg’s Alicia Diaz and Gerry Smith wrote. Jonah Peretti, BuzzFeed’s CEO, told staff that the use of AI was not about “workplace reduction,” according to a spokesperson quoted by the Journal. (The Journal also reported that BuzzFeed “remains focused on human-generated journalism” in its newsroom.)

The use of AI software to create journalism didn’t begin with the rise of ChatGPT. The Associated Press has been using AI to write corporate earnings reports since 2015; such reports are often so formulaic that they don’t require human input. (Incidentally, the AP also recently asked ChatGPT to write the president’s State of the Union speech in the style of various historical figures, including Shakespeare, Aristotle, Mahatma Gandhi, and Cleopatra, who offered: “Let us continue to work together, to strive for a better future, and to build a stronger, more prosperous Egypt.”) And Yahoo and several other content publishers have been using similar AI-powered tools for several years to generate game summaries and corporate reports.
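For a sense of how mechanical that kind of work can be, here is a minimal sketch, in Python, of the template-driven approach such systems typically take: structured data goes in, boilerplate prose comes out. This is an invented illustration, not the AP’s actual system, and the company and figures are made up.

```python
def earnings_story(company: str, eps: float, expected_eps: float, revenue_m: float) -> str:
    """Fill a boilerplate earnings template from structured financial data."""
    # Pick the verb based on how actual earnings compare with the estimate.
    if eps > expected_eps:
        verdict = "beat"
    elif eps < expected_eps:
        verdict = "missed"
    else:
        verdict = "met"
    return (
        f"{company} reported quarterly earnings of ${eps:.2f} per share, "
        f"which {verdict} analyst expectations of ${expected_eps:.2f}. "
        f"Revenue came in at ${revenue_m:,.0f} million."
    )

# Example: a hypothetical company's quarterly numbers become a readable blurb.
print(earnings_story("Acme Corp", eps=1.42, expected_eps=1.30, revenue_m=512))
```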

While the practice may not be as new as some of the commentary around it would have you believe, the popularity of ChatGPT, and the quality of its output, has led to a renewed debate about its potential impact on journalism. Jack Shafer, a media columnist at Politico, is relatively sanguine about the potential of AI-powered content software to improve journalists’ work. Journalism “doesn’t exist to give reporters and editors a paycheck,” Shafer wrote. “It exists to serve readers. If AI helps newsrooms better serve readers, they should welcome its arrival.” That will be difficult, however, if the technology also leads to widespread job losses. Max Read, a former editor at Gawker, wrote recently in his newsletter that “any story you hear about using AI is [fundamentally] a story about labor automation,” whether that involves adding tools that could help journalists do more with less or replacing humans completely.


Both paths, Read wrote, “suck, in my opinion.” Indeed, those who fear the ChatGPTization of journalism don’t see the problem merely as one of labor rights. Kevin Roose, of the New York Times, described AI-generated content as “pink slime” journalism on a recent episode of the Hard Fork podcast with Casey Newton, using a term that more often refers to low-quality meat products. The term “pink slime” has been used to describe low-quality journalism before, as Priyanjana Bengani has documented exhaustively for CJR; by using it to refer to AI-powered content, Roose and others seem to mean journalism that simulates human-created content without offering the real thing.

Experts, meanwhile, have said that the biggest flaw in a “large language model” like ChatGPT is that, while it is capable of mimicking human writing, it has no real understanding of what it is writing about, and so it frequently inserts errors and flights of fancy that some have referred to as “hallucinations.” Colin Fraser, a data scientist at Meta, has written that the central quality of this type of model is that “they are incurable, constant, shameless bullshitters. Every single one of them. It’s a feature, not a bug.” Gary Marcus, a professor emeritus of psychology and neural science at New York University, has likened this kind of software to “a giant autocomplete machine.”
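To make the “autocomplete machine” analogy concrete, here is a toy sketch, in Python, of a bigram model, the crudest ancestor of systems like ChatGPT. Everything in it is invented for illustration, and real large language models use neural networks with billions of parameters, but the basic loop is the same: predict a plausible next word from the statistics of training text, append it, and repeat, with no model of whether the result is true.

```python
import random
from collections import Counter, defaultdict

# A tiny invented "training corpus," tokenized into words.
corpus = (
    "the model predicts the next word . "
    "the model has no understanding of the words . "
    "the next word is chosen by probability alone ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:  # dead end: no word ever followed this one
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# The output reads as locally fluent but is assembled with no grounding in
# facts, which is one reason such systems can "hallucinate."
print(generate("the"))
```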

Newton wrote in a recent edition of his Platformer newsletter that some of the functions for which ChatGPT and similar software will be used probably aren’t worth journalists worrying about. “If you run a men’s health site, there are only so many ways to tell your readers to eat right and get regular exercise,” Newton said. He wrote in a different edition of the newsletter, however, that these software engines could also potentially be used to generate reams of plausible-sounding misinformation. Dave Karpf, a professor of internet politics at George Washington University, wrote that the furor over ChatGPT reminds him of the hysteria around “content farms” in 2009 and 2010, when various companies paid writers tiny sums of money to generate content based on popular search terms, then monetized those articles through ads. As Karpf notes, the phenomenon appeared to spell disaster for journalism, but it was ultimately short-circuited when Google changed its search algorithm to downrank “low quality” content. (“Relying on platform monopolists to protect the public interest isn’t a great way to run a civilization,” Karpf wrote, “but it’s better than nothing.”)

Unfortunately, in this case, Google isn’t casting a skeptical eye toward AI-generated content; it is planning to get into the business itself, and this week it unveiled a new chat-based model called “Bard.” (Shakespeare obviously wasn’t busy enough writing the State of the Union.) Nor is it just Google: Microsoft is also getting into the AI software game, having recently invested a reported ten billion dollars for a stake in OpenAI, the ChatGPT creator. This raises the possibility that search engines, which already provide answers to simple questions (What is the score in the Maple Leafs game?), could offer more sophisticated content without having to link to anything, potentially weakening online publishers that are already struggling. Then again, the technology remains fallible: in a promotional demo this week, Bard gave a factually inaccurate answer about the James Webb Space Telescope, and shares in Google’s parent company, Alphabet, slid.

While there are some obvious reasons to be concerned about the impact of AI software on journalism, it seems a little early to say definitively whether it is bad or good. ChatGPT seems to agree: When I asked it to describe its impact on the media industry recently, it both-sidesed the question in fine journalistic style. “ChatGPT has the potential to impact the media industry in a number of ways [because] it can generate human-like text, potentially reducing the need for human writers,” it wrote. “But it may also lead to job loss and ethical concerns.”


Other notable stories:

  • Yesterday, Evan Lambert, a reporter for the TV network NewsNation, was told to stop broadcasting, and then arrested, during a press conference held by Mike DeWine, the governor of Ohio. Lambert was charged with criminal trespassing and disorderly conduct, according to the Washington Post; it’s not entirely clear what led to Lambert being detained, the Post reports, but video footage of the incident appeared to show him complying with the order to stop filming. A spokesperson for DeWine said that the governor had been told that Lambert was ordered to stop because “the volume of his reporting was perceived to be interfering with the event,” a rationale from which DeWine firmly distanced himself.
  • Also yesterday, the House Oversight Committee grilled former executives from Twitter on the platform’s suppression of a New York Post story about Hunter Biden’s laptop prior to the 2020 election. The executives conceded that their handling of the story was a mistake, but flatly denied Republican claims of collusion with the FBI or Joe Biden’s presidential campaign. And in other ways, CNN’s Oliver Darcy writes, the hearing “backfired in spectacular fashion” for the GOP—airing claims that Twitter accommodated Trump and that Trump himself made censorious requests of the company.
  • According to New York’s Andrew Rice, James O’Keefe, the founder and guiding light of the right-wing sting group Project Veritas, has gone on paid leave and could be ousted as the group’s leader by its board, which is set to meet tomorrow. O’Keefe’s future with the group “has become uncertain amid reports of internal turmoil, lawsuits from former employees, leaks about its internal workings, and a federal investigation into its conduct in purchasing a diary stolen from Ashley Biden, the president’s daughter,” Rice reports.
  • In 2021, Ozy, a media company, collapsed after Ben Smith, then the media columnist at the New York Times, raised serious concerns about its business practices. Max Tani reports for Semafor (where Smith is now the editor in chief) that Ozy is now seeking a comeback: Carlos Watson, Ozy’s founder, pitched potential advertisers and investors in New York yesterday, without mentioning the company’s “extremely public implosion.”
  • And Senator Mitt Romney gave McKay Coppins, a writer at The Atlantic, access to reams of his private correspondence for a book that Coppins is writing about Romney, Axios’s Mike Allen reports. The volume of material “is unheard of for a major sitting officeholder” to give away, Allen writes: “a trove historians dream of but rarely get.”

ICYMI: Rewire News Group’s editors on abortion coverage, Supreme Court reporters, and TikTok


Mathew Ingram was CJR’s longtime chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.