In early May, Aspen Digital—a program of the Aspen Institute, a nonprofit organization devoted to the discussion of social issues—convened nearly a hundred news executives, editors, representatives of tech companies, and others for a full-day meeting to discuss the role of artificial intelligence in the news industry. The event featured panels on everything from copyright law to newsroom tools, as well as case studies of newsrooms experimenting with AI amid the technology's explosive growth and rapid adoption by the public. The Aspen Institute hired me to produce a report on the findings. This newsletter is a prelude to that.
One thing that became clear from the discussions with industry experts, both at the conference and in a series of small group discussions prior to the event, is the free-floating anxiety in the media business about the impact of AI. In some cases, the concern is about AI tools scraping copyright-protected content to replace referral traffic with search-based results (a problem I wrote about last week for CJR). In others, the fear is that content created by AI will swamp the internet and overwhelm fact-based reporting with synthetic text, images, and videos.
While we are still in the early stages of the AI revolution, these concerns clearly have some basis in fact. Every new technology comes with risks; how those risks are handled will determine whether the changes to the information ecosystem work in journalism's favor. A framing principle for the AI discussion was that while the challenges are real, these technologies also represent a huge opportunity, just as the internet itself—or the telegraph or electricity—once did. It's a tool that news companies can turn to their advantage, if they approach it in a strategic way.
Below are ten key questions that emerged from the virtual discussion sessions and the panels and lightning talks at the conference, and how news publishers might think about them. (The meeting was held under the Chatham House Rule, but some participants agreed to be quoted.)
- What are we trying to save?
Gina Chua, the executive editor of Semafor, asked the attendees to think about what it is that they are trying to save when they think about the risks of AI. Is it the journalism business and the jobs of individual journalists, or is it the practice and impact of journalism on society, regardless of who is doing it? “We’re not here to save journalists,” she said. “We’re here to save the public space, the public civic information space. My question is: How do we make information better for people and for communities?”
- Sue or license?
Some publishers (including the New York Times, The Intercept, and eight newspapers owned by Alden Global Capital) have chosen to sue AI companies for copyright infringement, while others (including News Corp, the Associated Press, Axel Springer, the Financial Times, Dotdash Meredith—and, as of yesterday, The Atlantic and Vox Media) have signed licensing deals with such companies. Publishers that have chosen the licensing route say they have done so because they want an opportunity to help influence how their content is attributed by these new AI products. Those who are suing believe they need to set a fair value for their journalism. But some of the expert participants argued that the legal route has a much lower chance of success than many may be hoping, thanks to the nuances of US copyright law.
- Is disintermediation the future?
The broader issue behind the fear of copyright infringement and the decision to license content is the prospect that AI companies will disintermediate (or take the place of) news outlets and journalists even more than social media platforms have already. Technology companies claim to respect journalism and to want to help it, including by linking to original sources, but some publishers fear that clickthroughs (if they appear at all) may not be enough—and that AI companies and platforms will deliver everything a news consumer needs, using information scraped from the labor-intensive output of publishers. Will readers visit publishers’ websites or apps if a fast, free, and easy-to-use AI aggregator can give them similar information?
- What’s your North Star?
Whichever route you choose on the copyright question, our experts say you need to define the terms of your engagement with AI. This includes defining what is not up for grabs—the principles on which you will not budge, or that provide your company’s “North Star.” This could be the trusted relationship you have with your audience, your place as a news destination, or your ethical standards—such as a commitment that human editors and reporters will always be involved in whatever your audience receives, or a commitment to be transparent about your use of AI. Once you establish those guardrails, some of our panelists said, then everyone in the organization can feel more comfortable experimenting.
- What does the audience need?
Whether we like it or not, AI is going to change the way people find information, just as the arrival of the internet did. Our participants agreed that publishers should understand how AI is influencing audience behavior and test new ways of meeting news consumers’ expectations and needs—whether it’s by designing a chat interface that allows them to interact with a publisher’s own news content, training their own large language models, or experimenting with customized agent-based news delivery. In a broad sense, it’s important to focus on learning more about the changes that AI is bringing and adapt to them. “It’s easy to come up with a dystopian version of AI,” Mark Thompson, the chief executive of CNN, said, “but the idea of being better at meeting people’s needs…is really exciting.”
- What’s the low-hanging fruit?
Our panelists suggested that there are small-scale, low-risk AI experiments that almost any publisher can engage in right now. Speakers including Sonal Shah, the CEO of the Texas Tribune, talked about using AI for personalization, including the instantaneous translation of stories into multiple languages, turning text articles into audio, and summarizing long-form stories for social media. As Julie Pace, the executive editor of the AP, put it in one of our virtual discussion sessions, some potential uses of AI are so basic that "if we don't take advantage of them, then shame on us."
- How to engage in nonstop adaptation?
As CNN’s Thompson put it, those in the media industry often see technological change as a specific challenge that needs to be overcome, after which the industry will inevitably return to some kind of equilibrium. But that’s not how the news business works anymore. Participants agreed that it’s important to maintain an attitude of adaptability rather than just approaching each of the issues around AI as an individual problem to be solved. Successful media companies, many said, will be the ones that understand the need to constantly evolve, while staying true to what they do best.
- What does AI mean for trust?
AI-generated content often plays fast and loose with the facts. Some of the panelists see this as an opportunity for news outlets to promote the high quality of their information based on what they expect will be a greater appetite for trusted sources, and to see trust as a valuable currency in the information ecosystem as it evolves. "As a news organization, our mission is to serve the greater good of our communities; at the heart of this value is our relationships with our audiences," Lauren Fisher, senior vice president and chief legal officer at TEGNA, said. "And if we allow unchecked and uncorroborated AI-generated information to seep into our newsgathering and storytelling, that will erode the trust that is the foundation of those important relationships." John Borthwick, of Betaworks, went so far as to say that general news and artificial intelligence should not mix at this point, "until AI has a better track record for attribution."
- What are your strengths?
The consensus among participants was that media companies should identify their strengths and play to them. If you specialize in long-form investigative reporting, for example, use AI to deepen your skills and abilities in that area rather than trying to create viral TikTok videos. If you are a video-centric broadcaster whose key differentiator is trusted voices delivering the news, focus on tools that will broaden those voices' reach and take care of rote tasks, giving them more time to do what they do best. "Our subscribers come to us and pay us because we are out there getting the information that they need, and it is very often information that was not known before," Alex Hardiman, chief product officer at the New York Times, said. "AI is not going to replace that."
- Can AI help local news?
Many of the media executives who participated in the virtual discussion sessions and the conference agreed that AI tools have the potential to help resource-strapped local newsrooms provide trusted information to their audiences, by, for example, annotating and summarizing board meetings, videos, and government documents, and translating stories into different languages for different platforms. Where can readers get a COVID test? What is their tax rate? What did the school board decide on a controversial issue? All these questions and more could become easier to answer using AI—subject to editorial oversight, of course.
The correct response to the increased sophistication of AI, our expert participants concluded, is neither the unquestioning adoption of an unproven and in many cases poorly understood technology, nor the knee-jerk dismissal or unthinking criticism of new tools that could help the industry achieve its goals of bringing news to audiences in as efficient and timely a manner as possible. Publishers and journalists need to use whatever tools they can to make that happen—regardless of how steep the learning curve might be.
Other notable stories:
- As noted above, The Atlantic and Vox Media yesterday became the latest outlets to announce (separate) licensing deals with OpenAI, allowing the company to train its AI tools on their content and to use their content (with attribution) in response to users’ queries; in exchange, the outlets get access to OpenAI’s technology for use in the newsroom, among other perks. As various observers noted, The Atlantic announced its deal five days after publishing an op-ed in which Jessica Lessin, of The Information, chided media companies for capitulating to AI firms that “are simultaneously undervaluing them and building products quite clearly intended to replace them.” Yesterday, Damon Beres, who oversees The Atlantic’s tech coverage, explored the feeling that outlets like his “are making a deal with—well, can I say it? The red guy with a pointy tail and two horns?”
- The Washington Post’s Drew Harwell explores why the Biden administration rejected an “extraordinary” regulatory compromise proposed by TikTok, and instead signed off on a law forcing the app’s Chinese owners to sell up or face a ban in the US. (TikTok is challenging the new law in court, including on First Amendment grounds.) In 2022, TikTok pledged to let US officials pick its board in the country, to pay a Pentagon contractor to review its source code, and even “to give federal officials a kill switch that would shut the app down in the United States if they felt it remained a threat,” Harwell reports—and yet the administration deemed such promises to be inadequate, without ever publicly saying why. (We’ve covered the TikTok ban extensively in this newsletter.)
- In Monday’s newsletter, we wrote about the New York Times’ coverage of a pair of flags with ties to 2020 election denialism (among other connotations) that have flown in recent years at properties owned by the conservative Supreme Court justice Samuel Alito. The coverage led to widespread calls, including among Democratic members of Congress, for Alito to recuse himself from cases involving the insurrection at the Capitol on January 6, but yesterday, Alito said that he would not do so, insisting in a letter that his wife was responsible for flying both flags. (He denied that the flying of one of the flags had any connection to the insurrection, but, as the Times notes, did not do so in the other case.)
- The Dart Center for Journalism and Trauma at Columbia Journalism School launched a new online tool kit—in partnership with the Canadian Broadcasting Corporation and the Canadian Journalism Forum on Violence and Trauma—aimed at preparing “newsrooms, journalists, and educators for coverage of violence, conflict, and tragedy.” Journalists “are constantly covering horrible events and tragedies. Many newsrooms and journalists have had zero training or preparation,” Dave Seglins, a CBC journalist and Dart Center fellow who led on the project, said. “We hope these tools can help to change that.”
- And Washingtonian’s Omega Ilijevich profiled Pablo Manríquez, a congressional reporter for Vanity Fair who has developed a sideline as an artist since a canvas and some paint were mistakenly delivered to his home three years ago. He’s since painted—and sold—more than a hundred likenesses of prominent lawmakers, as well as portraits of fellow congressional journalists that hang in a Senate press area and a painting of Alexandria Ocasio-Cortez’s dog that has found its way to her office.
ICYMI: Omar Ferwati on Forensic Architecture’s probes of the present and the past