An AI Chatbot Joins Time 

The magazine calls it ‘a pivotal step toward charting the future of journalism.’

December 19, 2024

Last week, Time continued its nearly century-old annual tradition of choosing a Person of the Year: the individual who, “for better or for worse,” has had the biggest impact on the world over the previous twelve months. Donald Trump, who the magazine said has perhaps played the largest role of any individual in changing the course of politics and history, was Time’s pick for 2024. Alongside its big reveal, the magazine launched TIME AI, a platform of generative AI tools meant to assist people as they read the story. It’s a move that Time optimistically describes as setting a new standard for immersive storytelling. “It’s more than an experiment,” a Time announcement proclaimed. “It’s a pivotal step toward charting the future of journalism.”

Time’s new AI platform can answer readers’ questions about the current and previous three Person of the Year stories, summarize them at different lengths, and translate them into several languages. According to Time, the tool was developed through partnerships with AI companies, including Scale AI, and trained on a curated body of the magazine’s articles, other “trusted” sources, and the bot’s built-in general knowledge. The chatbot is instructed not to answer questions outside the realm of the magazine’s reporting; indeed, the scope of what users can ask seems particularly narrow (when I asked for the capital of Egypt, the bot refused to answer). In other words, Time seems to be treading carefully so as not to let its bot stray into misinformation territory, which would be particularly damaging for an industry that sells itself on accuracy.
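Time has described the design only in general terms, but the behavior above (answers grounded in a curated archive, refusals for anything out of scope) is characteristic of a retrieval-augmented chatbot with a scope gate. Below is a minimal sketch of that generic pattern; the toy corpus, the keyword-overlap retrieval, and the model name are placeholder assumptions, not Time’s actual stack:

```python
# Minimal sketch of a scoped news chatbot: ground answers in a curated set of
# article passages and refuse anything outside that scope. The corpus, the
# retrieval method, and the model here are placeholders, not Time's real stack.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in for a curated archive of trusted articles (hypothetical).
CORPUS = [
    "Time named Donald Trump its 2024 Person of the Year.",
    "The Person of the Year tradition began in 1927 as Man of the Year.",
]

STOPWORDS = {"the", "a", "an", "of", "is", "was", "in", "its", "who", "what"}

def tokens(text: str) -> set[str]:
    """Lowercase, strip surrounding punctuation, and drop stopwords."""
    return {w.strip("?.,'\"").lower() for w in text.split()} - STOPWORDS

def retrieve(question: str) -> list[str]:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    q = tokens(question)
    return [p for p in CORPUS if q & tokens(p)]

def answer(question: str) -> str:
    passages = retrieve(question)
    if not passages:  # the scope gate: no relevant source, no answer
        return "Sorry, I can only answer questions about our reporting."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": (
                "Answer ONLY from the provided passages. If the answer is "
                "not in them, say you cannot answer."
            )},
            {"role": "user", "content": "Passages:\n" + "\n".join(passages)
                + f"\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("What is the capital of Egypt?"))   # declines: out of scope
print(answer("Who was the 2024 Person of the Year?"))
```

In a setup like this, the refusal happens before the model is ever called, which is one plausible way to produce the consistent “I can’t answer that” behavior described above.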

I reached out to Meg Heckman, an associate professor of journalism and media innovation at Northeastern University, who reported for CJR when some local newsrooms launched chatbots in 2018. “Injecting humility into a bot is not a bad thing,” she said. “Especially for a news organization that is attempting to rebuild very fragile audience trust.” Indeed, as Heckman previously noted in CJR, the stakes are particularly high when bots speak for organizations that bill themselves as trustworthy sources of information, as opposed to, say, a shopping site. That’s why newsrooms that decide to test the waters with AI tools need to be especially transparent about where the data is coming from and remind readers that generative AI is not omniscient, according to Heckman. “There’s a role for all newsrooms to engage in some AI literacy for their readers,” she said. “If you’re going to be putting this new AI voice on your website, have a couple of paragraphs somewhere explaining what it is.” To its credit, Time did publish a somewhat detailed article about how the AI chatbot works.

Of course, Time is far from the only major newsroom to experiment with AI chatbots. Last month, the Washington Post launched “Ask the Post AI,” which produces answers based on the newspaper’s coverage. Like TIME AI, the Post’s chatbot has guardrails to prevent its answers from straying too far from its source material. When I asked “What was Trump’s first public appearance after the 2024 election?” it declined to answer. “This product is still in an experimental phase,” the bot explained. But when I asked the same question later in our conversation, the bot seemingly changed course: “Trump’s first public appearance after the 2024 election was at the New York Stock Exchange,” it said—confidently, but incorrectly. The Post has been experimenting with a number of AI tools over the past few years, including a climate bot called “Climate Answers,” launched in July. Vineet Khosla, the Post’s chief technology officer, told Axios that the efforts are part of a strategy to engage young readers who, according to research conducted by the paper, often rely on story summaries rather than headlines to decide whether to read further.

With AI companies playing an increasingly large role as information providers—partly by remixing news articles into text that skirts the edges of copyright law—newsrooms have been under pressure to shift their business models. Some have leaned into the technology, for instance by launching their own chatbots or joining licensing deals that let AI companies train their models on a publication’s articles. (Of course, some publications—among them, notably, the New York Times—have taken a different route, choosing to sue these companies over copyright infringement.) The Tow Center’s research director, Pete Brown, who has been tracking these licensing and partnership deals, found that OpenAI has reportedly spent three hundred million dollars on them. (A five-year deal between OpenAI and News Corp, which owns publications like the New York Post, is valued at two hundred and fifty million dollars.) By writing such checks, companies like OpenAI have been accused of picking “winners” in the news industry. Often, those winners are large, mainstream news companies like Time, which currently has partnerships with several AI companies, including OpenAI and Perplexity.

And yet, even when they participate in licensing and partnership deals, newsrooms have no guarantee that their content will be presented or cited accurately in AI-generated answers, according to new research from the Tow Center. In October, OpenAI launched an initiative called ChatGPT Search: a search engine, similar to Microsoft’s Bing, that provides answers and links to relevant Web sources based on the user’s question. A press statement by OpenAI said the tool was developed in collaboration with news providers and quoted outlets like Le Monde praising it as a way for publications to innovate. However, research by my colleagues Klaudia Jaźwińska and Aisvarya Chandrasekar found that publishers risk having their content misattributed or misrepresented regardless of whether they allow OpenAI to include it in search results. “Our initial experiments with the tool have revealed numerous instances where content from publishers has been cited inaccurately, raising concerns about the reliability of the tool’s source attribution features,” the analysis found.

This points to a flawed relationship between tech companies and publishers—one that often benefits the former and ties the hands of the latter. When newsrooms build their own chatbots, they at least regain some control over how their content is attributed and cited. Unlike ChatGPT Search, Time’s and the Post’s chatbots seem more upfront about what they don’t know and can’t address. At the same time, it seems counterintuitive for newsrooms to offer summaries of their own articles, inviting readers to skip the details and original reporting that give a story its context. For now, according to Heckman, it remains unclear whether TIME AI will indeed be a “pivotal step toward charting the future of journalism”—or just another product that fades away once newsrooms realize that readers rarely use such tools. “We don’t know yet how appealing it’s going to be,” she said. “I mean, this could be a flash in the pan.”

Sarah Grevy Gotfredsen is a computational investigative fellow at the Tow Center for Digital Journalism at Columbia University. She works on a range of computational projects focused on the digital media landscape, including influence operations conducted through news media and the information ecosystem. She graduated from Columbia University in 2022 with an MS degree in data journalism.