
AI companies have a news problem. Journalists have the skills they need to fix it.

September 10, 2024

Throughout the recent chaos in American politics, AI chatbots have struggled to stay abreast of the latest news. Reporting showed that, hours after the assassination attempt on former President Donald Trump, ChatGPT dismissed the shooting as rumor and misinformation. The same reporting found that popular chatbots also struggled to provide accurate answers after J.D. Vance’s vice presidential nomination and President Joe Biden’s COVID-19 diagnosis. After Biden’s withdrawal from the race, his endorsement of Vice President Kamala Harris, and the start of the Democratic National Convention, ChatGPT told me that while Harris was a “prominent figure” in the party, Biden remained the Democratic nominee seeking reelection.

The problem occurs because most chatbots rely on months-old training data and lack instant access to breaking news. Without that access in critical moments, people turn away from chatbots and toward other technologies like Google search (which is experimenting with its own AI features). In response, OpenAI announced a “Google killer” prototype called SearchGPT, which melds its chatbot with live internet search results. The move makes clear that, in the race to make generative AI the dominant technology on the planet, companies have turned access to real-time news into a battleground.

But how does a chatbot determine what new information on the internet is reliable enough to relay? It doesn’t. In reality, AI follows instructions based on policies created by people inside companies who are tasked with answering that question. Currently, AI companies are taking a page out of the social media chapter of the history book on emerging technology and journalism. As a trained journalist and former senior policy official at Twitter and Twitch, I believe that as we enter the age of AI, there is a pivotal window of opportunity for companies to avoid repeating the mistakes of the past. In my view, that should start with AI companies not only hiring more people with journalism training but respecting and empowering them to meaningfully shape these nascent policies. And it continues with companies valuing the skill set of trained journalists and implementing their recommendations as they develop new guardrails for the future.

It is no secret that AI companies have been making deals with news outlets to gain access to their reporting. Last month, OpenAI announced a new deal with Condé Nast, and the company has inked contracts with institutions like the Financial Times, the Associated Press, and The Atlantic (while also being sued by multiple newspapers, including the New York Times, which allege that it illegally used previously published articles in the training data that powers ChatGPT). After facing allegations of plagiarism, Perplexity AI also recently announced revenue-sharing deals with publishers like Fortune, Time, and the Texas Tribune for the use of their content.

While these deals may exist partly to gain legal access to the fresh data AI companies desperately need to keep training their powerful language models, another, less discussed goal is driving the agreements. As an executive at Perplexity reasoned, content from news outlets is rich in both “facts” and “verified knowledge.” These arrangements are thus forming the basis of a new technological era of the internet, one in which legacy media organizations function as wire services for AI chatbots, providing trusted and timely information that can easily be algorithmically surfaced.

It’s a model that mirrors the early days of verification on social media. Check marks were birthed on Twitter in 2009 as a way to signify the authentic identity of celebrities and notable figures after fake Shaquille O’Neal accounts prompted outcry and St. Louis Cardinals manager Tony La Russa sued the platform. Impersonation was such a problem that it even became a part of username nomenclature, like the notorious handle @realdonaldtrump.

Soon, the blue tick was being bestowed upon legacy media organizations and the journalists they employed. The algorithms that powered Twitter’s timeline also began to prioritize and amplify the content posted from verified accounts. So in a sea of endless 140-character groupings, verification was a badge that signaled trustworthiness and a buoy that increased the chance that information from established news outlets would float to the surface. 

It seemed like an easy fix. But the media landscape changed, and my colleagues and I were still grappling with the details a decade later. Sure, the Washington Post counted as a trusted media outlet. But what about Breitbart? Or Newsmax? Or the Epoch Times? What exactly makes a website a trusted news publication? These same questions went beyond just one company; they also plagued the launch of Facebook News.

As misinformation soared on social media, the rubric by which people inside these companies answered these vital questions was constantly changing, applied unevenly, and mostly unknown to the public. The qualifications of the people making these decisions were likewise unknown. And while my classical training as a newspaper journalist gave me the skill set to consider multiple viewpoints, interrogate information, and objectively distill facts, my background was unusual in the tech industry. This was not a recipe to earn public trust or foster safety and transparency in the midst of a fragile information ecosystem.

Today’s AI arms race for news is occurring in an even more fractured information environment. And yet the same questions lie ahead for the people working inside AI companies. It is imperative that companies learn from history and don’t simply follow the same playbook with the same people while expecting different results. As generative AI barrels forward, I believe tech companies should hire, and recognize the skill set of, those best positioned to tackle these questions and write the policies that guide decisions about news: trained journalists. But hiring trained journalists can’t be just a token publicity effort. Company executives must value their expertise, listen to their concerns, and empower them to make crucial decisions. Once inside companies, those same journalists should advocate for leadership to share with the public how decisions are made about news that affects lives.

Generative AI’s break into the news business doesn’t have to be an existential threat. Rather, it can open a new pathway for trained journalists and journalism schools to use their unique skill set to help shape how information flows within the next generation of technology. That work will in turn construct the future we all inhabit.

 

Anika Collier Navaroli is a senior fellow at the Tow Center for Digital Journalism at Columbia University and the Consortium on Trust in Media and Technology at the University of Florida. She writes, researches, and lectures about the role of journalism and law within social media, artificial intelligence, trust and safety, and technology policy. For the past decade her work has spanned from law firms and think tanks to advocacy organizations and senior policy official positions at Twitter and Twitch.

About the Tow Center

The Tow Center for Digital Journalism at Columbia's Graduate School of Journalism, a partner of CJR, is a research center exploring the ways in which technology is changing journalism, its practice and its consumption — as we seek new ways to judge the reliability, standards, and credibility of information online.
