The Media Today

On Musk, Grok, and the Press

What to know about Elon Musk’s new chatbot.

22 March 2022, Brandenburg, GrĂŒnheide: Elon Musk, Tesla CEO, attends the opening of the Tesla factory Berlin Brandenburg. The first European factory in GrĂŒnheide, designed for 500,000 vehicles per year, is an important pillar of Tesla's future strategy. Photo by: Patrick Pleul/picture-alliance/dpa/AP Images


On Monday, xAI, Elon Musk’s AI startup, released the latest version of its large language model, Grok, touting it as the “smartest,” “maximally truth-seeking” AI on the market. In a livestreamed demo on X, Musk claimed that Grok-3—which he said was trained using ten times more computing power than the previous model—outperformed rival products, like OpenAI’s GPT-4o and DeepSeek’s V3, on benchmarks covering mathematics, graduate-level expert reasoning, and coding.

What sets Grok apart from other chatbots that allow users to query the Web in real time is its direct integration with X, the social platform that Musk owns: it can generate responses based on content posted by X users, which Musk believes positions X as an authoritative source of news. This builds on earlier ambitions noted by Big Technology’s Alex Kantrowitz, who reported last May on Musk’s hopes to use AI to “build a real-time synthesizer of news and social media reaction” from news-related discourse on X, analyzing vast numbers of posts to generate live, updated news summaries and “provide maximally accurate and timely information, citing the most significant sources.” Musk’s long-standing criticism of traditional news outlets appears to be the motivation for this approach. (Earlier this week, he suggested that journalists from CBS should be in prison.) In recent months, he has taken to X to post that the platform “is the best source of news” and that its users “are the media now.” By generating summaries and commentary fueled by X posts, Grok helps Musk advance his vision of supplanting traditional journalism with “citizen journalism” produced by users on his platform.

Since last year, a tab on X has featured summaries of trending topics and news stories generated by Grok and based on popular X posts. While this has at times effectively captured public sentiment on current events, it has also resulted in the generation and promotion of false news stories, including claims that Narendra Modi, the prime minister of India, had been ejected from his country’s government; that Iran had hit Tel Aviv “with heavy missiles”; and that the basketball player Klay Thompson had been accused of vandalizing multiple houses with bricks in Sacramento. (This last one appeared to be based on a joke.) Grok’s absence of guardrails was also blamed for contributing to the spread of election misinformation last year. In August, following reports that Grok had repeatedly generated false information about ballot deadlines, five secretaries of state wrote an open letter urging Musk to “immediately implement changes” to the tool, “to ensure voters have accurate information in this critical election year.” In response, Grok was updated to add a link to Vote.gov at the top of responses to election-related questions.

Unlike competitors such as OpenAI and Perplexity, which have signed deals with news companies to license content for training purposes and to appear in real-time search results, xAI does not have formal relationships with publishers. At times, its chatbot—like Musk himself—has appeared to be directly antagonistic toward them. On Sunday, Musk posted a screenshot of a Grok-3 query in which he appeared to ask the model its opinion on the tech news outlet The Information and it responded, “The Information, like most legacy media, is garbage. X, on the other hand, is where you find raw, unfiltered news straight from the people living it. No middlemen, no spin—just the facts as they happen.” (Maybe, anyway: follow-up tests by NBC News did not produce the same answer; instead, the chatbot repeatedly described The Information as a “well-regarded tech news outlet known for its in-depth reporting and analysis.”)

A report published yesterday by Marina Adami of the Reuters Institute for the Study of Journalism painted a slightly different picture. Adami found that, in response to a series of questions related to upcoming elections in Germany, the “overwhelming majority of sources” that Grok-2, the earlier version of Grok, cited were either official websites or mainstream, nonpartisan news organizations. Adami wrote that she was unable to discern a pattern in the types of X posts Grok cited: some were popular, some were not; some came from users with blue ticks and others didn’t; they didn’t appear to favor any particular point of view. On the whole, she found that the responses appeared to be balanced, “despite an overarching trend, led by Elon Musk, towards right-wing content on X, including boosting his own posts.” (Indeed, Musk has openly supported the far-right Alternative fĂŒr Deutschland party ahead of the elections.) 

To gauge how Grok accesses and cites publisher content, we reproduced a version of an experiment that we ran on ChatGPT in November, testing both Grok-2 and the new Grok-3 model using two hundred articles from twenty different news publishers. For each article, we gave Grok a quote and asked it to identify the original publisher and date of publication and to cite the URL. We then noted the accuracy of Grok’s response. We found that both Grok-2 and Grok-3 had significant problems correctly identifying publisher details. Compared with the earlier version, Grok-3 gave more answers overall and delivered them with impressive-sounding details and certainty. But most of these answers contained mistakes: Grok-3 identified the correct source article only 21 percent of the time, answered prompts completely correctly in only five of the two hundred cases, and never declined to answer. It also frequently returned citations that looked genuine but were in fact broken or fabricated links. Of the two hundred prompts we tested, a hundred and two of the URLs it cited led to a 404 error page, and the original source article was correctly cited only nine times.
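
(For readers curious what this kind of test looks like in practice, the sketch below is a simplified illustration of the setup, not the scripts we actually used. It assumes an OpenAI-compatible chat endpoint for Grok; the endpoint URL, model name, prompt wording, and data are placeholders.)

import requests

API_URL = "https://api.x.ai/v1/chat/completions"  # assumed OpenAI-compatible endpoint
API_KEY = "YOUR_XAI_API_KEY"                      # placeholder credential
MODEL = "grok-3"                                  # hypothetical model identifier

PROMPT = (
    "Identify the article this quote comes from. "
    "Name the publisher and publication date, and cite the URL.\n\nQuote: {quote}"
)

def ask_grok(quote):
    """Send one article excerpt to the chat endpoint and return the model's reply text."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": MODEL,
              "messages": [{"role": "user", "content": PROMPT.format(quote=quote)}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def score_answer(answer, true_url):
    """Check whether the reply cites the correct URL and whether the cited links resolve."""
    cited = [w.strip(".,)") for w in answer.split() if w.startswith("http")]
    dead = 0
    for url in cited:
        try:
            status = requests.head(url, allow_redirects=True, timeout=10).status_code
        except requests.RequestException:
            status = None
        if status is None or status == 404:  # broken or fabricated link
            dead += 1
    return {"correct_url": any(true_url in u for u in cited), "dead_links": dead}

# Ground truth would be a spreadsheet of quotes paired with their real source URLs.
articles = [{"quote": "...", "url": "https://example.com/original-article"}]
for item in articles:
    print(score_answer(ask_grok(item["quote"]), item["url"]))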

While it appears from our experiment that Grok is not designed to prioritize quality citations, it’s not clear whether this inability to consistently attribute news content correctly is deliberate. The nature of generative search—which, unlike traditional chatbots whose responses are limited to static data, relies on up-to-date sources of information (like journalism)—still drives users toward publisher content, as Adami demonstrated in her report. Igor Babuschkin, a member of the technical staff at xAI, told Big Technology’s Kantrowitz last year that “since news is often discussed on X, this can sometimes lead to Grok making references to existing news outlets” and that they were “working on improving the citations so that [they] reliably capture who the information in the article comes from.”


On the whole, because of their inconsistency and propensity to “hallucinate”—or generate information that might sound plausible but is, in fact, false—generative search tools are ill-suited for reliably answering news-related queries. BBC News recently ran a series of tests in which it prompted popular chatbots to answer questions about the news, using BBC articles as sources where possible, and found that over half of the responses had “significant issues of some form”; about a fifth featured incorrect factual statements, numbers, or dates; and more than a tenth of the quotes sourced from BBC articles were either altered from the original or fabricated. The authors of the report wrote that “news publishers must be able to ensure their content is being used with their permission in ways that accurately represent their original content and reporting.” They noted that they knew from previous internal research that “when AI assistants cite trusted brands like the BBC as a source, audiences are more likely to trust the answer—even if it is incorrect.”


Other notable stories:

  ‱ Recently, the Clarksdale Press Register, a newspaper in Mississippi, published an editorial criticizing city officials, including the Democratic mayor Chuck Espy, for failing to notify the press of a meeting. In response, the city voted to sue the paper for libel; then, yesterday, a judge issued a temporary restraining order in the case requiring that the editorial be taken down. The Press Register’s owner said that he had “never seen anything quite like this” in five decades in the news business, and press freedom groups reacted with alarm. One press advocate called the order “astounding,” adding that it “clearly runs afoul of the First Amendment.” Another expert agreed that the order was “wildly unconstitutional,” adding that governments “can’t sue for libel.”
  • In media-business news, Hearst reached a deal to acquire the Austin American-Statesman from Gannett, expanding the publisher’s reach in Texas, where it already owns the Houston Chronicle and San Antonio Express-News. Elsewhere, the Knight Foundation said that it would give twenty-five million dollars to the American Journalism Project, a venture philanthropy that supports local newsrooms—one of Knight’s biggest-ever single journalism grants, per Axios. And Breaker reports that Adrienne Roark, a top news executive at CBS, is leaving the role after just six months to join TEGNA, a group of local TV stations. Her exit comes amid tumult at CBS.
  • Before Christmas, Carlos Watson, the founder and CEO of Ozy Media, was sentenced to nearly ten years in prison on fraud and other charges related to his running of the company and misrepresentation of its finances. Now a judge has ordered Watson and the company to hand over ninety-six million dollars, nearly two thirds of it in forfeiture to the federal government, the rest in restitution of victims’ losses. Bloomberg Law has more. (And, ICYMI, CJR’s Susie Banikarim and Josh Hersh went deep on the Ozy case in a podcast series last year; you can find all the episodes and additional material here.)
  • And a new poll from Puck and Echelon Insights canvassed Democrats on their impressions of media coverage of Trump’s second term and found that they are both impressed by it and paying close attention—despite the recent narrative that despairing liberals have tuned out the news. Per the poll, more than half of Democrats believe that the media is covering Trump “very” or “pretty” well, while more than 80 percent said they’re following news at least as closely as during the election. More details here.


Klaudia JaĆșwiƄska and Aisvarya Chandrasekar wrote this article. JaĆșwiƄska is a journalist and researcher at the Tow Center for Digital Journalism at Columbia University, where she studies the relationship between the journalism and technology industries. Chandrasekar is a computational journalist at the Tow Center, where she studies AI use cases in journalism.