Tow Center

From ‘Mitigating Risks’ to Ensuring ‘Industry Leadership’

How Trump’s second term is expected to affect AI regulation.

November 27, 2024
Image: Adobe Stock, Wooden Gavel on Reflective Surface with Glowing Code Background Representing Law and Technology


The 2024 US election arrived at a pivotal moment for the artificial intelligence industry, as debates over copyright infringement, user privacy, job displacement, and escalating geopolitical tensions with China continue to shape the technology’s role in society. And while the federal regulation of AI companies may not have been front of mind for many voters on November 5, the incoming Republican-controlled government will have the power to massively affect the future of a rapidly growing and increasingly disruptive industry.

In particular, Donald Trump’s return to the White House is expected to usher in a distinctly pro-business era in AI regulation, one that replaces the Biden administration’s focus on corporate accountability and digital safety with a commitment to industry self-regulation and competition with China.

In his 2023 “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” President Joe Biden established a variety of guidelines and reporting requirements intended to promote responsible AI development and standardize safety testing within the largest AI companies.

“Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure,” the Biden White House said in the introductory section of the executive order. “At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks.”

Among other measures, the executive order requires that AI developers notify the federal government when training models that might pose “a serious risk to national security, national economic security, or national public health and safety” and that they share the results of safety tests with regulators. And while the wide-ranging order highlights the importance of protecting Americans’ civil liberties and First Amendment rights in the age of AI, it has been maligned by Trump and other Republicans as a partisan effort to stifle technological innovation and hinder political speech online.

“When I’m reelected, I will cancel Biden’s artificial intelligence executive order and ban the use of AI to censor the speech of American citizens on day one,” Trump told a crowd of supporters in Cedar Rapids, Iowa, on December 2, 2023. 

It is unclear what aspect of Biden’s order Trump sees as “censoring” expression, but the sentiment was echoed in the Republican Party’s official policy platform, which was published in July of this year. The platform calls Biden’s order “dangerous” and claims that it “imposes Radical Leftwing ideas on the development of this technology.” In its place, the platform promises to “support AI Development rooted in Free Speech and Human Flourishing.”

Assuming the executive order is reversed, it’s unclear how—and how quickly—that decision will affect the many downstream AI-safety-oriented initiatives currently in the works across the federal government. Just last week, the US AI Safety Institute (another effort established as a result of Biden’s order) hosted the first meeting of an international network of research and regulatory bodies representing countries all over the world. 

After the event, Bloomberg reported that “supporters of the US institute are holding out hope that Trump won’t gut the organization,” despite an uncertain future. “I don’t care what political party you’re in. This is not in Republicans’ interest or Democrats’ interest,” Gina Raimondo, US commerce secretary, was quoted as saying. “It’s frankly in no one’s interest anywhere in the world, with any political party, for AI to be dangerous or for AI to get in the hands of malicious nonstate actors that want to cause destruction and sew [sic] chaos.”

AI safety advocates may be heartened by the fact that there are several bills currently awaiting congressional approval that would help codify many aspects of Biden’s executive order into law. For example, the AI Advancement and Reliability Act (H.R. 9497) and the Future of Artificial Intelligence Innovation Act (S. 4178) aim to reinforce the AI Safety Institute’s role, and would likely limit Trump’s ability to weaken or disband it. However, both bills remain in their early stages, and the impending shift to a Republican-led Congress may impede their chances of advancing.

The Cabinet’s Take

Trump’s priorities on AI seem to be shared by many of his recently announced cabinet picks, despite their seemingly varied records on tech regulation and complex relationships with Silicon Valley.

Elon Musk, the OpenAI cofounder who is expected to cochair the so-called Department of Government Efficiency, has at times publicly supported safety testing requirements for AI companies, but has more recently criticized developers (including those at OpenAI) for employing technological guardrails designed to make their products safer and less likely to generate offensive content. Musk has characterized these safety measures as the “woke” and “politically correct” efforts of what he sees as an inherently biased, “San Francisco Bay Area” philosophy inside of AI companies.

A similar sentiment has been expressed by Trump’s pick for FCC chairman, Brendan Carr, who contributed a chapter on the agency to the “Project 2025” transition agenda. Like Musk, Carr has long railed against digital content moderation and fact-checking initiatives put forth by both media outlets and tech companies, which he claims share a bias against conservative viewpoints. On November 15, Carr posted a screenshot of a letter addressed to the CEOs of Facebook, Google, Apple, and Microsoft accusing their companies of contributing to a “censorship cartel” that has “silenced Americans for doing nothing more than exercising their First Amendment rights.”

It’s also worth noting that the second Trump administration is expected to fire Biden-appointed FTC chairwoman Lina Khan, who was tapped for the role based on her expertise in antitrust law. According to an analysis published by the Brookings Institution, Khan’s likely departure “means that acquisitions by and of AI companies that might have been blocked on antitrust grounds under a Democratic president will be more likely to proceed unimpeded.” 

Trump’s history, AI’s future

Further hints as to how Trump is likely to approach AI regulation during his second term can be found in his own executive order on the subject, issued in 2019. In contrast to Biden’s, Trump’s order focused on fostering “American leadership” in the industry by reducing legal or regulatory “barriers” to the development and deployment of new AI tech. Presumably, such “barriers” would include measures intended to mandate rigorous safety testing.

Trump’s order reads, “The United States must promote an international environment that supports American AI research and innovation and opens markets for American AI industries, while protecting our technological advantage in AI and protecting our critical AI technologies from acquisition by strategic competitors and adversarial nations.”

In an interview with NextGov/FCW, an unnamed former Trump White House official said he expects Trump to double down on this effort during his second term, given the president-elect’s stated interest in asserting dominance over China in the AI industry and tech space broadly.

In a 2023 piece for the Harvard Business Review, venture capital investor Hemant Taneja and political commentator Fareed Zakaria described the US and China’s AI rivalry as one aspect of a looming “Digital Cold War” in which “other states will need to decide which sphere they want to be part of.”

“We are at a fork in the road when it comes to AI,” the authors write. “We can go down the path that leads to automation and destruction, replacing human work and meaning, or we can go down the path that leads to copiloting and enablement, making us more productive, helping us live more balanced lives, and becoming greater masters of our craft.”

As the new administration navigates this high-stakes rivalry, its choices will shape not only America’s technological future but also the balance of power in an increasingly volatile geopolitical landscape. The next four years will likely see the US deepen its role in this Digital Cold War, with Trump favoring global dominance over technological caution and corporate accountability.


About the Tow Center

The Tow Center for Digital Journalism at Columbia's Graduate School of Journalism, a partner of CJR, is a research center exploring the ways in which technology is changing journalism, its practice and its consumption — as we seek new ways to judge the reliability, standards, and credibility of information online.
