The Media Today

Will a California bill cripple AI or make it better?

August 22, 2024
The California State Capitol in Sacramento. (AP Photo/Juliana Yamada)


At various moments last year, the question of how to regulate artificial intelligence took center stage. In March, more than a thousand technology leaders, researchers, and others—including Elon Musk, the billionaire who owns SpaceX and Tesla—signed an open letter calling for a six-month moratorium on the development of the most powerful AI systems because of the potential dangers they pose; the same month, dozens of scientists signed an agreement aimed at ensuring the technology can’t be used to create dangerous new bioweapons by recombining DNA. In July, seven of the leading AI companies—including Meta, Google, Microsoft, and OpenAI—met with President Biden and agreed to voluntary safeguards on the technology’s development. In October, the White House published an executive order to ensure the “safe, secure, and trustworthy development” of AI, and in November, the British government held a two-day summit on AI safety at Bletchley Park, the site where code-breakers deciphered German messages during World War II.

In spite of all this, the US still doesn’t have a federal law aimed at regulating either the development or use of artificial intelligence technology. (Surprise!) But a wide range of state laws that apply to AI have either been proposed or passed. According to the Cato Institute, as of this month, thirty-one states have passed some form of AI legislation: regulating the use of deepfake imagery for sexual harassment or political messaging, for example, or requiring corporations to disclose when they use AI in their products and services, or when they collect data to train AI models. More than a dozen states have passed laws that prevent law enforcement agencies from using facial-recognition technology or other AI-assisted algorithms in the course of their work. Colorado recently passed a law that restricts how companies can use the technology to decide who should receive a loan, insurance, or educational opportunities.

While these laws aim to regulate various uses of AI, only one piece of state-level legislation has so far grappled with the technology and its risks more broadly: California’s SB-1047. The bill was introduced in February by Scott Wiener, a Democratic state senator who represents a district in San Francisco; it passed the state Senate in May, and last week it was approved by the Appropriations Committee of the State Assembly. According to the New York Times, Wiener conceived of the bill after attending a series of “salons” in San Francisco last year, at which researchers, entrepreneurs, and philosophers discussed the future of AI; he says that he wrote the bill with input from the Center for AI Safety, a think tank that has ties to a movement known as “effective altruism,” which is concerned about AI threats. The bill—whose full name is the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act—has now gone to the full Assembly for a vote. If it passes there, and the Senate concurs with the amendments, it could be signed into law by Gavin Newsom, the governor of California, within a matter of weeks.

Wiener has said that the California bill is an attempt to codify the sort of protections and safeguards that industry leaders have themselves advocated in various open letters and hearings. As TechCrunch has described it, the proposed law aims to prevent large AI models from being used to cause critical harms to humanity, such as a bad actor using a model to create a bioweapon that causes mass casualties, or to orchestrate a cyberattack. Among other things, the bill requires that developers of AI models of a certain size and power implement a fail-safe mechanism, or “kill switch,” that can shut a model down in the event of danger. The bill would also allow the California attorney general to seek an injunction against companies that release models the attorney general considers unsafe (though it’s not clear how such a determination would be made).

California’s proposed law has a number of supporters, including among technologists. One of them is Geoffrey Hinton, a professor at the University of Toronto who is widely viewed as one of the forefathers of modern AI and who quit working at Google last year because he was concerned about the risks inherent in AI models. But the legislation also has a number of critics—including, unsurprisingly, among technologists. Large technology companies such as Meta are among them, as are various leaders in AI technology, including Yann LeCun—who is Meta’s chief AI scientist—and Fei-Fei Li, known as the “godmother of AI.” LeCun has said that the bill would put an end to AI innovation, and that it is based on a “completely hypothetical science fiction scenario.” (I’ve written about the ongoing debate over the potential harms of AI, in which LeCun is a leading voice.) The CEO of Hugging Face, a leader in promoting open-source AI technology, has likewise called the bill a “huge blow to both California and US innovation.”

Jeremy Howard, an entrepreneur who helped to create the technology underpinning most leading AI engines, said that the bill would consolidate power in the hands of a few large corporations since only they can afford to abide by its regulations—a situation, Howard said, that would be a “recipe for disaster.” (Howard has written a blog post in which he goes into more detail about his concerns.) Sebastian Thrun, an AI researcher who founded the self-driving car project at Google, told the Times that AI is “like a kitchen knife, which can be used for good things, like cutting an onion, and bad things, like stabbing a person.” Governments shouldn’t try to “put an off-switch on the kitchen knife,” he added. “We should try to prevent people from misusing it.” Anjney Midha, a general partner at Andreessen Horowitz, a leading Silicon Valley venture capital firm, said the idea that AI models are going to “autonomously go rogue to produce weapons of mass destruction or become Skynet from The Terminator is highly unlikely.”


Nor is the opposition to the bill coming only from big companies and technologists—some politicians oppose it, too, including in California. Last week, eight Democratic members of Congress from the state—led by Zoe Lofgren, the ranking member of the House Committee on Science, Space, and Technology—wrote a letter to Newsom urging him to veto the bill, arguing that while they support AI regulation, the California bill goes too far. “Not only is it unreasonable to expect developers to completely control what end users do with their products,” the letter said, but it is “difficult if not impossible to certify certain outcomes without undermining the rights of end users,” including privacy. Nancy Pelosi, the former Speaker of the House, wrote that the legislation is “well-intentioned but ill-informed,” adding that while lawmakers want to protect consumers and society from the dangers of AI, the California bill is “more harmful than helpful in that pursuit.”

One criticism of the bill is that it would make the companies that develop AI engines liable for any harms that result from the technology, even though the nature of those harms is still poorly understood. The bill prohibits a developer from making an AI model of a certain size and power commercially or publicly available if there is “an unreasonable risk” that the model can “cause or enable a critical harm.” Critical harm is defined here as including the creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties; cyberattacks that cause at least five hundred million dollars in damage; or any acts that result in “death, great bodily injury, property damage, or property loss, and would, if committed by a human, constitute a crime specified in the Penal Code that requires intent.” But what constitutes an “unreasonable risk” is not defined.

The bill has been amended since its introduction, in response to industry criticism. The attorney general would no longer be able to sue companies for negligent safety practices before a catastrophic event has occurred, and the agency that monitors compliance would now be a unit of California’s Government Operations Agency rather than a separate state entity. The law also no longer requires AI companies to certify their safety testing under penalty of perjury. Wiener told TechCrunch that he accepted “a number of very reasonable amendments” that were proposed by critics of the law, and that he believes the amended version has “addressed the core concerns.” But not everyone agrees: Martin Casado, a general partner at Andreessen Horowitz, said that the changes are “window dressing,” and that they ultimately don’t “address the real issues or criticisms.”

The central question raised by the bill, as Casey Newton put it in a recent edition of the Platformer newsletter, is this: “If an AI causes harm, should we blame the AI—or the person who used the AI?” Kelsey Piper, of Vox, wrote that car manufacturers are liable if their products are faulty and cause harm to the public, but that the makers of technologies such as search engines are not. Is an AI assistant more like a car or a search engine? Jeremy Nixon, the CEO of Omniscience, an AI startup, told TechCrunch that bad actors should be punished for causing critical harm, not the AI labs that openly develop and distribute the technology. “There is a deep confusion at the center of the bill, that LLMs can somehow differ in their levels of hazardous capability,” Nixon said. “It’s more than likely, in my mind, that all models have hazardous capabilities as defined by the bill.”

Most of the critics of the California legislation argue that it goes too far. But a few believe that it doesn’t go far enough. Gabriel Weil, a law professor at Touro University in New York, argues that the creators of AI engines should be liable for any harm that their technology creates, even if that harm wasn’t foreseeable when they developed the AI. And while California may be the first state to attempt to regulate AI holistically, the future of the technology is also at stake in other bills coming out of the state—including one that seems designed to accelerate AI development rather than rein it in.

As the AI bill has moved forward, a different piece of legislation—an attempt to get tech platforms to pay for journalism—has evolved to take on an AI flavor. Politico reported earlier this week that Buffy Wicks, a state legislator who has pushed that legislation (and who spoke with CJR about it earlier this year), was working on a deal that, in addition to funding news, would see a public-private partnership involving the state, Google, and news publishers funnel cash into an “AI Innovation Accelerator” program managed by a yet-to-be-created nonprofit entity. Yesterday, more details of the deal came to light. As with the wider world, the dangers that AI poses for journalism and the media industry, in particular, remain to be seen.





Mathew Ingram was CJR’s longtime chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.