It’s been a big week for AI regulation—or at least, the idea of it. On Monday, the Biden administration published an executive order on “the safe, secure, and trustworthy development and use of artificial intelligence”; while AI has the potential to help solve a number of urgent challenges, the EO said, the irresponsible use of the same technology could “exacerbate societal harms such as fraud, discrimination, bias, and disinformation” and create risks to national security. Then, yesterday, the British government opened a two-day summit on AI safety at Bletchley Park, the site where code-breakers famously deciphered German messages during World War II. Rishi Sunak, the prime minister, said that AI will bring changes “as far-reaching as the Industrial Revolution, the coming of electricity, or the birth of the internet,” but that there is also a risk that humanity could “lose control” of the technology. And the European Union has been trying to push forward AI legislation that it has been working on for more than two years.
AI and its potential risks and benefits are at the top of many agendas at the same time. Yesterday, Vice President Kamala Harris, who is attending the Bletchley Park summit, gave a speech in which she rejected “the false choice that suggests we can either protect the public or advance innovation,” adding that “we can—and we must—do both.” Ahead of time, a British official told Politico that the speech would show that the summit was a “real focal point” for global AI regulation (even if, as Politico noted, it “may overshadow Bletchley a bit”). When it comes down to it, though, the US, the UK, and the EU are taking different approaches to the problem—differences that are, in many cases, the result of political factors specific to each jurisdiction.
In the US, the Biden administration’s order aims to put some bite behind voluntary AI rules that it released earlier this year, but it doesn’t go as far as an actual law because there’s no chance one would pass: Congress, as Anu Bradford, a law professor at Columbia University, told the MIT Technology Review, is “deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future.” Partly as a result, some observers have accused the White House of resorting to “hand waving” about the problem. The full executive order is over a hundred pages long; some of those pages are filled with definitions of terms that not every reader will be familiar with (“floating point operation”; “dual-use foundation model”), but there is also some rambling about the potential of AI, both positive and negative.
To add the aforementioned bite, the Biden administration took the unusual step of invoking the Defense Production Act of 1950, a law that is typically used during times of national emergency but has been definitionally stretched in the past. Biden’s order relies on the law to compel AI companies to test their services and algorithms for safety, including through what is known as “red teaming,” whereby employees try to use a system for nefarious purposes as a way of revealing its vulnerabilities. Companies involved in AI research will have to share the results of such testing with the government before they release an AI model publicly, though that requirement will only apply to models trained using more than a certain threshold of computing power: a hundred septillion floating-point operations (them again), according to the New York Times. Existing AI engines that were built by OpenAI, Google, and Microsoft all meet that threshold. But a White House spokesperson said that the rules will likely only apply to new models.
The order also outlines rules aimed at protecting against the potential negative social impacts of AI: for example, it directs federal agencies to take steps to prevent algorithms from exacerbating discrimination in housing, benefits programs, and the criminal justice system (though how exactly they should do so is unclear). And it directs the Commerce Department to come up with “guidance” on how watermarks might be added to AI-generated content, as a way of curbing the spread of AI-generated disinformation. Critics, however, argue that asking for “guidance” could amount to very little: as the MIT Technology Review noted, there is currently no reliable way to determine whether a piece of content was generated by AI in the first place.
Kevin Roose, of the Times, has argued that the order looks like an attempt to bridge two opposing factions on AI: some experts want the AI industry to slow down, while others are pushing for “its full-throttle acceleration.” Those who fear the development of superhuman artificial intelligence—including the scientists and others who signed an open letter in March, urging a halt to AI research in apocalyptic (if brief) language—may cheer the introduction of new controls. Supporters of the technology, meanwhile, may just be happy that the order won’t require them to apply for a federal license to conduct AI research, and won’t force AI companies to disclose secrets such as how they train their models.
But as The Atlantic noted—and as with all approaches that aim to please competing constituencies—parts of the order “are at times in tension, revealing a broader confusion over what, exactly, America’s primary attitude toward AI should be.” And—as is also the case with such approaches—not every constituency was pleased. James Broughel, an economist at the Competitive Enterprise Institute, described the order as “regulation run amok,” arguing that it suffers from a “classic Ready! Fire! Aim! mentality” whereby it introduces invasive regulations without first grasping the nature of the problem it is trying to solve. Some of the requirements that sound positive, such as the need for transparency around safety testing, could end up being the opposite, Broughel argues, if they discourage AI companies from doing that kind of testing at all. The order is “not a document about innovation,” Steve Sinofsky, a former Microsoft executive, wrote. “It is about stifling innovation.”
Whether the order achieves anything tangible remains to be seen, but it is at least a timely topic of conversation for the UK’s AI Safety Summit. Yesterday, Michelle Donelan, Britain’s current technology minister (and past marketer for WWE wrestling), released a policy paper called “The Bletchley Declaration” and pledged that the summit would become a regular global event, with future editions already slated to be held in South Korea in six months and then in France. The declaration states that “for the good of all, AI should be designed, developed, deployed, and used in a manner that is safe [and] human-centric, trustworthy and responsible.” It’s hard to disagree, but some observers saw the event in less grand terms: as an attempt by Sunak to boost his flagging popularity ratings at home. Writing for The Guardian, Chris Stokel-Walker described the summit as the passion project of a prime minister desperate for a good-news boost as “his government looks down the barrel of a crushing election defeat.”
Attendees at the summit include executives from Tencent and Alibaba, two Chinese tech giants, whose invitations were contentious given suspicions about China’s motives in the realm of AI. At the summit, Chinese scientists signed a statement referring to AI technology as an “existential risk to humanity.” As such portents of AI doom multiply, some experts believe that they could accelerate over-regulation, which in turn could benefit large incumbents in the AI space rather than innovators. In a post on X, Yann LeCun, a noted AI expert who now works for Meta, argued that such statements give ammunition to voices lobbying for a total ban on AI research, which, he argues, would result in “regulatory capture,” with a small number of companies from the US West Coast and China controlling the industry.
While all this has been going on, the EU has been working to finalize its AI Act—one of the world’s first pieces of legislation targeted specifically at AI—which proposes rules around everything from the use of the technology to design chemical weapons to the use of copyrighted content to train AI engines, something that authors and other groups are currently suing over (as my colleague Yona TR Golding and I have written recently). The law as drafted also requires companies with AI engines to report their electricity use, among other measures. And it separates AI companies and services into different categories based on the risk they pose. Some European lawmakers said that they hoped the bill would be finalized by the end of this year and adopted in early 2024, before the next European Parliament elections in June. But, as The Verge notes, some EU countries are still not in agreement on parts of the law, and such expeditious passage thus looks unlikely.
In their haste to detail the long-term risks of AI, both the Biden order and the EU’s proposed AI Act have also been accused of overlooking important points about current dangers. The Atlantic notes, for example, that the Biden order mentions how AI technology could help mitigate climate change—but not that large AI engines consume immense quantities of water. Another risk that doesn’t appear anywhere in the US order is the potential for AI deepfakes that could manipulate elections. Stefan van Grieken, the CEO of the AI firm Cradle, told CNBC that this is akin to a conference of firefighters that talks only about dealing with “a meteor strike that obliterates the country.” Representatives from dozens of civil society groups, meanwhile, wrote an open letter arguing that the UK summit has excluded the workers and communities that will be most affected by AI.
Others likely see the US and EU efforts as far too cautious. Last month, Marc Andreessen, a prominent Silicon Valley venture capitalist who has invested in OpenAI, wrote an essay in which he argued that, because AI has the power to save lives, any deceleration of research will end up costing lives. These preventable deaths, Andreessen argued, are “a form of murder.” It may be hard to determine exactly where to situate US, UK, and EU views about AI on the spectrum from existential disaster to unparalleled opportunity. But thanks to Andreessen, we now know where the outer bounds lie.
Other notable stories:
- Reporters Without Borders filed a complaint with the International Criminal Court alleging that journalists have been the victims of war crimes during the conflict between Israel and Hamas; the complaint focuses on the killings of eight Palestinian journalists in Israeli air strikes on Gaza and of an Israeli reporter who was covering the Hamas attack on a kibbutz on October 7, and also cites “the deliberate, total or partial, destruction of the premises of more than 50 media outlets in Gaza.” Meanwhile, US officials warned Israel to keep the internet online in Gaza after it went down for several hours yesterday morning. (Israel previously took down communications in Gaza for much of the weekend.) And the BBC World Service set up an emergency radio service in Gaza.
- This week, the Supreme Court heard oral arguments in a case revolving around whether public officials have the right to block commenters on their social media accounts. In other political-media news, Jeff South, a journalism professor in Virginia, weighed in, for The Conversation, on how the media has been covering elections that will take place in the state next week—and lessons for national journalists ahead of 2024. And Ken Buck, a right-wing Republican congressman who has rejected Trump’s election denialism, said that he won’t run for reelection, and warned that “too many Republican leaders are lying to America.” Reports have suggested that Buck could join CNN as a commentator.
- In media-business news, Condé Nast said that it will lay off roughly 5 percent of its workforce, a figure equivalent to nearly three hundred employees, citing various revenue and audience challenges. Elsewhere, Thomson Reuters, which owns its namesake news agency in addition to other professional services, reported higher profits than expected in the third quarter. And the Voice of San Diego’s Andrew Donohue wrote about “the beginning of the end” for the local Union-Tribune newspaper, which has already been hit by cuts since Alden Global Capital acquired it earlier this year.
- In international press-freedom news, PEN America is out with a new report chronicling a worrying regression of free expression, including press freedom, in the country of Georgia. Elsewhere, the Committee to Protect Journalists reported that at least twenty-seven journalists have been assaulted by protesters and police while covering political turmoil in Bangladesh. And Apple warned journalists and opposition politicians in India that their phones have been targeted by state-sponsored hackers.
- And New York’s Kevin T. Dugan described the experience of covering the fraud trial of the fallen crypto mogul Sam Bankman-Fried from overflow rooms in the courthouse, which fill up when the main courtroom is full. “Honestly, it is great,” Dugan writes. “These rooms—filled with crypto die-hards, lawyers, interested normies, and disinterested weirdos—are closer to a real-time studio audience, and they’re where the real action is.”
ICYMI: David Marchese on the art of the interview