On Sunday (September 29), California Governor Gavin Newsom vetoed a controversial artificial intelligence (AI) regulation bill that the state legislature had approved in August. Most tech companies had opposed the bill, concerned that it could drive AI companies out of California and stifle innovation and research.
The bill, named the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (SB 1047), aimed to regulate powerful AI models. It would have required developers of large “frontier” AI models to take preventive measures, such as pre-deployment testing, to ensure their models could not cause mass casualties, damage public infrastructure, or be used in cyberattacks.
Furthermore, the legislation would have established whistleblower protections for employees of AI companies who wished to report safety concerns, as well as a public cloud service for AI development in the public interest. The bill also called for the creation of a Board of Frontier Models in California to oversee the development of AI models.
Explaining the veto, Newsom said his review of the bill had raised concerns that the legislation might “restrict innovation, which is the driving force behind public welfare advancement.”
He argued that the bill regulated AI in a one-size-fits-all manner and was not grounded in an empirical analysis of the actual threats AI poses.
Newsom said, “Crafting regulations specific to California might make sense—especially when Congress has not taken federal action—but those actions must be grounded in evidence and science.”
Newsom also questioned the bill’s focus on expensive AI models. The legislation sought to hold developers accountable for damages caused by their models, but only for models with training costs exceeding $100 million, a threshold no current model meets.
Newsom emphasized that even low-cost AI models could pose threats to public welfare or critical infrastructure, stating, “Smaller, more specialized models may be equally, if not more, dangerous compared to the models targeted by SB 1047.”
He argued that the burdens the legislation imposed could impede work that advances the public good, while giving the public a “false sense of security” about AI.
Newsom said California must enact AI regulations, and he pledged to continue collaborating with the legislature, federal partners, and stakeholders to “find the right path forward, including legislation and regulation.”
“California will not shirk its responsibility,” he added, emphasizing the need for proactive prevention measures and clear and enforceable consequences for bad actors.
In recent months, both supporters and opponents of the bill had exerted significant pressure on Newsom.
Supporters included Elon Musk, AI startup Anthropic, the Center for AI Safety (CAIS), nonprofit organization Encode Justice, the National Organization for Women (NOW), and whistleblowers from AI company OpenAI.
Last week, over 120 Hollywood figures wrote an open letter urging Newsom to sign the legislation, expressing concerns that the most powerful AI models could soon pose serious risks.
In an online poll of 1,000 California voters conducted by the Artificial Intelligence Policy Institute on August 4 and 5, 65% of respondents supported SB 1047, with a margin of error of 4.9 percentage points.
Opponents of the bill included tech giants such as Google, Meta, and OpenAI, who argued that the legislation would harm California’s economy and AI industry.
They contended that the bill would punish developers rather than those who misuse AI models, and argued that AI safety regulations should be established by federal agencies.
Even federal lawmakers joined the debate, with former Speaker of the House Nancy Pelosi and other California political figures opposing the bill.
Last month, Pelosi remarked that many in Congress saw the legislation as “well-intentioned but lacking thorough understanding.”
Earlier this month, Newsom had signed a series of bills aimed at preventing AI abuse, strengthening digital rights protections for actors and performers, and combating deepfakes, which are frequently used in political advertising.