California Governor Gavin Newsom has vetoed SB 1047, a bill that aimed to prevent bad actors from using AI to cause "critical harm" to humans. The California State Assembly passed the legislation by a 41-9 margin on August 28, but several organizations, including the Chamber of Commerce, had urged Newsom to veto the bill. In his veto message on September 29, Newsom said the bill is "well-intentioned" but "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making, or uses sensitive data." He added that it "applies strict standards to even the most basic functions, so long as a large system deploys it."
SB 1047 would have made developers of AI models responsible for adopting safety protocols to prevent catastrophic uses of their technology, including preventive measures such as testing and external risk assessment, as well as an "emergency shutdown" capability that could fully shut down the model. A first violation would have carried a minimum penalty of $10 million, with subsequent violations costing $30 million. However, the bill was revised to eliminate the state attorney general's ability to sue AI companies for negligent practices before a catastrophic event occurs; instead, companies would only have been subject to injunctive relief, and could have been sued if their model caused critical harm.
The law would have applied to AI models that cost at least $100 million to train and use 10^26 FLOPS (floating point operations) of computing power. It would also have covered derivative projects in which a third party invested $10 million or more to develop or modify the original model. Any company doing business in California would have been subject to the rules if it met the other requirements. Addressing the bill's focus on large-scale systems, Newsom said, "I don't think this is the best approach to protecting the public from the real threats posed by the technology." His veto message adds:
By focusing only on the most expensive and largest-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, more specialized models may emerge that are equally or even more dangerous than the models SB 1047 targets, at the potential cost of curtailing the very innovation that drives progress for the public good.
An earlier version of SB 1047 would have created a new department, the Frontier Model Division, to oversee and enforce the rules. The bill was amended before a committee vote to instead put governance in the hands of a Board of Frontier Models within the Government Operations Agency. Its nine members would have been appointed by the governor and the state legislature.
The bill faced a rocky road to the final vote. The author of SB 1047, California State Senator Scott Wiener, told TechCrunch: "We have a history with technology of waiting for harms to happen, and then wringing our hands. Let's not wait for something bad to happen. Let's just get out ahead of it." Notable AI researchers Geoffrey Hinton and Yoshua Bengio backed the legislation, as did the Center for AI Safety, which has been sounding the alarm about AI risks for the past year.
“Let me be clear – I agree with the author – we cannot afford to wait for a major catastrophe to occur before taking action to protect the public,” Newsom said in the veto message. The statement continues:
California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails must be implemented, and serious consequences for bad actors must be clear and enforceable. However, I do not agree that, to keep the public safe, we should settle for a solution that is not based on an empirical analysis of the trajectory of AI systems and capabilities. Ultimately, any framework to effectively regulate AI must keep pace with the technology itself.
SB 1047 generated strong opposition across the tech sector. Researcher Fei-Fei Li criticized the bill, as did Yann LeCun, Meta's chief AI scientist, arguing it would limit the potential to explore new uses of AI. A trade group that represents tech giants like Amazon, Apple and Google said SB 1047 would limit new developments in the state's technology sector. Venture capital firm Andreessen Horowitz and several startups also questioned whether the bill placed unnecessary financial burdens on AI innovators. Anthropic and other opponents of the original bill pushed for amendments that were adopted in the version of SB 1047 approved by the California Appropriations Committee on August 15.