If you follow AI news regularly, California's AI safety bill has probably caught your attention. SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, has been passed by both the State Assembly and the Senate, a big step forward in California's efforts to regulate artificial intelligence (AI). The bill has sparked intense debate in the tech community, especially in Silicon Valley, and is now awaiting a decision from Governor Gavin Newsom.
What is SB 1047?
SB 1047 is one of the first major pieces of AI legislation in the United States. It aims to reduce the risks posed by the most advanced AI models. The bill targets large AI developers, specifically those training models that cost at least $100 million to build. It requires these companies to establish strict safety procedures, including an "emergency stop" capability to shut a model down, testing protocols to identify potential risks, and annual third-party audits of their safety practices.
The bill also creates the Board of Frontier Models, a new oversight body responsible for ensuring compliance and advising on safety issues. Its members will be appointed by the governor and the legislature and will come from the AI industry, academia, and the open-source community.
Supporters vs. Opponents
Supporters of SB 1047 argue that the bill is necessary to prevent potential misuse of AI, such as large-scale hacking or the development of autonomous weapons. State Sen. Scott Wiener, the bill's author, stresses the need to act quickly, drawing lessons from past fights over regulating social media and data privacy.
“Let’s not wait for something bad to happen,” Wiener said, emphasizing the importance of implementing security measures immediately before artificial intelligence technologies become a global threat.
Geoffrey Hinton and Yoshua Bengio, two well-known AI experts, have backed the bill, citing concerns about the existential risks posed by uncontrolled AI development. Groups like the Center for AI Safety have also supported it, arguing that preventing a major AI safety incident is in the tech industry's long-term interest.
On the other hand, much of Silicon Valley is strongly opposed to the bill, arguing that SB 1047 could stifle innovation, especially for startups and open-source AI developers. Venture capital firms such as Andreessen Horowitz (a16z) contend that the bill's rules are arbitrary and could harm the AI ecosystem. They say that as AI models become more expensive to train, more startups will cross the $100 million threshold and be forced to follow the bill's strict rules, which could slow growth.
Even tech giants like Meta, OpenAI, and Google have voiced concerns. OpenAI believes that national security measures related to AI should be handled at the federal level, not by individual states. Yann LeCun, Meta's chief AI scientist, has criticized the bill as an overreaction to what he perceives as an "illusion of existential risk."
The changes and the way forward
In response to the backlash, several amendments were made to SB 1047. Potential criminal penalties were replaced with civil penalties, and the California attorney general's powers to enforce the law were narrowed. These changes softened some of the opposition: Dario Amodei, CEO of Anthropic, said the benefits of the bill now "likely outweigh its costs."
Even with these changes, the bill remains controversial. Prominent politicians with Silicon Valley ties, including Congressman Ro Khanna and former House Speaker Nancy Pelosi, are concerned that SB 1047 could harm California's innovation ecosystem. The U.S. Chamber of Commerce has also criticized the bill, warning that it could push tech companies out of the state.
Governor Newsom's decision
The tech sector is watching closely now that the bill is on Governor Newsom's desk. Newsom has until the end of September to sign it into law or veto it. If it becomes law, SB 1047 would set a major precedent for how artificial intelligence is regulated in the US, with potential ripple effects on the tech sector worldwide.
Whether or not the bill becomes law, the controversy over SB 1047 shows how difficult it is to regulate emerging technologies like AI. California sits at the center of the AI revolution and is still working out how to balance innovation with safety concerns.
Sources:
- https://www.morganlewis.com/pubs/2024/08/californias-sb-1047-would-impose-new-safety-requirements-for-developers-of-large-scale-ai-models
- https://www.theverge.com/2024/8/28/24229068/california-sb-1047-ai-safety-bill-passed-state-assembly-governor-newsom-signature
- https://www.techtimes.com/articles/307315/20240830/california-sb-1047-k-controversial-ai-safety-bill-recently-passed.htm
Dhanshree Shenwai is a Computer Science Engineer with extensive experience in FinTech companies spanning the Finance, Cards & Payments and Banking space and is keenly interested in the applications of artificial intelligence. She is excited to explore new technologies and advancements in today’s ever-changing world, making life easier for everyone.