In a new report, a California-based policy group co-led by Fei-Fei Li, an AI pioneer, suggests that lawmakers should consider AI risks that “have not yet been observed in the world” when crafting AI regulatory policies.
The 41-page interim report, released on Tuesday, comes from the Joint California Policy Working Group on Frontier AI Models, an effort organized by Governor Gavin Newsom after his veto of California's controversial AI safety bill, SB 1047.
In the report, Li, along with co-authors Jennifer Chayes, dean of the UC Berkeley College of Computing, and Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace, argues in favor of laws that would increase transparency into what frontier AI labs such as OpenAI are building. Industry stakeholders from across the ideological spectrum reviewed the report before its publication, including staunch AI safety advocates like Turing Award winner Yoshua Bengio as well as those who argued against SB 1047, such as Databricks co-founder Ion Stoica.
According to the report, the novel risks posed by AI systems may require laws that would force AI model developers to publicly report their safety tests, data acquisition practices, and security measures. The report also advocates for stronger standards around third-party evaluations of these metrics and corporate policies, as well as expanded whistleblower protections for AI company employees and contractors.
Li and her co-authors write that there is an “inconclusive level of evidence” for AI's potential to help carry out cyberattacks, create biological weapons, or bring about other “extreme” threats. However, they also argue that AI policy should not only address current risks but also anticipate future consequences that could occur without sufficient safeguards.
“For example, we do not need to observe a nuclear weapon [exploding] to predict reliably that it could and would cause extensive harm,” the report states. “If those who speculate about the most extreme risks are right, and we are uncertain if they will be, then the stakes and costs for inaction on frontier AI at this current moment are extremely high.”
The report recommends a two-pronged strategy to boost transparency into AI model development: trust but verify. AI model developers and their employees should be given avenues to report on areas of public concern, such as internal safety testing, the report says, while also being required to submit their testing claims for third-party verification.
While the report, whose final version is due out in June 2025, endorses no specific legislation, it has been well received by experts on both sides of the AI policymaking debate.
Dean Ball, an AI-focused research fellow at George Mason University who was critical of SB 1047, said in a post on X (https://x.com/deanwball/status/1902154214068928688) that the report was a promising step for California's AI safety regulation. It is also a win for AI safety advocates, according to California state senator Scott Wiener, who introduced SB 1047 last year. Wiener said in a press release that the report builds on “urgent conversations around AI governance we began in the legislature [in 2024].”
The report appears to align with several components of SB 1047 and Wiener's follow-up bill, SB 53, such as requiring AI model developers to report the results of safety tests. Taken more broadly, it seems to be a much-needed win for AI safety advocates, whose agenda has lost ground over the past year.