OpenAI and Anthropic have agreed to share AI models (before and after they are released) with the U.S. AI Safety Institute. The agency, established through an executive order by President Biden in 2023, will offer safety feedback to the companies to improve their models. OpenAI CEO Sam Altman hinted at the arrangement earlier this month.
The U.S. AI Safety Institute did not name any other AI companies, but in a statement to Engadget, a Google spokesperson said the company is in talks with the agency and will share more information when it becomes available. This week, Google began rolling out updated chatbot and image-generator models for Gemini.
“Safety is essential to driving breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” Elizabeth Kelly, director of the U.S. AI Safety Institute, wrote in a statement. “These agreements are just the beginning, but they are an important milestone as we work to help responsibly manage the future of AI.”
The U.S. AI Safety Institute is part of the National Institute of Standards and Technology (NIST). It creates and publishes guidelines, benchmarks, and best practices for testing and evaluating potentially dangerous AI systems. “Just as AI has the potential to do profound good, it also has the potential to do profound harm — from AI-enabled cyberattacks on a scale beyond anything we’ve seen before to AI-engineered bioweapons that could endanger the lives of millions,” Vice President Kamala Harris said in late 2023 after the agency was created.
These first-of-their-kind agreements take the form of a formal but non-binding memorandum of understanding. The agency will have access to each company’s “new core models” before and after their public release. The agency describes the agreements as collaborative research to evaluate capabilities and safety and to mitigate risks. The U.S. AI Safety Institute will also collaborate with the U.K. AI Safety Institute.
This comes as federal and state regulators try to set limits on AI while the rapidly advancing technology is still in its infancy. On Wednesday, the California state assembly approved an AI safety bill (SB 1047) that mandates safety testing for AI models that cost more than $100 million to develop or require a certain amount of computing power. The bill requires AI companies to have safety switches that can shut down models if they become “unwieldy or uncontrollable.”
Unlike the non-binding agreement with the federal government, California’s bill would carry some enforcement power. It gives the state’s attorney general license to sue if AI developers don’t comply, especially during high-threat events. However, the bill still requires one final procedural vote and the signature of Gov. Gavin Newsom, who will have until September 30 to decide whether to sign it into law.
Update, August 29, 2024, 4:53 p.m. ET: This story has been updated to add a response from a Google spokesperson.