In light of criticism over its approach to AI safety, OpenAI has formed a new committee to oversee "critical" safety and security decisions related to the company's projects and operations. But in a move sure to draw the ire of some ethicists, OpenAI has staffed the committee exclusively with company insiders, including CEO Sam Altman, rather than outside observers.
Joining Altman on the Safety and Security Committee are OpenAI board members Bret Taylor, Adam D'Angelo and Nicole Seligman, as well as chief scientist Jakub Pachocki, Aleksander Madry (who leads OpenAI's "preparedness" team), Lilian Weng (head of safety systems), Matt Knight (head of security) and John Schulman (head of "alignment science"). The committee will be responsible for evaluating OpenAI's processes and safeguards over the next 90 days, according to a post on the company's official blog. It will then share its findings and recommendations with the full OpenAI board of directors for review, at which point it will publish an update on any adopted suggestions "in a manner that is consistent with safety and security."
"OpenAI has recently begun training its next frontier model and we anticipate that the resulting systems will take us to the next level of capabilities on our path to [artificial general intelligence]," OpenAI writes. "While we are proud to build and launch models that are industry-leading in both capabilities and safety, we welcome robust debate at this important time."
In recent months, OpenAI has seen several high-profile departures from the safety side of its technical team, and some of these former employees have voiced concern about what they see as an intentional deprioritization of AI safety.
Daniel Kokotajlo, who worked on OpenAI's governance team, quit in April after losing confidence that OpenAI would "behave responsibly" around the release of increasingly capable AI, as he wrote in a post on his personal blog. And Ilya Sutskever, an OpenAI co-founder and the company's former chief scientist, left in May after a protracted battle with Altman and Altman's allies, reportedly in part over Altman's rush to launch AI-powered products at the expense of safety work.
More recently, Jan Leike, a former DeepMind researcher who while at OpenAI was involved in the development of ChatGPT and its predecessor InstructGPT, stepped down from his safety research role, saying in a series of posts on X that he believed OpenAI was "not on track" to address AI safety issues "correctly." AI policy researcher Gretchen Krueger, who left OpenAI last week, echoed Leike's statements, calling on the company to improve its accountability and transparency and "the care with which [it uses its] own technology."
Quartz notes that, in addition to Sutskever, Kokotajlo, Leike and Krueger, at least five of OpenAI's most safety-conscious employees have resigned or been pushed out since late last year, including former OpenAI board members Helen Toner and Tasha McCauley. In an opinion piece for The Economist published on Sunday, Toner and McCauley wrote that, with Altman at the helm, they don't believe OpenAI can be trusted to hold itself accountable.
"Based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives," Toner and McCauley wrote.
To Toner and McCauley's point, TechCrunch reported earlier this month that OpenAI's Superalignment team, responsible for developing ways to govern and steer "superintelligent" AI systems, was promised 20% of the company's computing resources but rarely, if ever, received a fraction of that. The Superalignment team has since been disbanded, with much of its work falling to Schulman and to a safety advisory group that OpenAI formed in December.
OpenAI has advocated for AI regulation. At the same time, it has made efforts to shape that regulation, hiring an in-house lobbyist and lobbyists at a growing number of law firms, and spending hundreds of thousands of dollars on lobbying in the US in the fourth quarter of 2023 alone. Recently, the US Department of Homeland Security announced that Altman would be among the members of its newly formed AI Safety and Security Board, which will provide recommendations for the "secure development and deployment of AI" across US critical infrastructure.
In an effort to avoid the appearance of an ethical fig leaf with the executive-dominated Safety and Security Committee, OpenAI has committed to retaining external "safety and security" experts to support the committee's work, including cybersecurity veteran Rob Joyce and John Carlin, a former US Department of Justice official. Beyond Joyce and Carlin, however, the company has not detailed the size or composition of this group of outside experts, nor has it shed light on the limits of the group's power and influence over the committee.
In a post on X, Bloomberg columnist Parmy Olson observed that corporate oversight boards like the Safety and Security Committee tend to accomplish little "in terms of actual supervision." It is telling that OpenAI says it is seeking to address "valid criticisms" of its work through the committee; "valid criticism" is in the eye of the beholder, of course.
Altman once promised that outsiders would play an important role in OpenAI's governance. In a 2016 article in the New Yorker, he said that OpenAI "[would] plan a way to allow large areas of the world to elect representatives to a… governing board." That never happened, and it seems very unlikely to happen at this point.