OpenAI is turning its Safety and Security Committee into an independent “Board oversight committee” that has the authority to delay model releases over safety concerns, according to an OpenAI blog post. The committee recommended becoming an independent board body after a recent 90-day review of OpenAI’s “security-related processes and safeguards.”
The committee, chaired by Zico Kolter and consisting of Adam D'Angelo, Paul Nakasone, and Nicole Seligman, “will receive input from company management on security assessments for major model releases and, in conjunction with the full board, will exercise oversight over model releases, including the authority to delay a release until security concerns are addressed,” OpenAI says. OpenAI's full board of directors will also receive “regular reports” on “security issues.”
The members of OpenAI's Safety and Security Committee are also members of the company's board of directors, so it's unclear exactly how independent the committee actually is or how that independence is structured. We've reached out to OpenAI for comment.
By establishing an independent safety board, OpenAI appears to be taking a similar approach to Meta's Oversight Board, which reviews some of Meta's content policy decisions and can issue rulings that Meta must follow. None of the Oversight Board's members sit on Meta's board of directors.
The Safety and Security Committee's review also led to “additional opportunities for industry collaboration and information sharing to improve the safety of the AI industry.” The company also says it will look for “more ways to share and explain our safety work” and “more opportunities to conduct independent testing of our systems.”