OpenAI has created a new Safety Committee less than two weeks after the company disbanded the team tasked with protecting humanity from existential AI threats. This latest version of the group responsible for OpenAI's guardrails will include board members and CEO Sam Altman, raising questions about whether the move is little more than self-policing theater amid a dizzying race for profits and dominance with its partner Microsoft.
The Safety Committee, formed by the OpenAI board of directors, will be led by board members Bret Taylor (chair), Nicole Seligman and Adam D'Angelo, along with CEO Sam Altman. The new team follows the high-profile resignations of co-founder Ilya Sutskever and Jan Leike, which raised more than a few eyebrows. Their former "Superalignment" team was created just last July.
After his resignation, Leike wrote in a May 17 X (Twitter) thread that, although he believed in the company's core mission, he left because the two sides (product and safety) "reached a breaking point." Leike added that he was "concerned that we aren't on a trajectory" to adequately address safety-related issues as AI becomes smarter. He posted that the Superalignment team had recently been "sailing against the wind" within the company and that "safety culture and processes have taken a backseat to shiny products."
A cynical view would be that a company focused primarily on "shiny products" — while trying to defend itself from the PR hit of high-profile safety departures — could create a new safety team led by the same people rushing toward those shiny products.
The safety-related departures earlier this month weren't the only troubling news from the company recently. It also released (and quickly pulled) a new voice model that sounded remarkably like two-time Oscar nominee Scarlett Johansson. The Jojo Rabbit actor then revealed that OpenAI CEO Sam Altman had requested her consent to use her voice to train an AI model, but she had refused.
In a statement to Engadget, Johansson's team said they were surprised that OpenAI chose a voice artist who "sounded so eerily similar" to her after requesting her permission. The statement added that Johansson's "closest friends and news outlets could not tell the difference."
OpenAI also walked back the non-disparagement agreements it had required of outgoing employees, changing its tone to say it would not enforce them. Before that, the company forced departing employees to choose between speaking out against the company and keeping the vested equity they had earned.
The Safety Committee plans to “evaluate and further develop” the company's processes and safeguards over the next 90 days. After that, the group will share its recommendations with the entire board. After the entire leadership team reviews its findings, it will “publicly share an update on the adopted recommendations in a manner that is consistent with safety.”
In its blog post announcing the new Safety Committee, OpenAI confirmed that the company is currently training its next model, which will succeed GPT-4. "While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment," the company wrote.