In the summer of 2023, OpenAI created a “Superalignment” team whose goal was to direct and control future AI systems so powerful they could lead to human extinction. Less than a year later, that team is dead.
OpenAI told Bloomberg that the company was “integrating the group more deeply into its research efforts to help the company achieve its safety goals.” But a series of tweets from Jan Leike, one of the team's leaders who recently resigned, revealed internal tensions between the safety team and the company as a whole.
In a statement posted on X on Friday, Leike said the Superalignment team had been struggling for the resources it needed to conduct its research. “Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.” OpenAI did not immediately respond to a request for comment from Engadget.
Leike's departure earlier this week came hours after OpenAI chief scientist Ilya Sutskever announced he was leaving the company. Sutskever was not only one of the leaders of the Superalignment team, he also co-founded the company. Sutskever's move came six months after he was involved in the decision to fire CEO Sam Altman over concerns that Altman had not been “consistently candid” with the board. Altman's all-too-brief ouster sparked an internal revolt at the company, with nearly 800 employees signing a letter threatening to resign if Altman was not reinstated. Five days later, Altman returned as OpenAI's CEO after Sutskever, who had signed the letter, said he regretted his actions.
When it announced the creation of the Superalignment team, OpenAI said it would dedicate 20 percent of its computing power over the next four years to the problem of controlling the powerful AI systems of the future. “[Getting] this right is critical to achieving our mission,” the company wrote at the time. On X, Leike wrote that the Superalignment team was “struggling for compute and it was getting harder and harder” to conduct crucial AI safety research. “Over the past few months my team has been sailing against the wind,” he wrote, adding that he had reached “a breaking point” with OpenAI's leadership over disagreements about the company's core priorities.
In recent months, there have been more departures from the Superalignment team. In April, OpenAI reportedly fired two researchers, Leopold Aschenbrenner and Pavel Izmailov, for allegedly leaking information.
OpenAI told Bloomberg that its future safety efforts will be led by John Schulman, another co-founder, whose research focuses on large language models. Jakub Pachocki, a director who led the development of GPT-4, one of OpenAI's flagship large language models, will replace Sutskever as chief scientist.
Superalignment wasn't the only OpenAI team focused on AI safety. In October, the company created a new “preparedness” team to curb potential “catastrophic risks” from artificial intelligence systems, including cybersecurity issues and chemical, nuclear and biological threats.
Update, May 17, 2024, 3:28PM ET: In response to a request for comment on Leike's allegations, an OpenAI public relations person directed Engadget to Sam Altman's tweet saying that he would have something to say in the next couple of days.