Before Sam Altman was ousted from OpenAI last week, he and the company’s board of directors had been arguing for more than a year. The tension worsened when OpenAI became a household name thanks to its popular ChatGPT chatbot.
At one point, Altman, the CEO, made the decision to oust one of the board members because he thought a research paper she had co-written was critical of the company.
Another member, Ilya Sutskever, thought Altman wasn’t always honest when speaking to the board. And some board members worried that Altman was too focused on expansion when they wanted to balance that growth with AI safety.
The news that he was being ousted came in a video conference on Friday afternoon, when Sutskever, who had worked closely with Altman at OpenAI for eight years, read him a statement. The decision surprised OpenAI employees and exposed board members to difficult questions about their qualifications to run such a high-profile company.
Those tensions apparently came to an end Tuesday night when Altman was reinstated as CEO. Sutskever and other Altman critics were removed from the board, whose members now include Bret Taylor, an early Facebook executive and former Salesforce co-CEO, and Larry Summers, the former Treasury secretary. The only remaining member of the old board is Adam D’Angelo, CEO of the question-and-answer site Quora.
The OpenAI debacle has illustrated how building AI systems is testing whether entrepreneurs who want to make money from artificial intelligence can work in sync with researchers who fear that what they are building could eventually eliminate jobs or become a threat if technologies such as autonomous weapons spiral out of control.
OpenAI was started in 2015 with an ambitious plan to one day create a superintelligent automated system that can do everything a human brain can do. But friction plagued the company’s board of directors, which could not even agree on replacements for members who had resigned.
Before Mr. Altman’s return, the company’s continued existence was in doubt. Nearly all of OpenAI’s 800 employees had threatened to follow Altman to Microsoft, which had asked him to run an AI lab with Greg Brockman, who resigned his roles as president and chairman of OpenAI’s board in solidarity with Altman.
The board had told Brockman that he would lose his board seat but invited him to remain with the company, although he was not invited to the meeting where the decision was made to remove him from the board and fire Altman from the company.
The problems with OpenAI’s board of directors date back to the nonprofit’s beginnings. In 2015, Altman teamed up with Elon Musk and others, including Sutskever, to create a nonprofit to develop AI that would be safe and beneficial to humanity. They planned to raise money from private donors for their mission. But after a few years, they realized that their computing needs required much more funding than they could raise from individuals.
After Musk left in 2018, they created a for-profit subsidiary that began raising billions of dollars from investors, including $1 billion from Microsoft. They said the subsidiary would be controlled by the nonprofit board and that each director’s fiduciary duty would be to “humanity, not OpenAI investors,” the company said on its website.
Among the tensions that led to Altman’s ouster and quick return was his conflict with Helen Toner, a board member and director of strategy at Georgetown University’s Center for Security and Emerging Technology. A few weeks before Altman’s firing, he met with Toner to discuss an article she had co-written for the Georgetown center.
Mr. Altman complained that the paper appeared to criticize OpenAI’s efforts to keep its AI technologies safe while praising the approach taken by Anthropic, a company that has become OpenAI’s biggest rival, according to an email Altman wrote to colleagues that was seen by The New York Times.
In the email, Altman said he had reprimanded Toner over the paper and that it was dangerous for the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology.
Toner defended the paper as an academic work that examines the challenges the public faces in trying to understand the intentions of the countries and companies developing AI. But Altman disagreed.
“I didn’t feel like we were on the same page about the harm of all of this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight.”
OpenAI’s senior leaders, including Mr. Sutskever, who is deeply concerned that AI could one day destroy humanity, later discussed whether Ms. Toner should be removed, a person involved in the talks said.
But soon after those discussions, Sutskever did the unexpected: He sided with board members to oust Altman, according to two people familiar with the board’s deliberations. The statement read to Mr. Altman said he was being fired because he was not “consistently candid in his communications with the board of directors.”
Sutskever’s frustration with Altman echoed what had happened in 2021, when another senior AI scientist left OpenAI to form Anthropic. That scientist and other researchers went to the board to try to oust Mr. Altman. After failing, they gave up and left, according to three people familiar with the attempt to oust Altman.
“After a series of reasonably amicable negotiations, Anthropic’s co-founders were able to negotiate their exit on mutually acceptable terms,” said an Anthropic spokeswoman, Sally Aldous. In a second statement, Anthropic added that “there was no attempt to ‘oust’ Sam Altman at the time Anthropic’s founders left OpenAI.”
The vacancies exacerbated the board’s problems. This year, the board disagreed on how to replace three departing directors: Reid Hoffman, founder of LinkedIn and a member of Microsoft’s board of directors; Shivon Zilis, chief operating officer of Neuralink, a company founded by Musk to implant computer chips in people’s brains; and Will Hurd, a former Republican congressman from Texas.
After vetting four candidates for one position, the remaining directors could not agree on who should fill it, two people familiar with the board’s deliberations said. The impasse hardened the divide between Altman and Brockman on one side and the other board members.
Hours after Altman was ousted, OpenAI executives confronted the remaining board members during a video call, according to three people who were on the call.
During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was jeopardizing the company’s future by ousting Altman. This, he said, violated members’ responsibilities.
Ms. Toner disagreed. The board’s mission was to ensure that the company created artificial intelligence that “benefits all of humanity,” and if the company were destroyed, she said, that could be consistent with its mission. In the board’s view, OpenAI would be stronger without Altman.
On Sunday, Brockman’s wife, Anna, urged Sutskever at the OpenAI office to change course, according to two people familiar with the exchange. Hours later, Sutskever signed a letter with other employees demanding the resignation of the independent directors. The confrontation between Mr. Sutskever and Ms. Brockman was previously reported by The Wall Street Journal.
At 5:15 a.m. Monday, Sutskever posted on X, formerly Twitter, that “I deeply regret my participation in the board’s actions.”