Here's the thing about asking investors for money: they want to see profitability.
OpenAI was launched with a famously altruistic mission: to help humanity by developing artificial general intelligence. But along the way, it became one of the best-funded companies in Silicon Valley. Now, the tension between those two facts is coming to a head.
Weeks after launching a new model that it claims can “reason,” OpenAI is moving to abandon its nonprofit status, some of its most senior employees are leaving, and CEO Sam Altman, who was once briefly ousted over apparent trust concerns, is solidifying his position as one of the most powerful people in technology.
On Wednesday, Mira Murati, OpenAI's longtime CTO, announced she was leaving “to create the time and space to do my own exploration.” The same day, research chief Bob McGrew and vice president of post-training Barret Zoph said they would leave, too. Altman called the leadership changes “a natural part of business” in a post following Murati's announcement.
“Obviously I won't pretend that it's natural for this to be so abrupt, but we are not a normal company,” Altman wrote.
But it continues a wave of departures that has built up over the past year, following the board's failed attempt to fire Altman. OpenAI co-founder and chief scientist Ilya Sutskever, who broke the news of the firing to Altman before publicly walking back his criticism, left OpenAI in May. Jan Leike, a key safety researcher at OpenAI, resigned days later, saying that “safety culture and processes have taken a backseat to shiny products.” Nearly every OpenAI board member from the time of the ouster, except Quora CEO Adam D'Angelo, has resigned, and Altman gained a board seat.
He has since reshaped the company whose board once fired him for not being “consistently candid in his communications.”
It's no longer just a “donation”
OpenAI started as a nonprofit research lab and later added a for-profit subsidiary, OpenAI LP. The for-profit arm can raise the money needed to build artificial general intelligence (AGI), while the nonprofit's mission is to ensure that AGI benefits humanity.
In a bright pink box on OpenAI's board structure webpage, the company stresses that “it would be wise” to view any investment in OpenAI “in the spirit of a donation” and that investors “may not see any return.”
Investor returns are capped at 100x, with any excess going to the nonprofit, which prioritizes social benefit over financial gain. And if the for-profit side strays from that mission, the nonprofit side can step in.
We are far past the “spirit of a donation” here
Reports claim that OpenAI is now approaching a $150 billion valuation, roughly 37.5 times its reported revenue, with no path to profitability in sight. It is looking to raise funds from the likes of Thrive, Apple, and an investment firm backed by the United Arab Emirates, with a minimum investment of $250 million.
OpenAI doesn't have the deep pockets of established companies like Google or Meta, which are building competing models (though it's worth noting that those are public companies with their own obligations to Wall Street). AI startup Anthropic, founded by former OpenAI researchers, is hot on OpenAI's heels as it looks to raise new funding at a $40 billion valuation. We are far past the “spirit of a donation” here.
OpenAI's “for-profit run by a nonprofit” structure puts it at a disadvantage in that race. So it made sense when Altman reportedly told employees earlier this month that OpenAI would restructure as a for-profit company next year. This week, Bloomberg reported that the company is considering becoming a public benefit corporation (like Anthropic) and that investors want to give Altman a 7 percent stake. (Altman almost immediately denied this in a staff meeting, calling it “ridiculous.”)
And, most importantly, in the course of these changes, OpenAI's nonprofit parent would reportedly lose control. Just weeks after that news was reported, Murati and company were out.
Both Altman and Murati claim the timing is just a coincidence and that the CTO simply wants to leave while the company is “improving.” Murati (through representatives) declined to speak with The Verge about the sudden move. Wojciech Zaremba, one of the last remaining OpenAI co-founders, compared the departures to “the hardships parents faced in the Middle Ages, when 6 out of 8 children would die.”
Whatever the reason, this marks a near-total turnover of OpenAI's leadership since last year. Besides Altman himself, the last remaining leader pictured on Wired's September 2023 cover is president and co-founder Greg Brockman, who backed Altman during the coup. But even he has been on personal leave since August and isn't expected to return until next year. That same month, another co-founder and key leader, John Schulman, left to work at Anthropic.
When reached for comment, OpenAI spokesperson Lindsay McCallum Rémy pointed The Verge to previous comments made to CNBC.
And it's no longer just a “research lab”
As Leike hinted in his parting message about “shiny products,” turning the research lab into a for-profit company puts many of its longtime employees in an awkward position. Many presumably joined to focus on AI research, not to build and sell products. And while OpenAI is still partly governed by a nonprofit, it isn't hard to guess how a fully profit-driven version would operate.
Research labs work on longer timescales than revenue-driven companies. They can delay product releases when necessary, facing less pressure to ship quickly and scale. And, perhaps most importantly, they can be more conservative about safety.
There are already signs that OpenAI is prioritizing fast launches over cautious ones: a source told The Washington Post in July that the company threw a launch party for GPT-4o “prior to knowing if it was safe to launch.” The Wall Street Journal reported on Friday that safety staffers worked 20-hour days and didn't have time to double-check their work. Initial test results showed GPT-4o wasn't safe enough to deploy, but it was deployed anyway.
Meanwhile, OpenAI researchers keep working on what they see as the next steps toward human-level artificial intelligence. o1, OpenAI's first “reasoning” model, is the start of a new series the company hopes will power intelligent automated “agents.” The company is consistently shipping features just ahead of its competitors: this week, it launched Advanced Voice Mode for all users just days before Meta announced a similar product at Connect.
So what is OpenAI becoming? All signs point to a conventional technology company under the control of one powerful executive, exactly the structure it was founded to avoid.
“I think this will hopefully be a great transition for everyone involved and I hope OpenAI will be stronger for it, as we are for all of our transitions,” Altman said onstage at Italian Tech Week shortly after Murati's departure was announced.