The OpenAI power struggle that captivated the tech world after the firing of co-founder Sam Altman has finally reached an end, at least for the moment. But what to make of it?
A eulogy almost seems in order, as if OpenAI died and a new, but not necessarily improved, startup now stands in its place. Altman, the former president of Y Combinator, is back at the helm, but is his return justified? OpenAI's new board of directors is off to a less diverse start (it is, at present, entirely white and male), and the company's founding philanthropic aims are in danger of being co-opted by more capitalist interests.
That’s not to say that the old OpenAI was perfect by any means.
As of Friday morning, OpenAI had a six-person board of directors: Altman, OpenAI chief scientist Ilya Sutskever, OpenAI president Greg Brockman, tech entrepreneur Tasha McCauley, Quora CEO Adam D'Angelo, and Helen Toner, director of strategy at Georgetown's Center for Security and Emerging Technology. The board technically answered to a nonprofit parent that held a majority stake in OpenAI's for-profit arm, with absolute decision-making power over the for-profit's activities, investments, and overall direction.
OpenAI's unusual structure was established by the company's co-founders, including Altman, with the best of intentions. The nonprofit's exceptionally brief (500-word) charter directs the board to make decisions ensuring "that artificial general intelligence benefits all humanity," leaving it up to the board members to decide how best to interpret that. Neither "profit" nor "revenue" gets a mention in this North Star document; Toner reportedly once told Altman's executive team that triggering OpenAI's collapse "would actually be consistent with [the nonprofit's] mission."
Perhaps the arrangement would have worked in some parallel universe; for years, it seemed to work well enough at OpenAI. But once powerful investors and partners got involved, things became... more complicated.
Altman’s firing unites Microsoft and OpenAI employees
After the board abruptly fired Altman on Friday without notifying anyone, including most of OpenAI’s 770-person workforce, the startup’s backers began expressing their discontent both privately and publicly.
Satya Nadella, the CEO of Microsoft, a major backer of OpenAI, was reportedly "furious" to learn of Altman's departure. Vinod Khosla, the founder of Khosla Ventures, another OpenAI backer, said on X (formerly Twitter) that the firm wanted Altman back. Meanwhile, Thrive Capital, Khosla Ventures, Tiger Global Management and Sequoia Capital were said to be contemplating legal action against the board if weekend negotiations to reinstate Altman didn't pan out.
Now, OpenAI's employees weren't necessarily aligned with these investors, outward appearances aside. On the contrary, nearly all of them, including Sutskever in an apparent change of heart, signed a letter threatening the board with mass resignation if it didn't reverse course. But consider that these OpenAI employees had a lot to lose if OpenAI fell apart, job offers from Microsoft and Salesforce notwithstanding.
OpenAI had been in talks, led by Thrive, to possibly sell employee shares in a move that would have increased the company’s valuation from $29 billion to between $80 billion and $90 billion. Altman’s sudden departure, and OpenAI’s rotating cast of questionable interim CEOs, gave Thrive a scare, putting the sale in jeopardy.
Altman won the five-day battle, but at what cost?
But now, after several breathless days of hand-wringing and hair-pulling, a resolution of sorts has been reached. Altman, along with Brockman, who resigned Friday in protest of the board's decision, is back, albeit subject to a background investigation into the concerns that precipitated his ouster. OpenAI has a new transitional board, satisfying one of Altman's demands. And OpenAI will reportedly retain its structure, with caps on investors' profits and the board free to make decisions that aren't revenue-driven.
Salesforce CEO Marc Benioff posted on X that "the good guys" won. But it may be premature to say so.
Sure, Altman "won," besting a board that accused him of "not [being] consistently candid" with board members and, according to some reports, of putting growth before mission. In one example of this alleged misbehavior, Altman reportedly criticized Toner over a paper she co-authored that cast OpenAI's approach to safety in a critical light, to the point where he attempted to push her off the board. In another, Altman reportedly "enraged" Sutskever by rushing the launch of AI-powered features at OpenAI's first developer conference.
The board gave no concrete explanation even after repeated opportunities to do so, citing possible legal challenges. And it's safe to say it dismissed Altman in an unnecessarily histrionic way. But there's no denying the directors may have had valid reasons for letting him go, at least depending on how they interpreted their humanist directive.
It seems likely that the new board will interpret that directive differently.
Currently, OpenAI's board of directors consists of former Salesforce co-CEO Bret Taylor, D'Angelo (the only holdover from the original board) and Larry Summers, the economist and former Harvard president. Taylor is an entrepreneur's entrepreneur, having co-founded numerous companies, including FriendFeed (acquired by Facebook) and Quip (through whose acquisition he landed at Salesforce). Summers, meanwhile, has deep business and government connections, an asset that likely factored into OpenAI's selection of him at a time when regulatory scrutiny of AI is intensifying.
However, the directors don't look like an unqualified "win" to this journalist, not if the intention was diversity of viewpoints. While six seats remain to be filled, the initial three set a fairly homogeneous tone; such a board would, in fact, be illegal in Europe, which mandates that companies reserve at least 40% of their board seats for women candidates.
Why some ai experts are worried about OpenAI’s new board
I'm not the only one who is disappointed. Several AI academics took to X today to express their frustrations.
Noah Giansiracusa, a mathematics professor at Bentley University and the author of a book on social media recommendation algorithms, takes issue both with the all-male makeup of the board and with the nomination of Summers, who, he notes, has a history of making unflattering remarks about women.
"Whatever one makes of these incidents, the optics are not good, to say the least, especially for a company that has been leading the development of AI and reshaping the world we live in," Giansiracusa said via text message. "What I find particularly concerning is that OpenAI's primary goal is to develop artificial general intelligence that 'benefits all humanity.' Given that half of humanity is women, recent events don't give me much confidence in this regard. Toner most directly represents the safety side of AI, and this has often been the position women have been placed in, throughout history but especially in technology: protecting society from great harms while men get the credit for innovating and ruling the world."
Christopher Manning, director of the Stanford AI Lab, is a bit more charitable than Giansiracusa, but agrees with the thrust of his assessment:
"The newly formed OpenAI board is presumably still incomplete," he told TechCrunch. "Nevertheless, the current board of directors, which lacks anyone with deep knowledge about the responsible use of AI in human society and is composed solely of white men, is not a promising start for such a large and influential AI company."
Inequity pervades the AI industry, from the annotators who label the data used to train generative AI models to the harmful biases that often emerge in those trained models, including OpenAI's. Summers, to be fair, has expressed concern about the potentially harmful ramifications of AI, at least as they relate to livelihoods. But the critics I spoke with find it hard to believe that a board like OpenAI's current one will consistently prioritize these challenges, at least not in the way a more diverse board would.
It raises the question: why didn't OpenAI attempt to recruit a well-known AI ethicist like Timnit Gebru or Margaret Mitchell for the initial board? Were they "unavailable"? Did they decline? Or did OpenAI not try in the first place? We may never know.
OpenAI reportedly considered Laurene Powell Jobs and Marissa Mayer for board seats, but they were deemed too close to Altman. Condoleezza Rice's name was also floated, but she was ultimately passed over.
OpenAI has a chance to prove itself wiser and more worldly in selecting the five remaining board seats (or three, should Altman and a Microsoft executive each take one, as has been rumored). If it doesn't choose a more diverse path, what Daniel Colson, director of the think tank the AI Policy Institute, said on X may well prove true: a few people, or a single lab, can't be trusted to ensure that AI is developed responsibly.
Updated 11/23 at 11:26am ET: Incorporated a post from Timnit Gebru and information from a report on potential overlooked OpenAI board members.