What happened at OpenAI over the past five days could be described in many ways: a juicy boardroom drama, a tug-of-war over one of America’s biggest startups, a clash between those who want to see AI progress faster and those who want to slow it down.
But the most important thing was a fight between two competing visions of artificial intelligence.
According to one vision, AI is a transformative new tool, the latest in a line of world-changing innovations that includes the steam engine, electricity and the personal computer, and one that, if put to the right use, could usher in a new era of prosperity and generate large amounts of money for the companies that harness its potential.
In the other vision, AI is something closer to an alien life form, a leviathan summoned from the mathematical depths of neural networks, that must be restrained and deployed with extreme caution to prevent it from taking over and killing us all.
With Sam Altman’s return on Tuesday to OpenAI, the company whose board fired him as CEO last Friday, the battle between these two views appears to be over.
Team Capitalism won. Team Leviathan lost.
OpenAI’s new board will consist of three people, at least initially: Adam D’Angelo, CEO of Quora (and the only holdover from the old board); Bret Taylor, former Facebook and Salesforce executive; and Lawrence H. Summers, former Secretary of the Treasury. The board is expected to grow from there.
OpenAI’s largest investor, Microsoft, is also expected to have a greater say in OpenAI’s governance in the future. That may include a seat on the board of directors.
Three of the members who pushed for Altman’s removal are gone from the board: Ilya Sutskever, OpenAI’s chief scientist (who has since said he regrets his decision); Helen Toner, director of strategy at Georgetown University’s Center for Security and Emerging Technology; and Tasha McCauley, an entrepreneur and researcher at the RAND Corporation.
Sutskever, Toner and McCauley are representative of the kind of people who were heavily involved in thinking about AI a decade ago: an eclectic mix of academics, Silicon Valley futurists and computer scientists. They viewed the technology with a mix of fear and awe, and worried about theoretical future events like the “singularity,” a point at which AI would surpass our ability to contain it. Many were affiliated with philosophical movements such as effective altruism, which uses data and rationality to make moral decisions, and were drawn to work on AI by a desire to minimize the technology’s destructive effects.
This was the vibe around AI in 2015, when OpenAI was formed as a nonprofit, and it helps explain why the organization kept its intricate governance structure, which gave the nonprofit board the power to control the company’s operations and replace its leadership, even after it started a for-profit arm in 2019. At the time, many in the industry considered protecting AI from the forces of capitalism a top priority, one to be enshrined in the company’s bylaws.
But a lot has changed since 2019. Powerful AI is no longer just a thought experiment: it exists in real products, like ChatGPT, that are used by millions of people every day. The world’s largest technology companies are racing to build even more powerful systems. And billions of dollars are being spent to build and deploy AI inside companies, in hopes of cutting labor costs and increasing productivity.
The new board members are the kind of business leaders one would expect to oversee such a project. Taylor, the new chairman, is an experienced Silicon Valley dealmaker who led the sale of Twitter to Elon Musk last year, when he was Twitter’s chairman. And Summers is the ur-capitalist, a leading economist who believes that technological change is a “net good” for society.
There may still be voices of caution on the reconstituted OpenAI board, or figures from the AI safety movement. But they will not have veto power, or the ability to effectively shut down the company in an instant, as the old board did. And their preferences will be balanced against those of others, such as the company’s executives and investors.
That’s a good thing if you’re Microsoft, or any of the thousands of other companies that rely on OpenAI’s technology. More traditional governance means less risk of a sudden blowup, or a change that forces you to switch AI vendors in a hurry.
And perhaps what happened at OpenAI, a triumph of corporate interests over worries about the future, was inevitable, given AI’s growing importance. A technology potentially capable of ushering in a Fourth Industrial Revolution was unlikely to be governed in the long term by those who wanted to slow it down, not with so much money at stake.
Some traces of the old attitudes remain in the AI industry. Anthropic, a rival company founded by a group of former OpenAI employees, has set itself up as a public benefit corporation, a legal structure meant to insulate it from market pressures. And an active open-source AI movement has advocated that AI remain free of corporate control.
But these are best seen as the last vestiges of the old era of AI, in which the people who built it regarded the technology with both awe and terror, and sought to restrain its power through organizational governance.
Now, the utopians are in charge. Full speed ahead.