It would be easy to dismiss Elon Musk's lawsuit against OpenAI as a case of sour grapes.
Musk sued OpenAI this week, accusing the company of breaching its founding agreement and violating its founding principles. According to him, OpenAI was established as a nonprofit that would build powerful artificial intelligence systems for the good of humanity and give its research away freely to the public. But Musk maintains that OpenAI broke that promise by starting a for-profit subsidiary that took on billions of dollars in investments from Microsoft.
An OpenAI spokeswoman declined to comment on the lawsuit. In a memo sent to employees on Friday, Jason Kwon, the company's chief strategy officer, denied Musk's claims, saying, "We believe the claims in this lawsuit may stem from Elon's regrets about not being involved with the company today," according to a copy of the memo I saw.
On one level, the lawsuit reeks of personal grievance. Musk, who founded OpenAI in 2015 along with a group of other tech heavyweights and provided much of its initial funding but left in 2018 over disputes with its leadership, resents being sidelined in the conversation about AI. His own AI projects haven't gained nearly as much traction as ChatGPT, OpenAI's flagship chatbot. And Musk's feud with Sam Altman, OpenAI's chief executive, has been well documented.
But amid all the animosity, there is one point worth highlighting, because it illustrates a paradox at the center of much of the current conversation about AI, and a place where OpenAI really has been talking out of both sides of its mouth: insisting that its artificial intelligence systems are incredibly powerful, while also claiming they are nowhere near matching human intelligence.
The claim centers on a term known as AGI, or "artificial general intelligence." Defining what constitutes AGI is notoriously tricky, although most people would agree that it means an AI system that can do most or all of the things the human brain can do. Mr. Altman has defined AGI as "the equivalent of a median human that you could hire as a coworker," while OpenAI itself defines AGI as "a highly autonomous system that outperforms humans at most economically valuable work."
Most AI company leaders say building AGI is not only possible but imminent. Demis Hassabis, the chief executive of Google DeepMind, told me in a recent podcast interview that he thought AGI could arrive as early as 2030. Mr. Altman has said that AGI may be only four or five years away.
Building AGI is OpenAI's explicit goal, and it has every reason to want to get there before anyone else. A true AGI would be an incredibly valuable resource, capable of automating enormous amounts of human labor and generating large amounts of money for its creators. It's also the kind of big, bold goal that investors love to fund and that helps AI labs recruit top engineers and researchers.
But AGI could also be dangerous if it is able to outsmart humans, or if it becomes deceptive or misaligned with human values. The people who started OpenAI, including Musk, worried that an AGI would be too powerful to be owned by a single entity, and that if they ever came close to building one, they would need to change the control structure around it to prevent it from doing harm or concentrating too much wealth and power in a single company's hands.
That's why, when OpenAI partnered with Microsoft, it specifically gave the tech giant a license that applied only to “pre-AGI” technologies. (The New York Times has sued Microsoft and OpenAI over their use of copyrighted works.)
Under the terms of the agreement, if OpenAI ever built something that met the definition of AGI, as determined by OpenAI's nonprofit board, Microsoft's license would no longer apply, and OpenAI's board could decide to do whatever it wanted to ensure that OpenAI's AGI benefited all of humanity. That could mean many things, including open-sourcing the technology or shutting it down entirely.
Most AI commentators believe that today's cutting-edge AI models do not qualify as AGI, because they lack sophisticated reasoning skills and frequently make boneheaded errors.
But in his legal filing, Musk makes an unusual argument. He maintains that OpenAI has already achieved AGI with its GPT-4 language model, which was released last year, and that the company's future technology will qualify as AGI even more clearly.
“Upon information and belief, GPT-4 is an AGI algorithm and is therefore expressly outside the scope of Microsoft's September 2020 exclusive license with OpenAI,” the complaint reads.
What Musk is arguing here is a little complicated. Basically, he is saying that because OpenAI has achieved AGI with GPT-4, the company can no longer license it to Microsoft, and that its board should make the technology and research more freely available.
His complaint cites the now-infamous "Sparks of AGI" paper by a Microsoft research team last year, which argued that GPT-4 demonstrated early signs of general intelligence, including signs of human-level reasoning.
But the complaint also notes that OpenAI's board is unlikely to decide that its AI systems actually qualify as AGI, because as soon as it did, it would have to make big changes to the way it deploys and profits from the technology.
Furthermore, he points out that Microsoft (which now has a nonvoting observer seat on OpenAI's board, after an upheaval last year that resulted in Mr. Altman's temporary ouster) has a strong incentive to deny that OpenAI's technology qualifies as AGI, since such a finding would end its license to use that technology in its products and jeopardize potentially huge profits.
"Given Microsoft's enormous financial interest in keeping the door closed to the public, OpenAI, Inc.'s captured, conflicted, and compliant new board will have every reason to delay determining that OpenAI has achieved AGI," the complaint reads. "By contrast, OpenAI's achievement of AGI, like 'Tomorrow' in 'Annie,' will always be a day away."
Given his track record of questionable litigation, it's easy to question Musk's motives here. And as the head of a competing AI startup, it's not surprising that he would want to tie up OpenAI in messy litigation. But his lawsuit points to a real conundrum for OpenAI.
Like its competitors, OpenAI desperately wants to be seen as a leader in the race to build AGI, and has a vested interest in convincing investors, business partners, and the public that its systems are improving at a dizzying pace.
But because of the terms of its deal with Microsoft, OpenAI's investors and executives may never want to admit that its technology actually qualifies as AGI, if and when it does.
That has put Musk in the strange position of asking a jury to rule on what constitutes AGI and decide whether OpenAI's technology has met the threshold.
The lawsuit has also put OpenAI in the strange position of playing down the capabilities of its own systems, while continuing to stoke anticipation that a big AGI breakthrough is right around the corner.
“GPT-4 is not an AGI,” OpenAI's Kwon wrote in the memo to employees on Friday. “It is capable of solving small tasks in many jobs, but the ratio of work done by a human to work done by GPT-4 in the economy is still astonishingly high.”
The personal dispute fueling Musk's complaint has led some to view it as a frivolous suit (one commentator compared it to "suing your ex because she remodeled the house after your divorce") that will be swiftly dismissed.
But even if it is dismissed, Musk's lawsuit points toward important questions: Who gets to decide when something qualifies as AGI? Are tech companies exaggerating or sandbagging (or both) when it comes to describing the capabilities of their systems? And what incentives lie behind the various claims about how close to or far from AGI we might be?
A lawsuit from a spiteful billionaire is probably not the right way to resolve those questions. But they're good ones to ask, especially as AI progress continues to speed ahead.