When Elon Musk sued OpenAI and its CEO Sam Altman for breach of contract on Thursday, he weaponized claims from the startup's closest partner, Microsoft.
He repeatedly cited a controversial but highly influential paper written by Microsoft researchers and senior executives about the power of GPT-4, the groundbreaking OpenAI artificial intelligence system launched last March.
In the “Sparks of AGI” paper, Microsoft's research lab said that, although it did not understand how, GPT-4 had shown “sparks” of “artificial general intelligence,” or AGI: a machine that can do everything the human brain can do.
It was a bold claim, and it came as the world's biggest technology companies were rushing to build AI into their own products.
Musk is turning the document against OpenAI, saying it shows how OpenAI backtracked on its commitments not to commercialize truly powerful products.
Microsoft and OpenAI declined to comment on the lawsuit. (The New York Times has sued both companies, alleging copyright infringement in the GPT-4 training.) Musk did not respond to a request for comment.
How did the research work come about?
A team of Microsoft researchers, led by Sébastien Bubeck, a 38-year-old French expatriate and former Princeton professor, began testing an early version of GPT-4 in the fall of 2022, months before the technology was released to the public. Microsoft has committed $13 billion to OpenAI and negotiated exclusive access to the underlying technologies that power its AI systems.
As they chatted with the system, they were amazed. It wrote a complex mathematical proof in the form of a poem, generated computer code that could draw a unicorn, and explained the best way to stack a random, eclectic collection of household items. Dr. Bubeck and his research colleagues began to wonder if they were witnessing a new form of intelligence.
“I started out very skeptical, and that evolved into a feeling of frustration, annoyance and maybe even fear,” said Peter Lee, Microsoft's head of research. “You think: Where the hell is this coming from?”
What role does the paper play in Musk's suit?
Musk argued that OpenAI had breached its contract because it had agreed not to commercialize any product that its board of directors had determined to be AGI.
“GPT-4 is an AGI algorithm,” Musk's lawyers wrote. They said that meant the system should never have been licensed to Microsoft.
Musk's complaint repeatedly cited the Sparks paper to argue that GPT-4 was AGI. His lawyers said: “Microsoft's own scientists recognize that GPT-4 ‘achieves a form of general intelligence,’” and that “given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”
Since then, Microsoft's paper has been cited by more than 1,500 other articles, according to Google Scholar. It is one of the most cited articles on AI in the last five years, according to Semantic Scholar.
It has also faced criticism from experts, including some within Microsoft, who were concerned that the 155-page document supporting the claim lacked rigor and fueled an ai marketing frenzy.
The paper was not peer-reviewed, and its results cannot be reproduced because its experiments were conducted on early versions of GPT-4 that were closely guarded at Microsoft and OpenAI. As the authors noted in the paper, they did not use the GPT-4 version that was later released to the public, so anyone else replicating the experiments would get different results.
Some outside experts said it was unclear whether GPT-4 and similar systems exhibited behavior resembling human reasoning or common sense.
“When we see a complicated system or machine, we anthropomorphize it; everyone does that: people who work in this field and people who don't,” said Alison Gopnik, a professor at the University of California, Berkeley. “But thinking about this as a constant comparison between AI and humans, as some kind of game show competition, is just not the right way to think about it.”
Were there any other complaints?
In the introduction of the paper, the authors initially defined “intelligence” by quoting a 30-year-old Wall Street Journal op-ed that, in defending the ideas of the book “The Bell Curve,” claimed that “Jews and East Asians” were more likely to have higher IQs than “blacks and Hispanics.”
Dr. Lee, who is listed as an author on the paper, said in an interview last year that when researchers were looking to define AGI, “we took it from Wikipedia.” He said that when they later learned of the Bell Curve connection, “we were really mortified and made the change immediately.”
Eric Horvitz, chief scientist at Microsoft, who was a major contributor to the paper, wrote in an email that he personally took responsibility for inserting the reference, saying he had seen it cited in a paper by a co-founder of Google's DeepMind artificial intelligence lab and had not noticed the racist passages. When the team learned of the connection, through a post on X, “we were horrified, as we were simply looking for a reasonably broad definition of intelligence from psychologists,” he said.
Is this AGI or not?
When Microsoft researchers initially wrote the paper, they called it “First Contact with an AGI System.” But some members of the team, including Dr. Horvitz, disagreed with the characterization.
He later told The Times that they were not seeing something he “would call 'artificial general intelligence,' but rather glimpses through probes and sometimes surprisingly powerful results.”
GPT-4 is far from doing everything the human brain can do.
In a message sent to OpenAI employees on Friday afternoon that was seen by The Times, OpenAI chief strategy officer Jason Kwon explicitly said that GPT-4 was not AGI.
“It is capable of solving small tasks in many jobs, but the ratio of work done by a human to work done by GPT-4 in the economy is still astonishingly high,” he wrote. “Importantly, an AGI will be a highly autonomous system capable enough of devising novel solutions to long-standing challenges; GPT-4 cannot do that.”
Still, the article fueled claims by some researchers and experts that GPT-4 represented a significant step toward AGI and that companies like Microsoft and OpenAI would continue to improve the technology's reasoning abilities.
The AI field is still bitterly divided over how smart the technology is today, or how smart it will become in the near term. If Musk gets his way, a jury may settle the argument.