A letter co-signed by Elon Musk and thousands of others demanding a pause on artificial intelligence research has created a firestorm, after researchers cited in the letter condemned the use of their work, some signatories were revealed to be fake, and others withdrew their support.
On March 22, more than 1,800 signatories, including Musk, cognitive scientist Gary Marcus, and Apple co-founder Steve Wozniak, called for a six-month pause on the development of systems “more powerful” than GPT-4. Engineers from Amazon, DeepMind, Google, Meta, and Microsoft also lent their support.
Developed by OpenAI, a company co-founded by Musk and now backed by Microsoft, GPT-4 can hold human-like conversations, compose songs, and summarize long documents. AI systems with such “human-competitive intelligence” pose profound risks to humanity, the letter states.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and monitored by independent third-party experts,” the letter said.
The Future of Life Institute, the think tank that coordinated the effort, cited 12 pieces of research by experts, including university academics as well as current and former employees of OpenAI, Google, and its DeepMind subsidiary. But four experts cited in the letter have raised concerns that their research was used to make such claims.
When initially released, the letter lacked verification protocols for signing and accumulated signatures from people who had not actually signed it, including Xi Jinping and Meta chief AI scientist Yann LeCun, who clarified on Twitter that he did not support it.
Critics have accused the Future of Life Institute (FLI), which is primarily funded by the Musk Foundation, of prioritizing imagined doomsday scenarios over more immediate concerns about AI, such as racist or sexist biases being programmed into machines.
Among the research cited was “On the Dangers of Stochastic Parrots,” a well-known paper co-authored by Margaret Mitchell, who previously oversaw AI ethics research at Google. Mitchell, now chief ethics scientist at artificial intelligence firm Hugging Face, criticized the letter, telling Reuters it was unclear what counted as “more powerful than GPT-4”.
“By treating many questionable ideas as fact, the letter affirms a set of priorities and a narrative on AI that benefits FLI’s supporters,” she said. “Ignoring active harms right now is a privilege that some of us don’t have.”
Her co-authors Timnit Gebru and Emily M. Bender criticized the letter on Twitter, with the latter calling some of its claims “unhinged”. Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with the mention of her work in the letter. Last year she co-authored a research paper arguing that the widespread use of AI already posed serious risks.
Their research argued that the current use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats.
She told Reuters: “AI doesn’t need to reach human-level intelligence to exacerbate those risks.”
“There are non-existential risks that are really, really important, but they don’t receive the same kind of Hollywood-level attention.”
Asked to comment on the criticism, FLI president Max Tegmark said that both the short- and long-term risks of AI must be taken seriously. “If we cite someone, it just means we assert that they support that sentence. It does not mean they endorse the letter, or that we endorse everything they think,” he told Reuters.