More than 1,000 artificial intelligence experts, researchers and funders have joined a call for an immediate pause on the creation of “giant” AIs for at least six months, so that the capabilities and dangers of systems like GPT-4 can be studied and appropriately mitigated.
The demand is made in an open letter signed by major AI players, including: Elon Musk, who co-founded OpenAI, the research lab responsible for ChatGPT and GPT-4; Emad Mostaque, who founded London-based Stability AI; and Steve Wozniak, co-founder of Apple.
Its signatories also include engineers from Amazon, DeepMind, Google, Meta and Microsoft, as well as academics such as cognitive scientist Gary Marcus.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy increasingly powerful digital minds that no one, not even their creators, can reliably understand, predict, or control,” the letter reads.
“Powerful artificial intelligence systems should be developed only once we are sure that their effects will be positive and their risks are manageable.”
The authors, coordinated by the “longtermist” think tank the Future of Life Institute, cite OpenAI’s own co-founder, Sam Altman, to justify their calls.
In a February post, Altman wrote: “At some point, it may be important to get independent review before starting to train future systems, and for more advanced efforts to agree to limit the growth rate of the computation used to create new models.”
The letter continued: “We agree. That point is now.”
If researchers don’t voluntarily pause their work on AI models more powerful than GPT-4, the letter’s benchmark for “giant” models, then “governments should step in,” the authors say.
“This does not mean a pause in AI development in general, just a step back in the perilous race towards ever larger unpredictable black box models with emergent capabilities,” they add.
Since the release of GPT-4, OpenAI has been adding capabilities to the AI system with “plugins,” giving it the ability to search the open web for data, plan holidays, and even order groceries. But the company also has to deal with “capability overhang”: the problem that its own systems are more powerful than it realizes at launch.
As researchers experiment with GPT-4 over the coming weeks and months, they are likely to discover new ways of prompting the system that improve its ability to solve difficult problems.
A recent discovery was that the AI is noticeably more accurate at answering questions if it is first told to answer “in the style of an informed expert.”
The call for strict regulation contrasts with the UK government’s AI regulation white paper, published on Wednesday, which contains no new powers at all.
Instead, the government says, the focus is on coordinating existing regulators, such as the Competition and Markets Authority and the Health and Safety Executive, and offering five “principles” through which they should think about AI.
“Our new approach is based on sound principles so that people can trust companies to unleash this technology of tomorrow,” said Secretary for Science, Innovation and Technology Michelle Donelan.
The Ada Lovelace Institute was among those who criticized the announcement. “The UK approach has significant gaps, which could leave harm unaddressed, and is underpowered relative to the urgency and scale of the challenge,” said Michael Birtwistle, who leads the research institute’s work on data and AI law and policy.
“The government’s timeline of a year or more for implementation will leave risks unaddressed just as AI systems are being integrated into our daily lives at pace, from search engines to office suite software.”
Labour joined the criticism, with shadow culture secretary Lucy Powell accusing the government of “cheating its end of the bargain”.
She said: “This regulation will take months, if not years, to take effect. Meanwhile, ChatGPT, Google’s Bard, and many others are making AI a regular part of our everyday lives.
“The government risks reinforcing the gaps in our existing regulatory system and making the system enormously complex for businesses and citizens to navigate, at the same time as it is weakening those foundations through its forthcoming data bill.”