An open letter signed by technology leaders and prominent AI researchers has called on AI labs and companies to "immediately pause" their work. Signatories such as Steve Wozniak and Elon Musk agree that the risks warrant a minimum six-month break from producing technology more powerful than GPT-4, so that people can enjoy existing AI systems, adjust to them, and ensure they benefit everyone. The letter adds that care and foresight are necessary to ensure the safety of AI systems, but are being ignored.
The reference to GPT-4, an OpenAI model that can respond with text to written or visual input, comes as companies race to build complex chat systems on top of the technology. Microsoft, for example, recently confirmed that its revamped Bing search engine has been powered by the GPT-4 model for more than seven weeks, while Google recently introduced Bard, its own generative AI system powered by LaMDA. Concerns about AI have long circulated, but the apparent race to deploy the most advanced AI technology first has made them more pressing.
"Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control," the letter states.
The letter in question was published by the Future of Life Institute (FLI), an organization dedicated to minimizing the risks and misuse of new technologies. Musk previously donated $10 million to FLI to fund studies on AI safety. In addition to him and Wozniak, the signatories include a host of prominent AI figures, including Center for AI and Digital Policy president Marc Rotenberg, MIT physicist and FLI president Max Tegmark, and the author Yuval Noah Harari. Harari also co-wrote an opinion piece in the New York Times last week warning of the risks of AI, joined by Center for Humane Technology founders and fellow signatories Tristan Harris and Aza Raskin.
This call feels like the next step after a 2022 survey of more than 700 machine learning researchers, in which almost half of the participants said there is a 10 percent chance of an "extremely bad outcome" from AI, including human extinction. When asked about safety in AI research, 68 percent of respondents said more or much more should be done.
Anyone who shares concerns about the pace and safety of AI development can add their name to the letter. However, new names are not necessarily verified, so notable additions that appear after the initial publication may be fake.