Over the past few decades, we have seen how the internet and smartphones rapidly transformed our lives. Artificial intelligence is now poised to do the same, but some experts worry that the current pace of its development will cause harm.
An open letter signed by hundreds of leaders in the world of technology, including Apple co-founder Steve Wozniak and Elon Musk, has proposed pausing the development of any AI more powerful than OpenAI’s GPT-4.
They expressed concern that AI labs were locked in an “out-of-control race to develop and deploy increasingly powerful digital minds that no one, not even their creators, can reliably understand, predict, or control.”
The Guardian’s UK technology editor, Alex Hern, tells Hannah Moore where these worries come from.
“You’ve already seen the rapid, rapid increase in capabilities of this cutting-edge artificial intelligence, and that’s with external humans having to do the actual work,” Hern tells Moore. “One fear is that once you end up with an AI that can significantly improve the speed of AI research…you go from something that is GPT-4 to super smart in a matter of years or even months.”
[Image] Photograph: Lionel Bonaventure/AFP/Getty Images