Sam Altman, CEO of OpenAI, the company that developed the controversial consumer-facing AI app ChatGPT, warned that the technology carries real dangers as it reshapes society.
Altman, 37, stressed that regulators and society must engage with the technology to guard against potential negative consequences for humanity. “We have to be careful here,” Altman told ABC News on Thursday, adding: “I think people should be happy that we’re a little scared about this.”
“I am particularly concerned that these models could be used for large-scale disinformation,” Altman said. “Now that they are getting better at writing computer code, [they] could be used for offensive cyber attacks.”
But despite the dangers, he said, it could also be “the greatest technology humanity has yet developed.”
The warning came as OpenAI released the latest version of its AI language model, GPT-4, less than four months after ChatGPT was launched and became the fastest-growing consumer application in history.
In the interview, Altman said that while the new version was “not perfect,” it had scored in the 90th percentile on the US bar exam and achieved a near-perfect score on the high school SAT math test. It could also write computer code in most programming languages, he said.
Fears about consumer-facing AI and AI in general center on humans being replaced by machines. But Altman pointed out that AI only works under the direction or input of humans.
“It waits for someone to give it an input,” he said. “This is a tool that is very much in human control.” But he said he was concerned about which humans had input control.
“There will be other people who don’t put some of the safety limits that we put on,” he added. “I think society has a limited amount of time to figure out how to react to that, how to regulate it, how to manage it.”
Many ChatGPT users have encountered a machine whose responses are defensive to the point of paranoia. In tests offered to the TV news network, GPT-4 performed a task in which it conjured up recipes from the contents of a fridge.
Tesla CEO Elon Musk, an early investor in OpenAI when it was still a non-profit, has repeatedly warned that AI, or AGI (artificial general intelligence), is more dangerous than a nuclear weapon.
Musk voiced concern that Microsoft, which hosts ChatGPT on its Bing search engine, had disbanded its ethics oversight division. “There is no regulatory oversight of AI, which is a *major* problem. I have been calling for AI safety regulation for over a decade!” Musk tweeted in December. This week, Musk mused on Twitter, which he owns: “What will we humans have left to do?”
On Thursday, Altman acknowledged that the latest version uses deductive reasoning rather than memorization, a process that can lead to strange answers.
“The thing I try to warn people about the most is what we call the ‘hallucination problem,’” Altman said. “The model will confidently state things as if they were facts that are entirely made up.”
“The correct way to think about the models we create is as a reasoning engine, not a database of facts,” he added. While the technology could act as a database of facts, he said, “that’s not really what’s special about them: what we want them to do is something closer to the ability to reason, not to memorize.”
What you get out depends on what you put in, the Guardian recently warned in an analysis of ChatGPT. “We deserve better from the tools we use, the media we consume, and the communities we live in, and we will only get what we deserve when we are able to fully participate in them.”