OpenAI said Tuesday that it has begun training a new flagship artificial intelligence model that would succeed the GPT-4 technology that powers its popular online chatbot, ChatGPT.
The San Francisco startup, one of the world's leading AI companies, said in a blog post that it expects the new model to bring "the next level of capabilities" as it strives to build "artificial general intelligence," or AGI, a machine that can do anything the human brain can do. The new model would be an engine for artificial intelligence products, including chatbots, digital assistants similar to Apple's Siri, search engines and image generators.
OpenAI also said it was creating a new Safety and Security Committee to explore how it should handle risks posed by the new model and future technologies.
“While we are proud to build and launch models that are industry-leading in both capabilities and safety, we welcome robust debate at this important time,” the company said.
OpenAI aims to advance AI technology faster than its rivals while appeasing critics who say the technology is becoming increasingly dangerous, helping to spread misinformation, replace jobs and even threaten humanity. Experts disagree on when tech companies will achieve artificial general intelligence, but companies like OpenAI, Google, Meta and Microsoft have been steadily increasing the power of AI technologies for more than a decade, demonstrating a notable leap roughly every two or three years.
OpenAI's GPT-4, which launched in March 2023, enables chatbots and other software applications to answer questions, write emails, generate term papers, and analyze data. An updated version of the technology, which was unveiled this month and is not yet widely available, can also generate images and respond to questions and commands with a highly conversational voice.
Days after OpenAI showed off the updated version, called GPT-4o, actress Scarlett Johansson said it used a voice that sounded “eerily similar to mine.” She said that she had rejected OpenAI CEO Sam Altman's efforts to license her voice for the product and that she had hired a lawyer and asked OpenAI to stop using the voice. The company said the voice was not Ms. Johansson's.
Technologies like GPT-4o learn their skills by analyzing large amounts of digital data, including sounds, photographs, videos, Wikipedia articles, books and news. The New York Times sued OpenAI and Microsoft in December, alleging copyright infringement of news content related to artificial intelligence systems.
Digitally "training" AI models can take months or even years. Once training is complete, AI companies typically spend several more months testing the technology and fine-tuning it for public use.
That could mean OpenAI's next model won't arrive for nine months to a year or more.
As OpenAI trains its new model, its new Safety and Security Committee will work to refine policies and processes to safeguard the technology, the company said. The committee includes Mr. Altman, as well as OpenAI board members Bret Taylor, Adam D'Angelo and Nicole Seligman. The company said the new policies could take effect in late summer or fall.
Earlier this month, OpenAI said that Ilya Sutskever, a co-founder and one of the leaders of its safety efforts, would be leaving the company. This raised concerns that OpenAI was not dealing sufficiently with the dangers posed by AI.
Dr. Sutskever had joined three other board members in November in ousting Mr. Altman from OpenAI, saying he could no longer be trusted with the company's plan to create artificial general intelligence for the good of humanity. After a lobbying campaign by Mr. Altman's allies, he was reinstated five days later and has since reasserted control of the company.
Dr. Sutskever led what OpenAI called its Superalignment team, which explored ways to ensure future AI models did no harm. Like others in the field, he became increasingly concerned that AI posed a threat to humanity.
Jan Leike, who led the Superalignment team with Dr. Sutskever, resigned from the company this month, leaving the team's future in doubt.
OpenAI has integrated its long-term safety research into its broader efforts to ensure its technologies are safe. That work will be led by John Schulman, another co-founder, who previously led the team that created ChatGPT. The new Safety and Security Committee will oversee Dr. Schulman's research and provide guidance on how the company will address technological risks.