Dario Amodei, CEO of the prominent AI startup Anthropic, told Congress last year that new AI technology could soon help unskilled but malicious people create large-scale biological attacks, such as the release of viruses or toxic substances that cause widespread illness and death.
Senators from both parties were alarmed, while AI researchers in industry and academia debated how serious the threat could be.
Now, more than 90 biologists and other scientists who specialize in the artificial intelligence technologies used to engineer new proteins (the microscopic mechanisms that drive all of biology) have signed an agreement that seeks to ensure that their AI-assisted research advances without exposing the world to serious harm.
The biologists, who include the Nobel laureate Frances Arnold and represent laboratories in the United States and other countries, also argued that the latest technologies would bring far more benefits than harms, including new vaccines and drugs.
“As scientists involved in this work, we believe that the benefits of current AI technologies for protein design far outweigh the potential for harm, and we would like to ensure that our research continues to be beneficial to everyone in the future,” the agreement reads.
The agreement does not seek to suppress the development or distribution of AI technologies. Instead, the biologists aim to regulate the use of the equipment needed to manufacture new genetic material.
This DNA manufacturing equipment is ultimately what enables the development of biological weapons, said David Baker, director of the Institute for Protein Design at the University of Washington, who helped broker the agreement.
“Protein engineering is only the first step in producing synthetic proteins,” he said in an interview. “Then you have to synthesize the DNA and translate the computer design into the real world, and that's the appropriate place to regulate.”
The agreement is one of many efforts to weigh the risks of AI against its potential benefits. As some experts warn that AI technologies could help spread disinformation, replace jobs at an unprecedented rate, and perhaps even destroy humanity, technology companies, academic labs, regulators, and lawmakers are struggling to understand these risks and find ways to address them.
Dr. Amodei's company, Anthropic, builds large language models, or LLMs, the new type of technology that powers online chatbots. When he testified before Congress, he argued that the technology could soon help attackers build new biological weapons.
But he acknowledged that this was not yet possible. Anthropic had recently conducted a detailed study showing that, for someone trying to acquire or design biological weapons, LLMs were only slightly more useful than an ordinary internet search engine.
Dr. Amodei and others worry that as companies improve LLMs and combine them with other technologies, a serious threat will emerge. He told Congress this was only two or three years away.
OpenAI, creator of the ChatGPT online chatbot, later conducted a similar study that showed LLMs were not significantly more dangerous than search engines. Aleksander Mądry, a computer science professor at the Massachusetts Institute of Technology and head of preparedness at OpenAI, said he hoped researchers would continue to improve these systems, but that he had not yet seen any evidence that they could create new biological weapons.
Today's LLMs are created by analyzing enormous amounts of digital text drawn from across the internet. This means they regurgitate or recombine what is already available online, including existing information on biological attacks. (The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement during this process.)
But in an effort to speed up the development of new drugs, vaccines and other useful biological materials, researchers are beginning to build similar ai systems that can generate new protein designs. Biologists say such technology could also help attackers design biological weapons, but they note that actually building the weapons would require a multimillion-dollar laboratory, including DNA manufacturing equipment.
“There is some risk that does not require millions of dollars in infrastructure, but those risks have been around for a while and are not related to AI,” said Andrew White, co-founder of the nonprofit Future House and one of the biologists who signed the agreement.
The biologists called for the development of safety measures that would prevent DNA manufacturing equipment from being used with harmful materials, although it is unclear how such measures would work. They also called for security reviews of new AI models before they are released.
They did not, however, argue that the technologies should be suppressed.
“These technologies should not be in the hands of a small number of people or organizations,” said Rama Ranganathan, a professor of biochemistry and molecular biology at the University of Chicago, who also signed the agreement. “The community of scientists should be able to freely explore and contribute to them.”