OpenAI is leading the race to develop AI as intelligent as a human. Yet employees continue to appear in the press and on podcasts to voice grave concerns about safety at the $80 billion nonprofit research lab. The latest report comes from The Washington Post, where an anonymous source claimed that OpenAI rushed through safety testing and celebrated its product before ensuring its safety.
“They planned the post-launch party before they knew if it was safe to launch,” an anonymous employee told The Washington Post. “We basically failed at the process.”
Safety issues keep coming up at OpenAI. Current and former employees recently signed an open letter demanding better safety practices and transparency from the startup, shortly after its safety team was dissolved following the departure of co-founder Ilya Sutskever. Jan Leike, a key OpenAI researcher, resigned soon after, writing in a post that “safety culture and processes have taken a backseat to shiny products” at the company.
Safety is ostensibly central to OpenAI's charter, which includes a clause stating that OpenAI will help other organizations advance safety if AGI is reached at a competitor, rather than continuing to compete. It claims to be dedicated to solving the safety problems inherent in such a large, complex system. OpenAI even keeps its proprietary models private, rather than open (which has drawn jabs and lawsuits), for the sake of safety. The warnings give the impression that safety has been deprioritized despite being so central to the company's culture and structure.
“We're proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” OpenAI spokesperson Taya Christianson said in a statement to The Verge. “Given the importance of this technology, rigorous debate is essential, and we will continue to engage with governments, civil society, and other communities around the world in service of our mission.”
According to OpenAI and others studying the emerging technology, the stakes around safety are immense. “Current frontier AI development poses urgent and growing risks to national security,” states a report commissioned by the US State Department in March. “The rise of advanced AI and artificial general intelligence (AGI) has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons,” it said.
Alarm bells also rang at OpenAI following last year's boardroom coup that briefly ousted CEO Sam Altman. The board said he was removed for failing to be “consistently candid in his communications,” leading to an investigation that did little to reassure staff.
OpenAI spokesperson Lindsey Held told the Post that the GPT-4o launch “didn't cut corners” on safety, but another anonymous company representative acknowledged that the safety review timeline had been compressed to a single week. “We are rethinking our whole way of doing it,” the anonymous representative told the Post. “This was just not the best way to do it.”
In the face of mounting controversies (remember the Her incident?), OpenAI has attempted to calm fears with a few well-timed announcements. This week, it announced a partnership with Los Alamos National Laboratory to explore how advanced AI models, such as GPT-4o, can safely aid bioscience research, and in the same announcement it repeatedly pointed to Los Alamos's own safety record. The next day, an anonymous spokesperson told Bloomberg that OpenAI has created an internal scale to track the progress its large language models are making toward artificial general intelligence.
This week's safety-focused announcements from OpenAI appear to be defensive window dressing in the face of mounting criticism of its safety practices. It's clear that OpenAI is in the hot seat, but PR efforts alone won't be enough to safeguard society. What really matters is the potential impact on those beyond the Silicon Valley bubble if OpenAI continues to fail to develop AI with the strict safety protocols it claims to follow internally: the average person has no say in the development of privatized AI, and yet no choice in how protected they will be from OpenAI's creations.
“AI tools can be revolutionary,” FTC Chair Lina Khan told Bloomberg in November. But “as of right now,” she said, there are concerns that “the critical inputs of these tools are controlled by a relatively small number of companies.”
If the numerous claims against its safety protocols are accurate, this surely raises serious questions about OpenAI's fitness for its role as steward of AGI, a role the organization has essentially assigned to itself. Allowing one group in San Francisco to control a potentially society-altering technology is cause for concern, and there is an urgent demand, even within its own ranks, for transparency and safety now more than ever.