The world is caught in a race for AI dominance, but today some of the competitors came together to say they would prefer to collaborate when it comes to mitigating risk.
Speaking at the AI Safety Summit in Bletchley Park, England, UK Technology Minister Michelle Donelan announced a new policy document, called the Bletchley Declaration, which aims to reach a global consensus on how to address the risks posed by AI now and in the future as it develops. She also said the summit will become a regular, recurring event: another meeting is planned to be held in Korea within six months, she said, and one more in France six months after that.
Like the tone of the conference itself, the document released today is relatively high-level.
“To realize this, we affirm that, for the good of all, AI must be designed, developed, implemented and used in a safe, human-centered, reliable and responsible manner,” the document states. It also draws attention specifically to the kind of large language models that companies like OpenAI, Meta, and Google are developing, and the particular threats they could pose through misuse.
“Particular safety risks arise at the ‘frontier’ of AI, understood as those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks, as well as relevant specific narrow AI that could exhibit capabilities that cause harm, which match or exceed the capabilities present in today’s most advanced models,” the declaration notes.
Alongside the declaration, there were some more concrete developments.
US Commerce Secretary Gina Raimondo announced a new AI safety institute that will sit within the Department of Commerce, specifically under the department’s National Institute of Standards and Technology (NIST).
The goal, she said, is for this organization to work closely with the AI safety groups created by other governments, including the Safety Institute that the UK also plans to establish.
“We have to get to work, and among our institutes we have to get to work to (achieve) policy alignment around the world,” Raimondo said.
Political leaders at today’s opening plenary session included not only representatives of the world’s largest economies, but also several who spoke on behalf of developing countries, collectively the Global South.
The roster included Wu Zhaohui, China’s Vice Minister of Science and Technology; Vera Jourova, the European Commission’s Vice President for Values and Transparency; Rajeev Chandrasekhar, India’s Minister of State for Electronics and Information Technology; Omar Sultan al Olama, the UAE’s Minister of State for Artificial Intelligence; and Bosun Tijani, Nigeria’s Minister of Technology. Collectively, they spoke of inclusion and accountability, but with so many open questions about how to implement either, the proof of their commitment remains to be seen.
“I am concerned that a race to create powerful machines will outpace our ability to safeguard society,” said Ian Hogarth, the founder, investor and engineer who currently chairs the UK government’s task force on AI foundation models, which played a major role in organizing this conference. “No one in this room knows for sure how or if these next leaps in computing power will translate into benefits or harms. We’ve been trying to ground (risk concerns) in empiricism and rigor, (but) our current lack of understanding… is quite striking.
“History will judge our ability to meet this challenge. It will judge us by what we do and say over the next two days.”