In 1950, Alan Turing, the talented British mathematician and codebreaker, published an academic paper. His aim, he wrote, was to consider the question: “Can machines think?”
The answer runs to almost 12,000 words. But it ends succinctly: “We can only see a short distance ahead,” Turing wrote, “but we can see plenty there that needs to be done.”
More than seven decades later, that sentiment sums up the mood of many policymakers, researchers and tech leaders attending the AI Safety Summit in Britain on Wednesday, which Prime Minister Rishi Sunak hopes will position the country as a leader in the global race to harness and regulate artificial intelligence.
On Wednesday morning, his government published a document called “The Bletchley Declaration,” signed by representatives of the 28 countries attending the event, including the United States and China, warning of the dangers posed by the most advanced “frontier” AI systems.
“There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models,” the statement said.
“Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI.”
The document, however, stopped short of setting specific policy goals. A second meeting is scheduled to be held in six months in South Korea, and a third in France in a year.
Governments have raced to address the risks posed by the fast-evolving technology since last year’s release of ChatGPT, a humanlike chatbot that demonstrated how the latest models are advancing in powerful and unpredictable ways.
Future generations of AI systems could accelerate the diagnosis of disease, help combat climate change and streamline manufacturing processes, but they also present significant dangers in terms of job losses, disinformation and national security. A British government report last week warned that advanced AI systems “may help bad actors perform cyberattacks, run disinformation campaigns and design biological or chemical weapons.”
Sunak promoted this week’s event, which brings together governments, businesses, researchers and civil society groups, as an opportunity to start developing global security standards.
The two-day summit in Britain will be held at Bletchley Park, a rural estate 50 miles north of London, where Turing helped crack the Enigma code used by the Nazis during World War II. Considered one of the birthplaces of modern computing, the location is a conscious nod to the Prime Minister’s hopes that Britain could be at the center of another world-leading initiative.
Bletchley is “evocative because it captures a very decisive moment in time, where great leadership was required from the government, but also a moment when computing was front and center,” said Ian Hogarth, a tech entrepreneur and investor appointed by Sunak to lead the government’s AI risk task force, who helped organize the summit. “We need to come together and agree on a wise way forward.”
With Elon Musk and other tech executives in the audience, King Charles III delivered a video address at the opening session, recorded at Buckingham Palace before he left for a state visit to Kenya this week. “We are witnessing one of the greatest technological leaps in the history of human endeavor,” he said. “There is a clear imperative to ensure that this rapidly evolving technology remains safe and secure.”
Vice President Kamala Harris and Secretary of Commerce Gina Raimondo participated in the meetings on behalf of the United States.
Wu Zhaohui, China’s vice minister of science and technology, told attendees that Beijing was willing to “enhance dialogue and communication” with other countries on AI safety. China is developing its own initiative for AI governance, he said, adding that the technology is “uncertain, unexplainable and lacks transparency.”
In a speech on Friday, Sunak addressed criticism he had received from China hawks over the attendance of a delegation from Beijing. “Yes, we have invited China,” he said. “I know there are some who will say they should have been excluded. But there can be no serious strategy for AI without at least trying to engage all of the world’s leading AI powers.”
With the development of leading AI systems concentrated in the United States and a small number of other countries, some attendees said regulations must account for the technology’s impact globally. Rajeev Chandrasekhar, a minister of technology representing India, said policies should be set by a “coalition of nations rather than just one or two countries.”
“By allowing innovation to get ahead of regulation, we open ourselves up to the toxicity, misinformation and weaponization that we see on the Internet today, represented by social media,” he said.
The conference was attended by executives from leading technology and artificial intelligence companies, including Anthropic, Google DeepMind, IBM, Meta, Microsoft, Nvidia, OpenAI and Tencent. Several civil society groups also sent representatives, including Britain’s Ada Lovelace Institute and the Algorithmic Justice League, a nonprofit in Massachusetts.
In a surprise move, Mr. Sunak announced on Monday that he would take part in a live interview with Musk on Musk’s social media platform, X, after the summit concludes on Thursday.
Some analysts argue that the conference will be heavier on symbolism than substance, with a number of key political leaders absent, including President Biden, President Emmanuel Macron of France and Chancellor Olaf Scholz of Germany.
And many governments are moving forward with their own laws and regulations. Biden this week announced an executive order requiring AI companies to assess national security risks before releasing their technology to the public. The European Union’s AI Act, which could be finalized within weeks, represents a far-reaching attempt to protect citizens from harm. China is also cracking down on how AI is used, including by censoring chatbots.
Britain, home to many universities where AI research is conducted, has taken a more hands-off approach. The government believes existing laws and regulations are sufficient for now, while announcing a new AI Safety Institute that will evaluate and test new models.
Mr. Hogarth, whose team has negotiated early access to the models of several big AI companies in order to study their safety, said he believed Britain could play an important role in figuring out how governments could “capture the benefits of these technologies and put guardrails around them.”
In his speech last week, Sunak said Britain’s approach to the technology’s potential risks was “not to rush to regulate.”
“How can we write laws that make sense for something we don’t fully understand yet?” he said.