European Union policymakers agreed on Friday on a sweeping new law to regulate artificial intelligence, one of the world's first comprehensive attempts to limit the use of a rapidly evolving technology that has far-reaching social and economic implications.
The law, called the A.I. Act, sets a new global benchmark for countries seeking to harness the potential benefits of the technology while also trying to protect against its possible risks, such as the automation of jobs, the spread of misinformation online and dangers to national security. The law still needs to go through a few final steps before approval, but the political agreement means its key outlines have been set.
European policymakers focused on the riskiest uses of A.I. by companies and governments, including those for law enforcement and the operation of crucial services such as water and energy. Makers of the largest general-purpose A.I. systems, such as those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that create manipulated images such as “deepfakes” would have to make clear that what people were seeing was generated by A.I., according to EU officials and earlier drafts of the law.
The use of facial recognition software by police and governments would be restricted outside of certain national security exemptions. Companies that violate regulations could face fines of up to 7 percent of global sales.
“Europe has positioned itself as a pioneer, understanding the importance of its role as a global standard-setter,” Thierry Breton, the European commissioner who helped negotiate the deal, said in a statement.
However, even as the law was hailed as a regulatory breakthrough, questions remained about how effective it would be. Many aspects of the policy were not expected to take effect for 12 to 24 months, a considerable length of time for A.I. development. And until the last minute of negotiations, policymakers and countries were fighting over the law's language and how to balance encouraging innovation with the need to safeguard against potential harm.
The agreement reached in Brussels took three days of negotiations, including an initial 22-hour session that began Wednesday afternoon and ran into Thursday. The final agreement was not immediately made public, as talks were expected to continue behind the scenes to complete technical details, which could delay final passage. Votes will still need to be held in the European Parliament and the European Council, which comprises representatives from the union's 27 member countries.
A.I. regulation gained urgency after last year's launch of ChatGPT, which became a global sensation by demonstrating the advancing capabilities of A.I. In the United States, the Biden administration recently issued an executive order focused in part on the effects of A.I. on national security. Britain, Japan and other nations have taken a more hands-off approach, while China has imposed some restrictions on data use and recommendation algorithms.
At stake are billions of dollars in estimated value as A.I. is predicted to reshape the global economy. “Technological dominance precedes economic dominance and political dominance,” Jean-Noël Barrot, France's digital minister, said this week.
Europe has been one of the regions furthest ahead in regulating A.I., having begun work on what would become the A.I. Act in 2018. In recent years, EU leaders have tried to bring a new level of oversight to technology, akin to regulation of the healthcare or banking industries. The bloc has already enacted far-reaching laws related to data privacy, competition and content moderation.
A first draft of the A.I. Act was released in 2021, but policymakers were forced to rewrite the law as technological advances emerged. The initial version made no mention of general-purpose A.I. models like those powering ChatGPT.
Policymakers agreed on what they called a “risk-based approach” to regulating A.I., in which a defined set of applications faces the most oversight and restrictions. Companies that make A.I. tools posing the greatest potential harm to people and society, such as in hiring and education, would have to provide regulators with evidence of risk assessments, breakdowns of the data used to train the systems and assurances that the software did not cause harm, such as perpetuating racial biases. Human oversight would also be required in creating and deploying the systems.
Some practices, such as the indiscriminate scraping of images from the Internet to create a facial recognition database, would be banned entirely.
The debate within the European Union was contentious, a sign of how A.I. has baffled lawmakers. EU officials were divided over how deeply to regulate the newer A.I. systems for fear of hurting European startups trying to catch up to American companies like Google and OpenAI.
The law added requirements for makers of the largest A.I. models to disclose information about how their systems work and to assess “systemic risk,” Breton said.
The new regulations will be closely watched globally. They will affect not only major A.I. developers such as Google, Meta, Microsoft and OpenAI, but also other businesses expected to use the technology in areas such as education, healthcare and banking. Governments are also turning more to A.I. in criminal justice and the allocation of public benefits.
How the law will be enforced remains unclear. The A.I. Act will involve regulators across 27 countries and require hiring new experts at a time when government budgets are tight. Legal challenges are likely as companies test the new rules in court. Previous EU legislation, including the landmark digital privacy law known as the General Data Protection Regulation, has been criticized for being unevenly enforced.
“The EU's regulatory prowess is in question,” said Kris Shrishak, a member of the Irish Council for Civil Liberties, who has advised European lawmakers on the A.I. Act. “Without strict enforcement, this agreement will be meaningless.”