When European lawmakers reached a tentative agreement on landmark AI standards last week, they had reason to celebrate.
The EU's AI Act reached a long-awaited climax on Friday after not only two years of extensive discussion but a three-day "marathon" debate between the European Commission, the European Parliament, and EU member states to smooth over their differences. The all-nighters took their toll: bins overflowed with leftover coffee cups, energy drinks, and sugary snacks. It was the kind of atmosphere one would expect from students cramming for final exams, not from lawmakers working on legislation that could set a blueprint for global AI regulation. The chaos was largely due to two contentious issues that threatened to derail the entire negotiation: facial recognition and powerful "foundation" models.
When the AI Act was first proposed in April 2021, it aimed to combat the "new risks or negative consequences for individuals or society" that artificial intelligence could cause. The law focused on tools already being deployed in fields such as policing, recruitment, and education. But while the overall intent of the bill did not change, AI technology did, and quickly. The proposed rules were not well equipped to handle general-purpose systems widely known as foundation models, such as the technology underlying OpenAI's explosively popular ChatGPT, which launched in November 2022.
Much of the last-minute delay stemmed from policymakers scrambling to ensure that these new AI technologies, as well as future ones not yet developed, fell within the scope of the legislation. Instead of simply regulating every sector in which AI could appear (a list that includes cars, toys, medical devices, and much more), the law uses a tiering system that classifies AI applications by risk. "High-risk" AI systems that could affect safety or fundamental rights are subject to the most onerous regulatory restrictions. On top of that, general-purpose AI (GPAI) systems, such as OpenAI's GPT models, face additional regulations. A lot was at stake in that designation and, consequently, the debate over it was fierce.
"At one point, it looked like tensions over how to regulate GPAI could derail the entire negotiation process," says Daniel Leufer, senior policy analyst at Access Now, a digital human rights organization. "There was a big push by France, Germany, and Italy to completely exclude these systems from any obligations under the AI Act."
France, Germany, and Italy sought last-minute compromises on foundation models
These countries, three of the largest economies in Europe, stalled negotiations in November over concerns that strict restrictions would stifle innovation and harm startups developing foundation models in their jurisdictions. Those concerns clashed with those of other EU lawmakers, who sought to impose strict rules on how such models can be developed and used. This last-minute hurdle contributed to the delays in reaching an agreement on the AI Act, but it was not the only sticking point.
In fact, it appears that a considerable portion of the legislation remained unresolved even days before the provisional agreement was reached. At a meeting of European communications and transport ministers on December 5th, German Digital Minister Volker Wissing said that "AI regulation as a whole is not yet fully mature."
GPAI systems faced requirements such as disclosing training data, power consumption, and security incidents, as well as undergoing additional risk assessments. Unsurprisingly, OpenAI (a company known for refusing to reveal details about its work), Google, and Microsoft pressured the EU to relax the stricter regulations. Those efforts apparently paid off: while lawmakers had previously considered categorizing all GPAIs as "high risk," the agreement reached last week instead subjects them to a two-tier system that gives companies some leeway to avoid the AI Act's most severe restrictions. This, too, likely contributed to the last-minute delays in Brussels.
"In the end, we got some minimal transparency obligations for GPAI systems, with some additional requirements for so-called 'high-impact' GPAI systems that pose 'systemic risk,'" Leufer says, adding that there is still a "long battle ahead" to ensure that the supervision and enforcement of such measures work properly.
There is also a far stricter category: systems posing an "unacceptable" level of risk, which the AI Act prohibits entirely. And in the final hours of negotiations, member states were still hashing out whether this category should include some of their most controversial high-tech surveillance tools.
A complete ban on facial recognition AI systems was fiercely contested
In July, the European Parliament initially greenlit a complete ban on biometric systems for mass public surveillance. That included creating facial recognition databases by indiscriminately scraping data from social media or CCTV footage; predictive policing systems based on location and past behavior; and biometric categorization based on sensitive characteristics such as ethnicity, religion, race, gender, citizenship, and political affiliation. It also banned remote biometric identification, both real-time and retroactive, with a sole exception allowing law enforcement to use delayed recognition systems to prosecute "serious crimes" after court approval. The European Commission and EU member states challenged that ban and won concessions, to some critics' dismay.
The draft approved on Friday includes exceptions that allow limited use of automated facial recognition, such as cases where identification occurs after a significant delay. It may also be approved for specific law enforcement use cases involving threats to national security, though only under certain (currently unspecified) conditions. This likely appeased bloc members like France, which has pushed to use AI-assisted surveillance to monitor things like terrorism and the 2024 Olympic Games in Paris, but human rights organizations such as Amnesty International have been more critical of the decision.
"It is disappointing to see the European Parliament succumb to pressure from member states to deviate from its original position," said Mher Hakobyan, advocacy advisor on AI regulation at Amnesty International. "While its defenders argue that the draft allows only limited use of facial recognition, subject to safeguards, Amnesty's research in New York City, the Occupied Palestinian Territories, Hyderabad, and elsewhere demonstrates that no safeguards can prevent the human rights harms that facial recognition inflicts, which is why an outright ban is needed."
To complicate matters further, we cannot yet dig into what specific concessions were made, because the full text of the approved AI Act will not be available for several weeks. Technically, it probably does not officially exist within the EU yet at all. Compromises in these agreements are often made on principles rather than exact wording, says Michael Veale, associate professor of digital rights and regulation at UCL Law School. That means it could take some time for lawmakers to finalize the legal language.
Furthermore, as only a provisional agreement has been reached, the final legislation is still subject to change. There is no official timeline, but policy experts seem fairly unanimous in their estimates: the AI Act is expected to become law in mid-2024, following its publication in the EU's official journal, and its provisions will come into force gradually over the following two years.
That gives policymakers some time to determine exactly how these rules will be enforced, and AI companies can use it to ensure their products and services comply when the provisions take effect. Ultimately, that means we may not see everything within the AI Act fully regulated until mid-2026. In AI development terms, that is a long time, so by then we may have a whole new set of issues to address.