As of Sunday in the European Union, the bloc's regulators can ban the use of AI systems they deem to pose “unacceptable risk” or harm.
February 2 is the first compliance deadline for the EU's AI Act, the comprehensive AI regulatory framework that the European Parliament finally approved last March after years of development. The act officially went into force on August 1; what follows now is the first of its compliance deadlines.
The specifics are set out in Article 5, but broadly, the act is designed to cover a myriad of use cases where AI might appear and interact with individuals, from consumer applications through to physical environments.
Under the bloc's approach, there are four broad risk levels: (1) minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have light-touch regulatory oversight; (3) high risk (AI for healthcare recommendations is one example) will face heavy regulatory oversight; and (4) unacceptable risk applications, the focus of this month's compliance requirements, will be prohibited entirely.
Some of the unacceptable activities include:
- AI used for social scoring (e.g., building risk profiles based on a person's behavior).
- AI that manipulates a person's decisions subliminally or deceptively.
- AI that exploits vulnerabilities like age, disability, or socioeconomic status.
- AI that attempts to predict people committing crimes based on their appearance.
- AI that uses biometrics to infer a person's characteristics, such as their sexual orientation.
- AI that collects “real-time” biometric data in public places for the purposes of law enforcement.
- AI that tries to infer people's emotions at work or school.
- AI that creates, or expands, facial recognition databases by scraping images online or from security cameras.
Companies found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to €35 million (~$36 million), or 7% of their annual revenue from the prior fiscal year, whichever is greater.
The fines won't kick in for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch.
“Organizations are expected to be fully compliant by February 2, but … the next big deadline that companies need to be aware of is in August,” Sumroy said. “By then, we'll know who the competent authorities are, and the fines and enforcement provisions will take effect.”
Preliminary pledges
The February 2 deadline is in some ways a formality.
Last September, over 100 companies signed the EU AI Pact, a voluntary pledge to start applying the principles of the AI Act ahead of its entry into application. As part of the Pact, signatories, which included Amazon, Google, and OpenAI, committed to identifying AI systems likely to be categorized as high risk under the AI Act.
Some tech giants, notably Meta and Apple, skipped the Pact. French AI startup Mistral, one of the AI Act's harshest critics, also opted not to sign.
That doesn't suggest that Apple, Meta, Mistral, or others that didn't agree to the Pact won't meet their obligations, including the ban on unacceptably risky systems. Sumroy points out that, given the nature of the prohibited use cases laid out, most companies won't be engaging in those practices anyway.
“For organizations, a key concern around the EU AI Act is whether clear guidelines, standards, and codes of conduct will arrive in time, and, crucially, whether they will give organizations clarity on compliance,” Sumroy said. “However, the working groups are, so far, meeting their deadlines on the code of conduct for … developers.”
Possible exemptions
There are exceptions to several of the AI Act's prohibitions.
For example, the act allows law enforcement to use certain systems that collect biometrics in public places if those systems help perform a “targeted search” for, say, an abduction victim, or help prevent a “specific, substantial, and imminent” threat to life. This exemption requires authorization from the appropriate governing body, and the act stresses that law enforcement can't make a decision that “produces an adverse legal effect” on a person based solely on these systems' outputs.
The act also carves out exceptions for systems that infer emotions in workplaces and schools where there's a “medical or safety” justification, such as systems designed for therapeutic use.
The European Commission, the executive branch of the EU, [said it would release additional guidelines](https://digital-strategy.ec.europa.eu/en/news/commission-launches-consultation-ai-act-prohibitions-and-ai-system-definition) in “early 2025,” following a consultation with stakeholders in November. However, those guidelines have yet to be published.
Sumroy said it's unclear how other laws on the books might interact with the AI Act's prohibitions and related provisions. Clarity may not arrive until later in the year, as the enforcement window approaches.
“It's important for organizations to remember that AI regulation doesn't exist in isolation,” Sumroy said. “Other legal frameworks, such as GDPR, NIS2, and DORA, will interact with the AI Act, creating potential challenges, particularly around overlapping incident notification requirements. Understanding how these laws fit together will be just as crucial as understanding the AI Act itself.”