Italy’s data protection watchdog has set out what OpenAI must do for it to lift an order against ChatGPT issued late last month, when it said it suspected the AI chatbot service of breaching the European Union’s General Data Protection Regulation (GDPR) and ordered the US-based company to stop processing locals’ data.
The EU’s GDPR applies whenever personal data is processed, and there’s no doubt that large language models like OpenAI’s GPT have hoovered up vast amounts of data from the public internet to train their generative AI models so they can respond in a human-like way to natural language prompts.
OpenAI responded to the Italian data protection authority’s order by quickly geoblocking access to ChatGPT. In a brief public statement, OpenAI CEO Sam Altman also tweeted confirmation that it had stopped offering the service in Italy, doing so alongside Big Tech’s usual caveat that it “think[s]” it is “following all privacy laws.”
Italy’s Garante evidently takes a different view.
The short version of the regulator’s new compliance demands is this: OpenAI must be transparent and publish an information notice detailing its data processing; it must immediately adopt age gating to prevent minors from accessing the technology and move to more robust age verification measures in due course; it must clarify the legal basis it claims for processing people’s data to train its AI (and it cannot rely on performance of a contract, meaning it must choose between consent and legitimate interests); it must provide ways for users (and non-users) to exercise rights over their personal data, including asking for corrections of disinformation generated about them by ChatGPT (or else having their data deleted); it must let users object to OpenAI’s processing of their data to train its algorithms; and it must run a local awareness campaign to inform Italians that it is processing their information to train its AIs.
The DPA has given OpenAI a deadline of April 30 to accomplish most of that. (The local radio, TV and internet awareness campaign gets a slightly more generous May 15 deadline.)
A bit more time is also allowed for the added requirement to migrate from the immediately required (but weak) age-gating child safety technology to a harder-to-circumvent age verification system. OpenAI has until May 31 to submit a plan for implementing age verification technology to filter out users under the age of 13 (and users aged 13 to 18 who have not obtained parental consent), with a deadline of September 30 to have that more robust system in place.
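The tiered rule the regulator describes distinguishes three cases: under-13s are blocked outright, 13-to-18-year-olds need parental consent, and adults may register. A minimal sketch of that logic in Python, with the function and parameter names purely hypothetical (this is not OpenAI’s actual implementation, and a real system would also need to verify the claimed age itself):

```python
def may_register(age: int, has_parental_consent: bool = False) -> bool:
    """Return True if a user may sign up under the criteria the Garante describes.

    Hypothetical illustration only: under-13s are refused outright,
    and minors aged 13-18 are admitted only with parental consent.
    """
    if age < 13:
        return False  # under-13s are blocked outright
    if age < 18:
        return has_parental_consent  # minors need parental consent
    return True  # adults may register


if __name__ == "__main__":
    print(may_register(12))                             # False
    print(may_register(15))                             # False
    print(may_register(15, has_parental_consent=True))  # True
    print(may_register(30))                             # True
```

The hard part, of course, is not this branch logic but reliably establishing the age in the first place, which is why the Garante distinguishes the immediate (self-declared) age gate from the stronger verification system due by September 30.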
In a press release detailing what OpenAI must do for it to lift the temporary suspension of ChatGPT, ordered two weeks ago when the regulator announced it was opening a formal investigation into suspected GDPR breaches, it writes:
OpenAI must comply by April 30 with the measures set out by the Italian SA [supervisory authority] concerning transparency, the rights of data subjects (including users and non-users), and the legal basis of the processing for algorithmic training relying on users’ data. Only in that case will the Italian SA lift its order that placed a temporary limitation on the processing of Italian users’ data, there no longer being the urgency underpinning the order, so that ChatGPT will be available once again from Italy.
Going into more detail on each of the required “concrete measures”, the DPA stipulates that the mandatory information notice must describe “the arrangements and logic of the data processing required for the operation of ChatGPT along with the rights afforded to data subjects (users and non-users)”, adding that it “will have to be easily accessible and placed in such a way as to be readable before signing up to the service”.
Users in Italy must be presented with this notice before registering and must also confirm that they are over the age of 18, the regulator further requires. Meanwhile, users who registered before the DPA’s stop-processing order will need to be shown the notice when they access the reactivated service and also be pushed through an age gate to filter out minors.
On the question of the legal basis OpenAI claims for processing people’s data to train its algorithms, the Garante has narrowed the available options down to two: consent or legitimate interests, stipulating that it must immediately remove all references to performance of a contract “in line with the [GDPR’s] accountability principle”. (OpenAI’s privacy policy currently cites all three grounds but appears to lean most heavily on performance of a contract for providing services like ChatGPT.)
“This will be without prejudice to the SA’s exercise of its investigative and enforcement powers in this regard,” it adds, confirming that it is withholding judgment on whether the two remaining grounds can lawfully be used for OpenAI’s purposes.
In addition, the GDPR provides data subjects with a suite of rights, including rights to have their personal data corrected or deleted. This is why the Italian regulator has also demanded that OpenAI implement tools so that data subjects, meaning both users and non-users, can exercise their rights and have falsehoods the chatbot generates about them rectified. Or, if correcting AI-generated lies about named individuals is deemed “technically unfeasible,” the DPA stipulates that the company must provide a way for their personal data to be deleted.
“OpenAI will have to make available easily accessible tools to allow non-users to exercise their right to object to the processing of their personal data as relied upon for the operation of the algorithms. The same right will have to be granted to users if legitimate interest is chosen as the legal basis for processing their data,” it adds, referring to another of the rights the GDPR affords data subjects when legitimate interest is relied upon as the legal basis for processing personal information.
All the measures the Garante has announced are contingencies, based on its preliminary concerns. Its press release notes that its formal investigation, “to establish possible breaches of the law,” is continuing and could lead it to take “additional or different steps if necessary in completing the ongoing investigative exercise.”
We reached out to OpenAI for a response, but the company had not responded to our email at press time.