
Italy’s data protection authority has fined ChatGPT maker OpenAI 15 million euros ($15.66 million) over the way its generative artificial intelligence application processes personal data.
The fine comes nearly a year after the Garante, Italy’s data protection watchdog, found that ChatGPT had processed users’ personal information to train the service in violation of the European Union’s General Data Protection Regulation (GDPR).
The authority said OpenAI failed to notify it of a security breach that occurred in March 2023 and processed users’ personal information to train ChatGPT without an adequate legal basis. It also accused the company of violating the principle of transparency and its obligation to provide relevant information to users.

“Furthermore, OpenAI does not provide an age verification mechanism, which could expose children under the age of 13 to responses that are inappropriate for their developmental level and self-awareness,” the Garante said.
In addition to the €15 million fine, the company was ordered to carry out a six-month communications campaign on radio, television, in newspapers, and on the internet to promote public understanding of how ChatGPT works.
This includes, among other things, the nature of the data collected from both users and non-users for the purpose of training the model, as well as the rights individuals can exercise to object to, rectify, or delete that data.
“Through this communication campaign, users and non-users of ChatGPT will have to be made aware of how to object to generative artificial intelligence being trained on their personal data, and thus be put in a position to effectively exercise their rights under the GDPR,” the Garante added.
Italy became the first country to impose a temporary ban on ChatGPT in late March 2023, citing data protection concerns. Nearly a month later, access to ChatGPT was restored after the company addressed the issues raised by Garante.
In a statement shared with The Associated Press, OpenAI said the decision was disproportionate, with the fine representing nearly 20 times the revenue it earned in Italy over the same period, and said it intended to appeal. Additionally, the company said it is committed to providing useful artificial intelligence that respects users’ privacy rights.
The ruling also follows an opinion from the European Data Protection Board (EDPB) that AI models which unlawfully process personal data, but are subsequently anonymized before deployment, do not violate the GDPR.
“If it can be demonstrated that the subsequent operation of the AI model does not involve the processing of personal data, the EDPB considers that the GDPR does not apply,” the board said. “Therefore, the illegality of the initial processing should not affect the subsequent behavior of the model.”

“Furthermore, the EDPB considers that if the controller processes personal data collected during the deployment phase after the model has been anonymized, the GDPR applies with respect to these processing operations.”
Earlier this month, the Board also published guidelines on handling data transfers to non-European countries in a GDPR-compliant manner. The guidelines are subject to public consultation until January 27, 2025.
“Judgments and decisions by third country authorities cannot be automatically recognized or enforced in Europe,” it said. “When an organization responds to a request for personal data from a third country authority, this data flow constitutes a transfer and is subject to the GDPR.”