Citing several privacy violations, Italy’s Data Protection Authority (IDPA) has fined OpenAI €15 million ($15.7 million). The fine followed an investigation into the data-collection practices of ChatGPT, OpenAI’s AI model.
According to the IDPA’s investigation, OpenAI failed to notify the agency of a notable data breach in March 2023. The company also trained its AI model on personal data without a valid legal basis, violating the transparency and information obligations of the European Union’s General Data Protection Regulation (GDPR).
The agency also raised concerns over the lack of adequate age-verification mechanisms, which allowed children under thirteen to use the service. This gap risked exposing minors to content inappropriate for their age and level of understanding.
As a corrective measure, the IDPA has ordered OpenAI to run a six-month public awareness campaign across radio, television, newspapers, and the internet. The campaign is meant to explain how ChatGPT collects and uses data, and to inform users of their rights to object to, correct, or erase personal data held in the system.
By the campaign’s end, users should understand how their data is handled and how to exercise their GDPR rights. Businesses that violate the GDPR face fines of up to €20 million or 4% of their worldwide annual turnover.
During the probe, OpenAI moved its European headquarters to Ireland, making the Irish Data Protection Authority the supervisory body for any future oversight. The IDPA noted OpenAI’s cooperative attitude during the investigation, which helped reduce the penalty.
This development marks another chapter in the ongoing regulatory scrutiny of artificial intelligence. OpenAI’s case underscores the need for strong data protection policies and transparent practices in the fast-moving AI field.