OpenAI is under scrutiny again in Europe, facing a fresh privacy complaint over ChatGPT’s tendency to generate false and defamatory information.
The case, filed by privacy advocacy group Noyb, involves a Norwegian man whom ChatGPT falsely described as having been convicted of murdering his own children.
Noyb argues that ChatGPT’s errors violate the European Union’s General Data Protection Regulation, which mandates that personal data be accurate and correctable. Currently, OpenAI offers no way for individuals to rectify incorrect AI-generated information, raising concerns over compliance.
“The GDPR is clear: personal data has to be accurate,” said Joakim Söderberg, a data protection lawyer at Noyb. “You can’t just spread false information and then add a disclaimer saying it may not be true.”
The Norwegian Data Protection Authority is reviewing the complaint, but enforcement could prove complex. A previous Noyb-backed complaint in Austria (April 2024) was referred to Ireland’s Data Protection Commission, where it remains under investigation.
Past GDPR enforcement actions have led to significant changes. In 2023, Italy temporarily banned ChatGPT, forcing OpenAI to revise its data disclosure practices. The company was later fined €15 million for processing personal data without a legal basis.
This latest case could intensify regulatory pressure on OpenAI. If found in violation, the company could face fines of up to 4% of its global annual turnover and be required to adjust how ChatGPT operates in Europe.