Alex Omenye
Efforts by OpenAI to reduce factual inaccuracies in its ChatGPT chatbot fall short of full compliance with European Union data regulations, according to a task force from the EU’s privacy watchdog.
The task force reported on its website on Friday that while measures to enhance transparency help prevent misunderstandings of ChatGPT’s output, they do not suffice to fulfill the data accuracy principle required by EU rules.
“Although the measures taken to comply with the transparency principle are beneficial to avoid misinterpretation of the output of ChatGPT, they are not sufficient to comply with the data accuracy principle,” stated the task force.
The task force was set up last year by the European Data Protection Board, the body uniting Europe’s national privacy watchdogs, after national regulators, led by Italy’s authority, raised concerns about the widely used AI service.
The report noted that investigations by national privacy watchdogs in some member states are still ongoing, making it impossible to fully describe the results at this stage.
The findings, the report said, should be seen as a ‘common denominator’ among national authorities. Data accuracy is a fundamental principle of the EU’s data protection rules.
“Due to the probabilistic nature of the system, the current training approach leads to a model that may also produce biased or fabricated outputs,” the report explained. “Additionally, the outputs provided by ChatGPT are likely to be perceived as factually accurate by end users, including information about individuals, regardless of their actual accuracy.”