AI hallucinations — instances where general-purpose artificial intelligence systems generate convincing, yet false, information — present significant challenges under the EU General Data Protection Regulation, especially regarding the principle of accuracy and data subject rights. Recent complaints against platforms like ChatGPT have underscored these issues.
Drawing from a more detailed working paper, this article argues that the regulatory approaches proposed by the Hamburg Commissioner for Data Protection and Freedom of Information and the U.K. Information Commissioner's Office address these concerns in a way that protects data subjects' rights while fostering innovation.
They achieve this by focusing regulatory attention on the outputs of general-purpose AI systems rather than the internal workings of large language models and by adopting a risk-based approach that considers the purpose and context of AI use, emphasizing the need for transparency and information.
In response, AI developers are implementing measures to reduce hallucinations, improve model accuracy and enable the exercise of data subject rights by concentrating on system outputs. A collaborative, balanced approach is advocated to manage AI hallucinations within the GDPR framework without hindering innovation in Europe.
The problem of AI hallucinations
AI hallucinations occur when, for several reasons, general-purpose AI systems produce content that is convincing but false or nonsensical. Highly publicized instances in 2022–23 led critics to label these models negatively, accusing them of disseminating "careless speech."
Despite significant improvements in 2024, hallucinations persist, undermining AI reliability, especially in contexts where accuracy is crucial, such as legal matters, health care, and news reporting.
In the realm of data protection, the consumer organization NOYB filed a complaint in April 2024 with Austria's Data Protection Authority, claiming ChatGPT incorrectly stated the birthdate of a public figure, presumed to be NOYB's founder, Max Schrems. NOYB claimed this violated article 5(1)(d) of the GDPR and the principle of accuracy.
It argues that, based on this article, the controller has an obligation to erase or rectify inaccurate data without delay, but ChatGPT's owner, OpenAI, failed to do so, despite being "made aware of the accuracy issue by the data subject."
NOYB explains that OpenAI responded by stating "the only way to prevent the inaccurate information from appearing would be to block any information concerning the data subject. This would in turn violate the controller's freedom to inform and the general public's right to be informed, as the data subject is a public figure."
NOYB also contends that OpenAI violated articles 12(3) and 15 of the GDPR related to the right of access by the data subject because "the data subject has not received any information on what data concerning him is processed by OpenAI." As a result, NOYB is requesting corrective measures, including fines.
While these complaints are under examination, other DPAs have proposed nuanced approaches to address hallucinations.
Hamburg DPA: Focus on outputs
In July 2024, the Hamburg DPA published the "Discussion Paper: Large Language Models and Personal Data" based on the crucial distinction between a general-purpose AI system and any LLM it may incorporate. As the paper explains, an AI system, such as ChatGPT, consists of multiple components, of which an LLM is only one.
Following a similar position by Denmark's DPA, the Datatilsynet, the Hamburg DPA considers that LLMs do not contain personal data. "Within LLMs texts are no longer stored in their original form, or only as fragments in the form of these numerical tokens. They are further processed into 'embeddings.' … When training data contains personal data, it undergoes a transformation during machine learning process, converting it into abstract mathematical representations. This abstraction process results in the loss of concrete characteristics and references to specific individuals," claims the Hamburg DPA, which also draws on Court of Justice of the European Union case law to explain why privacy attacks and personal data extraction do not mean LLMs contain personal data.
As a result, LLMs "can't be the direct subject of data subject rights under articles 12 et seq. GDPR. However, when an AI system processes personal data, particularly in its output or database queries, the controller must fulfill data subject rights."
The Hamburg DPA's guidance is particularly interesting for the purposes of this article, as it highlights that LLMs generate content dynamically, predicting the next word in a sentence based on patterns learned from vast amounts of data. They do not store information about individuals in discrete, retrievable records. Instead, any mention of personal data results from statistical correlations in the training data.
The inaccuracies, or hallucinations, are unintended artifacts of the generative process, not deliberate misrepresentations of stored personal data. Since LLMs lack discrete records and do not function as databases, applying the GDPR's accuracy requirement in the traditional sense may be neither feasible nor appropriate. The Hamburg DPA thus emphasizes a widely accepted view: the outputs of LLMs are probabilistic in nature, and despite the risk of occasional regurgitations, LLMs are "not databases from which outputs are pulled."
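To make this distinction concrete, here is a minimal Python sketch (purely illustrative, with an invented vocabulary, invented probabilities and a fictitious name, not any provider's actual code) of why a generated "fact," such as a birth year, is a probabilistic draw rather than a record retrieved from storage.

```python
# Purely illustrative toy (not any provider's code): like an LLM, it maps
# words to numeric token IDs and then samples the next token from a
# probability distribution instead of looking up a stored record.
import random

VOCAB = ["<unk>", "was", "born", "on", "1", "January", "1987", "1988"]
TOKEN_ID = {tok: i for i, tok in enumerate(VOCAB)}

def tokenize(text):
    """Map words to numeric IDs; unknown words collapse to <unk> (ID 0)."""
    return [TOKEN_ID.get(word, 0) for word in text.split()]

def next_token_distribution(context_ids):
    """Stand-in for the model: probabilities for the next token. In a real
    LLM these come from statistical patterns in training data, not from a
    record about any individual."""
    probs = [0.0] * len(VOCAB)
    probs[TOKEN_ID["1987"]] = 0.6  # plausible continuation
    probs[TOKEN_ID["1988"]] = 0.4  # the other draw is a "hallucination"
    return probs

context = tokenize("Jane Doe was born on 1 January")  # "Jane" and "Doe" become <unk>
probs = next_token_distribution(context)
print(random.choices(VOCAB, weights=probs, k=1)[0])   # prints "1987" or "1988"; varies per run
```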
The Hamburg DPA's position that LLMs do not contain personal data has already generated considerable discussion and criticism.
A first set of criticisms asserts that the likelihood of personal data extraction is enough to assume data storage, given the inherently informative nature of language generation. Critics emphasize this is particularly true for specific pieces of information that appear frequently in training data because they are widespread on the internet — for instance, Donald Trump's birthdate.
Others apply the binary distinction between anonymization and pseudonymization to argue that it is possible to extract personally identifiable information by combining different fragments found in LLMs. Still others claim that what matters for the existence of personal data is whether the information processed in an LLM, whatever its form, may have an impact on individuals.
More broadly, many commentators find it difficult to reconcile the seemingly paradoxical situation in which an LLM claimed to be free of personal data can still generate outputs that contain such data. This is the "If it comes out, it must be in there" argument. These criticisms, as well as the Hamburg DPA's response, are included in my detailed study.
The jury is still out on whether and when LLMs contain personal data. Tokenization choices, anti-memorization and de-duplication measures, along with various other safeguards, can help minimize the possibility of extracting personal data.
Regardless of how this hotly disputed issue is resolved, the Hamburg DPA's paper is relevant to dealing with hallucinations and data subject rights, as it illustrates how fundamentally LLMs differ from conventional data storage methods. The way LLMs process tokens and vector representations, together with their probabilistic and generative functioning, sets them apart from the data storage and retrieval systems to which the GDPR's principle of accuracy has traditionally applied.
Models are not databases of information or structured repositories of facts or personal data. They do not operate by retrieving information from a database or by "copying and pasting" portions of existing data. As the Hamburg DPA explained, "The GDPR was conceived for a world where personal data is stored and processed in clearly structured databases. LLMs break this framework and present us with the challenge of applying current law to a new technology."
By shifting the focus from the LLM component to the AI system as a whole in the exercise of data subject rights, this guidance offers a GDPR-compliant, innovation-friendly solution to the issue of AI hallucinations. General-purpose AI system providers can mitigate AI hallucinations and effectively uphold data subject rights by concentrating on outputs, without resorting to measures that stifle innovation, such as constant re-identification attacks or continuous retraining of their LLMs. Such measures are not only technically challenging and sometimes inefficient — since inaccuracies are probabilistic in nature — but could also entail significant economic and environmental costs.
ICO: Focus on purpose, transparency
On 12 April 2024, the U.K. Information Commissioner's Office published provisional guidance and initiated a public consultation on the "accuracy of training data and model outputs." While the final guidance, enriched by numerous public responses, is expected soon, the provisional document already contains several noteworthy proposals that could complement the suggestions of the Hamburg DPA regarding AI hallucinations.
Unlike the Hamburg DPA, the ICO does not address whether LLMs contain personal data. Instead, it emphasizes the purpose of the AI system as a whole and underscores the necessity of adequate information and transparency. The ICO correctly points out that the accuracy requirements of an AI system can vary significantly depending on its specific application. Before determining the need for accuracy in a general-purpose AI system's outputs, organizations deploying such technology must first define the intended purpose of the model and assess its suitability for that purpose in collaboration with developers.
Generative AI models created solely for creative purposes do not require strict accuracy standards. Probabilistic outputs may lead to AI hallucinations but can also serve as a powerful tool for enhancing human creativity and generating new ideas and content. This potential extends beyond purely recreational or gaming contexts to a variety of generative AI functions used daily by millions of individuals with minimal risk of infringing on the GDPR's principle of accuracy.
These functions include creative writing that eschews factual constraints, brainstorming and idea generation, as well as tasks like translation, grammar correction and offering alternative phrasing for stylistic clarity. Restricting and over-regulating generative AI in the name of GDPR "accuracy" may be counterproductive and stifle innovation in such low-risk scenarios.
Conversely, as the ICO emphasizes, certain uses of general-purpose AI systems can pose risks to data subject rights, necessitating proactive measures by both developers and deployers to mitigate these risks. The ICO concludes, "Developers need to set out clear expectations for users, whether individuals or organisations, on the accuracy of the output." This is crucial, particularly when considering the human tendency to attribute greater capabilities to technology, a phenomenon known as automation or technology bias.
Companies' responses to hallucination challenges
General-purpose AI creators are addressing hallucinations by implementing measures in training data, LLM architecture, and, especially, system outputs, where data subject rights can be exercised. While not yet perfect, these efforts represent significant progress toward reducing hallucinations and complying with the GDPR's accuracy principle.
At the training data level, developers claim that they are improving data quality by avoiding untrusted sources and applying filtering to minimize biased or incorrect information. Within the LLM itself, enhancements include optimizing model architectures for better interpretability, introducing hallucination guardrails, and employing reinforcement learning from human feedback to refine responses and reduce inaccuracies. Regular updates with new data help prevent outdated information from affecting outputs.
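As a rough illustration of the kind of training-data curation described above, the following Python sketch (hypothetical; the blocklist, the corpus and the rules are invented for this example) drops documents from untrusted sources and removes exact duplicates, a step that also reduces the memorization of personal data.

```python
# Hypothetical sketch of training-data curation: the blocklist, corpus and
# rules are invented for illustration; real pipelines are far more elaborate.
UNTRUSTED_DOMAINS = {"example-rumor-site.test"}

def curate(documents):
    """Each document is {'url': ..., 'text': ...}; returns a filtered corpus."""
    seen_texts = set()
    kept = []
    for doc in documents:
        domain = doc["url"].split("/")[2] if "://" in doc["url"] else doc["url"]
        if domain in UNTRUSTED_DOMAINS:
            continue  # drop sources flagged as unreliable
        if doc["text"] in seen_texts:
            continue  # de-duplication also reduces memorization of personal data
        seen_texts.add(doc["text"])
        kept.append(doc)
    return kept

corpus = [
    {"url": "https://example-rumor-site.test/post", "text": "Unverified claim about a person."},
    {"url": "https://example-encyclopedia.test/acme", "text": "ACME Corp was founded in 1999."},
    {"url": "https://mirror.example-encyclopedia.test/acme", "text": "ACME Corp was founded in 1999."},
]
print(len(curate(corpus)))  # 1: the untrusted source is blocked, the duplicate is dropped
```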
However, most of the measures adopted, including several related to data subject rights, concern the level of general-purpose AI system outputs. As Google explains, "In general, the focus of privacy controls should be at the application level, where there may be both greater potential for harm (such as greater risk of personal data disclosure), but also greater opportunity for safeguards. Leakages of personal data, or hallucinations misrepresenting facts about a non-public living person, often happen through interaction with the product, not through the development and training of the AI model."
My longer study explains in detail and with specific sources the output safeguards introduced by LLM providers.
They include technical measures to avoid inaccurate outputs. Developers use prompt engineering, named entity recognition to identify references to individuals, and output filters to detect and block potentially inaccurate or harmful statements. Similarly, before generating output about a person, a general-purpose AI system can check whether there is a removal request for that person. Where a removal request has been approved, the system can suppress corresponding outputs. Following the Google v. Spain case law on search engines, which applies here with the necessary adaptations, a general-purpose AI system provider could, in some cases, refuse to approve such a request if it considers removal undesirable due to "the role played by the data subject in public life" and "the preponderant interest of the general public in having access to the information" about this person.
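A simplified Python sketch of such an output-level check follows (the names, the removal list and the toy entity detector are invented; this is not how any particular provider implements these safeguards).

```python
# Hypothetical sketch of an output-level safeguard: before returning text
# about a person, check an approved-removal list and suppress the answer.
# The names, the list and the toy entity detector are invented for illustration.
APPROVED_REMOVALS = {"jane doe"}  # data subjects whose removal requests were approved
KNOWN_NAMES = ["Jane Doe", "John Smith"]  # stand-in for a real NER component

def detect_person_names(text):
    """Toy named entity recognition: look for known names in the text."""
    return [name for name in KNOWN_NAMES if name.lower() in text.lower()]

def filter_output(generated_text):
    """Suppress outputs that mention a person with an approved removal request."""
    for name in detect_person_names(generated_text):
        if name.lower() in APPROVED_REMOVALS:
            return "I can't share information about this person."
    return generated_text

print(filter_output("Jane Doe was born on 1 January 1987."))  # suppressed
print(filter_output("John Smith gave a public lecture."))     # passes through
```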
However, in such cases, every reasonable effort should be undertaken, "within the framework of (GPAI system providers') responsibilities, powers and capabilities," as the CJEU famously said, to remove any inaccurate information about such a public person. Contrary to what happened in the NOYB case, output filters might in the future be able to remove specific pieces of inaccurate information about a public person without suppressing all outputs concerning that person. Furthermore, considerable progress is underway to enable general-purpose AI systems to use real-time fact-checking tools, such as retrieval-augmented generation, to ground outputs in verifiable sources. General-purpose AI systems are also configured to avoid unnecessary references to identifiable persons.
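The retrieval-augmented generation approach mentioned above can be pictured with the following Python sketch (a toy example under invented assumptions: a tiny in-memory source store, a naive keyword retriever and a placeholder call_model function standing in for the AI system).

```python
# Toy sketch of retrieval-augmented generation: retrieve verifiable source
# passages first and instruct the model to answer only from them. The sources,
# the naive retriever and call_model() are invented placeholders.
SOURCES = [
    "Official registry: ACME Corp was founded in 1999 in Vienna.",
    "Press release: ACME Corp opened its Paris office in 2015.",
]

def retrieve(question, k=1):
    """Rank sources by naive word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    ranked = sorted(SOURCES, key=lambda s: len(q_words & set(s.lower().split())), reverse=True)
    return ranked[:k]

def call_model(prompt):
    """Placeholder for a call to the general-purpose AI system."""
    return "(model response grounded in the retrieved sources)"

def answer(question):
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using ONLY the sources below. If they do not contain the answer, "
        "say you do not know.\n\nSources:\n" + context + "\n\nQuestion: " + question
    )
    return call_model(prompt)

print(answer("When was ACME Corp founded?"))
```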
LLM providers are also introducing transparency and user empowerment safeguards. LLM interfaces now include warnings alerting users that the AI might not always be accurate and encouraging them to verify responses. Detailed terms of service and privacy policies explain how the systems work and the risks of inaccuracy involved. Features such as Gemini's "double-check" option or ChatGPT Search enable users to verify information through credible sources. User feedback and reporting systems are in place to improve filtering mechanisms based on real-world input.
Newer general-purpose AI systems also generate internal chains of thought, allowing models to reason, say "I don't know" or otherwise avoid overconfident responses, potentially reducing hallucinations. This also empowers users to inspect the reasoning steps behind responses, promoting critical assessment. Progress in neuro-symbolic AI, which combines neural networks with symbolic reasoning, could further enhance accuracy and reliability.
In addition, developers have introduced mechanisms for users to exercise rights like access, deletion and rectification of personal data.
Conclusion
Addressing AI hallucinations in general-purpose AI systems presents a complex challenge: How can we uphold the GDPR's accuracy principle without inadvertently stifling technological innovation? Overly stringent interpretations of the GDPR — such as demanding absolute accuracy, imposing complete removal of personal data from outputs, mandating immediate elimination or rectification of personal data from models themselves, or imposing fines for any instance of inaccuracy (even if no harm results, such as when a general-purpose AI system guesses a celebrity's birth date incorrectly) — could lead to costly and technically challenging measures.
While aiming to protect individuals, such approaches might also limit the adoption and utility of AI applications in the EU, risking the EU's ability to use its computer science expertise for innovation and effective competition with the rest of the world.
The guidance provided by the Hamburg DPA and the ICO offers a more pragmatic and flexible approach to this issue. By focusing on the outputs of AI systems rather than the internal workings of LLMs, the Hamburg DPA acknowledges that even if LLMs themselves may not store personal data in a traditional sense, the outputs generated can still impact data subject rights.
This shift in focus allows for the protection of individuals without necessitating restrictive measures that could impede responsible technological progress in Europe. It also seems to align with the methodology the CJEU followed in relation to search engines, which focused on retroactive remedies on the basis of a de-referencing request by the data subject. As the Hamburg DPA explains, "feared regulatory gaps will be closed by the (EU) AI Act, according to which LLMs can be regulated as AI models and removed from the market in case of legal violations (cf. Art. 93(1)(c) AIA)."
Similarly, the ICO emphasizes the importance of the purpose and context in which AI systems are used. By advocating for a risk-based approach that considers the intended application of the AI system, the ICO encourages developers and deployers to tailor accuracy requirements appropriately. This perspective recognizes that not all AI outputs need to meet the same levels of accuracy — creative applications may tolerate more flexibility, whereas systems used in critical contexts like health care or legal services require stricter accuracy standards.
General-purpose AI system providers have taken steps to align with these regulatory insights by implementing technical safeguards, enhancing transparency and enabling the exercise of data subject rights. As developers themselves acknowledge, these solutions are still far from perfect, and more work needs to be done on several fronts, including introducing more clarity about the purposes of general-purpose AI systems, as the ICO invites them to do.
Still, these measures seem like steps in the right direction in a dynamic and constantly evolving field. Further progress in managing AI hallucinations in a GDPR-compliant way could result from ongoing scientific research and practical experience, helping to better address these complex issues.
Importantly, this article focused exclusively on situations where the general-purpose AI system and the LLM are deployed by the same data controller — such as OpenAI and the consumer version of ChatGPT — without the involvement of any third party data controllers/processors. Further research is needed to address scenarios where LLMs are utilized by third parties — for example, via an application programming interface — and where the responsibilities of each party in managing the risk of inaccurate outputs must be clearly defined. This clarification depends on specific factual and legal circumstances, including whether the parties are in a relationship of joint controllership, data processing on behalf of a controller, or independent controllership. In such cases, contractual and technical measures should help address these issues in a manner that protects data subject rights.
Furthermore, the scope of this article has focused solely on AI hallucinations, the GDPR principle of accuracy and related data subject rights. It has not addressed other significant issues, such as the conditions and legal basis under which LLMs can be trained with publicly available personal data. The interpretation of the GDPR on these matters, particularly the forthcoming Article 64(2) GDPR opinion of the European Data Protection Board, expected 23 Dec. 2024, could significantly and in very different ways affect the challenges and solutions related to AI hallucinations, the accuracy principle and related data subject rights.
Continuous collaboration and dialogue among regulators, industry stakeholders, civil society and researchers remain essential to further develop effective measures. This engagement will help refine existing strategies, address emerging challenges and support the development of AI systems that are both innovative and aligned with fundamental data protection principles.
Théodore Christakis is a professor of law at the University Grenoble Alpes in France and director of research for Europe with the Cross-Border Data Forum.