The right to be forgotten, codified in Article 17 of the EU General Data Protection Regulation, was originally conceived as a privacy safeguard. But its deeper function lies in self-determination: the individual's authority to decide when their past ceases to define their present.
Generative artificial intelligence undermines that autonomy by making "memory" probabilistic. A large language model does not store text as static records but as distributed patterns of statistical association. To remove one person's data requires altering billions of interdependent parameters, effectively reconfiguring the model's identity. Unlike a spreadsheet, an AI cannot simply "delete row 42."
The technical frontier of 'unlearning'
This challenge has spawned a growing field of research known as machine unlearning: methods that aim to make a model forget specific data without retraining it from scratch.
In principle, unlearning offers a bridge between human rights law and technical feasibility. In practice, it is fraught with trade-offs. Retraining from zero after every deletion request would guarantee compliance, but at astronomical computational cost.
More efficient methods approximate forgetting but often leave residual traces. They include gradient subtraction, which reverses a data point's training updates without full retraining; influence-function updates, which estimate how much each training example shaped the model's predictions at a given test point; and sharded retraining, which splits the dataset into smaller "shards" so that only the shard containing the deleted data must be retrained.
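As a toy illustration of the first family, the sketch below trains a small logistic-regression model and then "unlearns" one example by ascending that example's loss gradient, then compares the result against full retraining without the point. Everything here (the model, data, step sizes, and the final distance check) is an illustrative assumption, not a production unlearning method.

```python
# Toy sketch of gradient-subtraction-style unlearning (NumPy only).
# All names and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)

def grad(w, X, y):
    p = 1 / (1 + np.exp(-X @ w))      # sigmoid predictions
    return X.T @ (p - y) / len(y)     # mean log-loss gradient

def train(X, y, steps=500, lr=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * grad(w, X, y)
    return w

w_full = train(X, y)

# "Forget" sample 42 by ascending its loss gradient a few steps,
# approximately cancelling its pull on the weights.
forget = 42
w_unlearned = w_full.copy()
for _ in range(20):
    w_unlearned += 0.05 * grad(w_unlearned, X[forget:forget+1], y[forget:forget+1])

# Gold standard for comparison: retrain from scratch without the point.
keep = np.arange(len(y)) != forget
w_retrained = train(X[keep], y[keep])

# Any nonzero gap here is the "residual trace" the text describes.
print(np.linalg.norm(w_unlearned - w_retrained))
```

The gap printed at the end is exactly the measurement problem discussed below: the approximate method lands near, but not exactly on, the retrained weights.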
The problem is measurement. How does one prove that forgetting has occurred? Recent efforts like the NeurIPS 2023 Machine Unlearning Challenge have sought to create benchmarks for evaluating unlearning effectiveness, from adversarial testing to model behavior comparisons. However, there is still no consensus on what constitutes "successful" erasure in probabilistic systems.
New research: Source-free unlearning
A major breakthrough came in September 2025 when researchers at the University of California, Riverside, proposed a new method called "source-free unlearning." Traditionally, unlearning requires access to the original dataset, a major obstacle since training data for commercial LLMs often cannot be reconstructed or retained for privacy reasons.
Researchers developed a certified unlearning method that operates without the original source data. Their technique uses a surrogate dataset to guide a single-step Newton update to the model, followed by carefully calibrated random noise to eliminate any lingering traces of the targeted information.
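A minimal sketch of the general shape of such an update, assuming a toy logistic model: a single Newton step computed against a surrogate dataset's Hessian, followed by Gaussian noise. The surrogate data, scaling factors, and noise calibration below are placeholders for illustration, not the published algorithm.

```python
# Heavily simplified sketch of a source-free, certified-removal-style
# update: one Newton step that cancels a forgotten point's gradient,
# with the Hessian estimated on a surrogate dataset instead of the
# original training data. Scales and noise are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
d = 3
w = rng.normal(size=d)                 # stand-in for pretrained weights
X_sur = rng.normal(size=(200, d))      # surrogate data (source-free)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def hessian(w, X, lam=0.1):
    p = sigmoid(X @ w)
    D = p * (1 - p)
    # Ridge term keeps the Hessian invertible.
    return (X.T * D) @ X / len(X) + lam * np.eye(len(w))

# The point to forget, and its loss gradient at the current weights.
x_f, y_f = rng.normal(size=d), 1.0
g_f = (sigmoid(x_f @ w) - y_f) * x_f

# Single Newton step "adds back" the forgotten point's pull on the weights.
H = hessian(w, X_sur)
w_new = w + np.linalg.solve(H, g_f) / len(X_sur)

# Calibrated Gaussian noise masks residual traces (scale is a placeholder).
w_cert = w_new + rng.normal(scale=0.01, size=d)
print(w_cert)
```

The design point worth noticing is that nothing in the update touches the original training set: the Hessian comes entirely from surrogate data, and the noise term is what supports a statistical certificate of forgetting.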
Their process achieves performance comparable to full retraining, while using only a fraction of the computational power. This makes it potentially transformative for compliance with the GDPR's right to erasure and the California Consumer Privacy Act's right to delete, both of which have long appeared unenforceable against generative models.
The method has so far been tested primarily on smaller classifiers, not on the sprawling LLMs that power chatbots such as ChatGPT and Claude. Scaling the approach to such architectures poses new challenges, especially since LLM training involves trillions of parameters and opaque data mixtures.
Still, "source-free unlearning" marks an important conceptual shift: forgetting can be certified statistically, even when literal deletion is impossible.
Unlearning algorithms
Two notable algorithms have emerged in the machine unlearning space for handling corrupted data in vision models. The first, Example-Tied Dropout (ETD), works by structurally separating a neural network's parameters during training into two categories: neurons responsible for generalizable, shared information, and neurons dedicated to memorizing example-specific information.
Each data point is assigned its own fixed "dropout mask," meaning its unique characteristics are funneled into a private computational path rather than the shared network. At inference time, those memorization neurons are simply dropped out, effectively suppressing example-specific information without retraining the model from scratch.
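The routing idea can be sketched in a few lines, assuming a single hidden layer split into a shared block and a memorization block; the layer sizes, mask density, and seed-based per-example mask below are illustrative choices, not the paper's exact design.

```python
# Illustrative sketch of Example-Tied Dropout in NumPy: hidden units are
# split into shared and memorization blocks, each example is tied to a
# fixed mask over the memorization block, and inference drops that block
# entirely. All sizes and the seeding scheme are assumptions.
import numpy as np

rng = np.random.default_rng(2)
D_IN, N_SHARED, N_MEM = 8, 16, 32
W = rng.normal(size=(D_IN, N_SHARED + N_MEM))

def example_mask(example_id):
    # Fixed, example-tied mask: routes this example's specifics into a
    # small private subset of the memorization neurons.
    r = np.random.default_rng(example_id)
    mask = np.ones(N_SHARED + N_MEM)
    mask[N_SHARED:] = r.random(N_MEM) < 0.1
    return mask

def forward(x, example_id=None, training=True):
    h = np.maximum(x @ W, 0.0)            # ReLU hidden layer
    if training:
        h = h * example_mask(example_id)  # private path during training
    else:
        h[N_SHARED:] = 0.0                # drop memorization block at inference
    return h

x = rng.normal(size=D_IN)
h_train = forward(x, example_id=7, training=True)
h_infer = forward(x, training=False)
assert np.all(h_infer[N_SHARED:] == 0.0)  # example-specific units suppressed
```

Because the mask is fixed per example rather than resampled each step, the example's idiosyncrasies have nowhere to live except its private slice, which is exactly what makes them easy to switch off later.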
The second algorithm, Redirection for Erasing Memory, builds on ETD's intuition but takes a more robust approach designed to work across a broader range of unlearning scenarios. Rather than isolating memorization within existing neurons, REM redirects the influence of corrupted data into newly initialized, dedicated neurons added specifically for that purpose. Once training is complete, those neurons are removed or deactivated, cleanly eliminating the corrupted data's influence from the model.
Critically, REM is designed to handle corrupted data even when it has not been fully identified, which is a common real-world constraint, and regardless of whether that data follows random or structured patterns. In benchmark testing across multiple datasets and model architectures, REM was the only method to perform consistently well across the full range of unlearning conditions, making it a significant step forward for organizations seeking reliable, auditable data removal from trained AI systems.
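Structurally, the REM approach amounts to widening a layer with fresh neurons, concentrating the suspect data's updates in those new columns, and then deleting them. The sketch below shows only that skeleton; the one-line "training" step is a placeholder standing in for REM's actual redirection losses, and all sizes are assumptions.

```python
# Structural sketch of the Redirection for Erasing Memory (REM) idea.
# The continued-training step is a placeholder, not the real objective.
import numpy as np

rng = np.random.default_rng(3)
D_IN, N_HID, N_NEW = 4, 8, 3
W = rng.normal(size=(D_IN, N_HID))        # pretrained layer (stand-in)

# 1. Add freshly initialized neurons dedicated to the corrupted data.
W_wide = np.hstack([W, 0.01 * rng.normal(size=(D_IN, N_NEW))])

# 2. (Placeholder) continued training would steer the corrupted
#    examples' influence into the new columns only.
X_corrupt = rng.normal(size=(5, D_IN))
W_wide[:, N_HID:] += 0.1 * X_corrupt.mean(axis=0, keepdims=True).T

# 3. Remove the dedicated neurons, excising the corrupted influence.
W_clean = W_wide[:, :N_HID]

# In this toy, the shared weights are untouched by construction.
assert np.allclose(W_clean, W)
```

The appeal of this shape for auditability is that "forgetting" becomes a concrete structural operation, deleting identifiable columns, rather than a diffuse change spread across all weights.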
Generative AI and the limits of forgetting
For policymakers, this evolution underscores a fundamental tension. Generative AI amplifies privacy risks precisely because it blurs the line between memory and inference. Even if personal data is deleted, the model may continue to generate information that indirectly reconstructs it.
This means that forgetting in the age of AI cannot be absolute. An insistence on perfect erasure risks regulatory paralysis or technological stagnation; abandoning the concept undermines individual dignity and autonomy.
U.S. law must therefore embrace a new paradigm: one that recognizes forgetting as a bounded right, as the EU does, constrained by the physics of computation but still meaningful as a check on power.
Trans-Atlantic policy developments
The EU General Court's September 2025 decision dismissing a challenge seeking to annul the EU–U.S. Data Privacy Framework illustrates this balancing act. The court reaffirmed that U.S. entities certified under the framework can process EU personal data, while emphasizing that adequacy decisions must evolve as technologies change.
For AI governance, that adaptability is critical. The DPF facilitates cross-border data flows essential for model training but is silent on unlearning. Without clear trans-Atlantic standards, U.S. companies risk legal whiplash: bound by the GDPR's Article 17 in principle but lacking the tools to comply in practice.
Absent harmonized guidance, the next "Schrems II" could emerge from the unlearning gap, where courts, again, find that trans-Atlantic data protections are inadequate for an AI-driven era.
Autonomy without illusion
The "right to unlearn" should not be mistaken for a simple extension of the right to be forgotten. It represents a new synthesis of law, ethics and computation: one that accepts the limits of deletion while reaffirming the centrality of human dignity.
In practical terms, this means building layered safeguards: offering rigorous deletion where feasible; requiring statistical and cryptographic guarantees where perfect erasure is not possible; and implementing accountability mechanisms that ensure transparency and trust.
As researchers continue to innovate by finding ways for neural networks to unlearn private or copyrighted information without costly retraining, the law must evolve in tandem.
The right to be forgotten was forged in an era of search engines; the right to unlearn must be crafted for an age of generative systems. It will test whether democracies can reconcile technical realism with human dignity, not by denying computational limits, but by building adaptive institutions that govern within them.
Nicoletta Kolpakov is the director of the Cirrus Institute for AI and Data Governance.

