Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Privacy has always evolved to keep pace with technology. We adjusted to cloud storage, machine learning and the Internet of Things. But agentic artificial intelligence systems — meaning systems that plan, reason and act autonomously — mark a more fundamental shift.

Unlike prompt-based models, which generate text or answers within predefined constraints, agentic AI systems behave like independent actors. They pursue goals, call application programming interfaces, chain together multistep reasoning, and even collaborate with other agents. They are not limited to analyzing data; they act upon it. And because their behavior is emergent and often unpredictable, they pose legal and regulatory challenges that earlier generations of AI never raised.

This shift forces us to ask hard questions. Who is the data controller when the AI system itself determines the means of processing? How do we ensure accountability when decisions emerge from autonomous planning rather than a human-defined rule set? And above all, how do we preserve privacy rights in a world where AI is continuously inferring, adapting and acting in real time?

One of the strongest answers available today is differential privacy.

Why agentic AI demands a new privacy paradigm

Supervisory authorities, such as the European Data Protection Board and the U.S. Federal Trade Commission, have been clear: privacy obligations apply regardless of the underlying technology. Yet agentic AI presents risks that older paradigms of privacy cannot easily contain.

These systems are inherently opaque. They decide not only how to process information but also why. They chain inferences together in ways that can reconstruct sensitive personal profiles. They act on data in real time, personalizing interventions without consistent human oversight. And many are designed to self-improve through reinforcement learning or fine-tuning, creating feedback loops that blur the line between temporary processing and permanent retention.

Traditional privacy techniques — pseudonymization, static anonymization, purpose-bound datasets — struggle in this environment. By contrast, differential privacy provides a framework that can accommodate continuous, adaptive and unpredictable processing.

Differential privacy as a legal safeguard

Differential privacy offers a mathematical guarantee that the inclusion or exclusion of any one individual's data does not materially affect the result of a computation. Privacy loss is quantified through the epsilon parameter: lower values mean stronger protection but noisier results, so epsilon governs the trade-off between privacy protection and analytical utility.
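
To make that guarantee concrete, the short Python sketch below shows the classic Laplace mechanism, in which noise scaled to a query's sensitivity divided by epsilon is added before a result is released. The query and the numbers are hypothetical, and the snippet is for illustration only; production systems should rely on the formally validated libraries discussed below rather than hand-rolled noise.

```python
import numpy as np

def laplace_count(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a counting-query result with Laplace noise scaled to sensitivity / epsilon."""
    # Adding or removing any one person changes the count by at most `sensitivity`,
    # so noise on this scale masks any single individual's contribution.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical query: how many users asked an agent to escalate a ticket this week.
print(laplace_count(true_count=412, epsilon=1.0))  # e.g. 413.6: useful in aggregate, deniable for any one person
```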

This isn't just a technical safeguard. It maps directly to key legal principles. Differential privacy operationalizes "privacy by design" under Article 25 of the EU General Data Protection Regulation by embedding protections into the system itself. It reinforces data minimization by discouraging reliance on raw, identifiable data. It mitigates re-identification risks, meeting the threshold of Recital 26 of the GDPR more robustly than pseudonymization alone. And because epsilon values can be documented and audited, differential privacy enhances accountability under Article 5(2) of the GDPR.

In short, differential privacy enables data-driven insights without eroding individual rights, a rare example of a technical method that speaks fluently to legal obligations.

Where it matters most

Consider customer service agents that learn from chat histories. Differential privacy ensures that system improvements do not inadvertently encode or reveal sensitive details. 

Or take enterprise productivity assistants: differential privacy allows usage trends to inform recommendations without exposing any one employee's behavior, a crucial safeguard in workplace monitoring contexts. 

In health care, differential privacy-protected training reduces the risk of outputs exposing identifiable patient records.

Even in reinforcement learning from human feedback, where models adapt to user clicks, rankings or corrections, differential privacy ensures no single individual's behavior deterministically shapes the system. Across these contexts, differential privacy limits exposure without halting innovation.
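
One way to picture this is the clip-and-noise step at the heart of differentially private model training, the approach implemented by libraries such as TensorFlow Privacy. The simplified sketch below uses hypothetical gradients and parameters: each user's contribution is capped at a fixed norm and noise is added before averaging, so no single person's feedback can dominate the update.

```python
import numpy as np

def private_update(per_user_grads: np.ndarray, clip_norm: float = 1.0,
                   noise_multiplier: float = 1.1) -> np.ndarray:
    """Clip each user's gradient, sum the clipped gradients, add Gaussian noise, then average."""
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12)) for g in per_user_grads]
    noisy_sum = np.sum(clipped, axis=0) + np.random.normal(
        0.0, noise_multiplier * clip_norm, size=per_user_grads.shape[1])
    return noisy_sum / len(per_user_grads)

# Hypothetical per-user feedback gradients (e.g. derived from clicks, rankings or corrections).
feedback = np.random.randn(32, 10)
model_delta = private_update(feedback)
```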

Differential privacy's boundaries

Differential privacy is powerful but not absolute. It protects outputs, not the raw inputs themselves. If an agent processes personal data before differential privacy is applied, upstream risks remain. Choosing the epsilon parameter is also a delicate exercise: values under one are considered strong but may degrade utility, while values above 10 offer little meaningful protection. Consent obligations for sensitive data also persist, regardless of whether differential privacy is applied.
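
As a rough illustration of that trade-off: for a simple counting query with a sensitivity of one, the Laplace noise scale is one divided by epsilon, so a tighter budget directly means noisier answers. The values below are illustrative only; appropriate budgets depend on context and on how many queries share the same budget.

```python
# Illustrative only: typical Laplace noise magnitude for a counting query
# (sensitivity = 1) is sensitivity / epsilon, so protection and noise move together.
for epsilon in (0.1, 0.5, 1.0, 10.0):
    print(f"epsilon = {epsilon:>4}: noise scale ~ {1.0 / epsilon:.2f}")
```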

And regulators will not be impressed by cosmetic implementations. True compliance requires robust, formally validated libraries — for example, Google's TensorFlow Privacy, IBM's Differential Privacy Library, or Microsoft's SmartNoise — rather than ad hoc noise injection.

Beyond differential privacy: Stewardship and segregation

Differential privacy is one way to limit the risks of agentic AI, but in some contexts the more appropriate safeguard is data stewardship with segregation of duties. The reason is simple: certain decisions require a structural guarantee that neither side of a data exchange can fully re-identify individuals.

Under this model, the recipient of the data sees it in anonymous form, ensuring that the insights they receive cannot be traced back to an individual. At the same time, the discloser of the data only operates with pseudonymous identifiers, preventing them from linking the information to real-world identities once it leaves their custody. Because knowledge is split in this way, no single actor has the full picture, which drastically reduces the risk of misuse, re-identification, or unauthorized inference.
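
A minimal sketch of that split, using entirely hypothetical identifiers, keys and thresholds, might look like the following: the discloser replaces direct identifiers with keyed pseudonyms that only it can link back, and a steward releases to the recipient nothing more than aggregates with small groups suppressed.

```python
import hashlib
import hmac
from collections import Counter

DISCLOSER_KEY = b"secret-held-only-by-the-discloser"  # never shared with the recipient

def pseudonymize(user_id: str) -> str:
    """Discloser side: swap a direct identifier for a keyed pseudonym before data leaves custody."""
    return hmac.new(DISCLOSER_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def steward_release(records: list[tuple[str, str]], min_group: int = 5) -> dict[str, int]:
    """Steward side: hand the recipient only category counts, suppressing small groups."""
    counts = Counter(category for _pseudonym, category in records)
    return {category: n for category, n in counts.items() if n >= min_group}

# The discloser only ever handles pseudonyms; the recipient only ever sees the aggregate output.
shared = [(pseudonymize(uid), cat) for uid, cat in
          [("alice@example.com", "overtime"), ("bob@example.com", "overtime"),
           ("carol@example.com", "leave")]]
print(steward_release(shared, min_group=2))  # {'overtime': 2}; the small 'leave' group is suppressed
```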

This approach mirrors lessons from European Court of Justice case law, such as the case involving the Single Resolution Board, where the courts emphasized that governance and procedural safeguards are as crucial as technical ones. Stewardship and segregation provide those safeguards institutionally: they hardwire anonymity for the recipient and pseudonymity for the discloser, creating a balanced system in which valuable analysis remains possible while individual privacy is structurally preserved.

Choosing the right PET for the right risk

While differential privacy offers a uniquely powerful safeguard against inference risks, it is not the only privacy-enhancing technology relevant to agentic AI. 

In contexts where agents learn collaboratively across organizations, federated learning or secure multiparty computation may be more appropriate, since they allow joint model training without exposing raw data. 
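
The toy sketch below illustrates the federated idea under heavy simplification, with a made-up update rule and random data standing in for real training: each organization computes an update on data that never leaves its own systems, and only the averaged update is shared.

```python
import numpy as np

def local_update(global_weights: np.ndarray, local_data: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """Stand-in for local training: nudge the weights toward this organization's data mean."""
    gradient = global_weights - local_data.mean(axis=0)
    return global_weights - lr * gradient

def federated_round(global_weights: np.ndarray, local_datasets: list) -> np.ndarray:
    """Coordinator averages the locally computed updates; raw records are never exchanged."""
    return np.mean([local_update(global_weights, data) for data in local_datasets], axis=0)

weights = np.zeros(3)
org_a, org_b = np.random.rand(100, 3), np.random.rand(80, 3)  # each dataset stays on-premises
weights = federated_round(weights, [org_a, org_b])
```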

Where the concern is that agents will process highly sensitive inputs directly, homomorphic encryption or trusted execution environments can ensure computations happen on encrypted or isolated data. 

And in cases where the greatest risk lies not in the math but in institutional overreach, data stewardship and segregation provide governance-level safeguards, ensuring recipients only ever see anonymous data and disclosers only ever handle pseudonymous identifiers. 

In practice, protecting privacy in agentic systems will rarely hinge on a single PET. The most resilient solutions will combine technical and institutional measures, with differential privacy addressing inference risks while other PETs contain access, sharing and governance risks.

The path forward

For legal teams, the task is clear. Identify where agentic AI is being deployed. Examine whether training and fine-tuning rely on personal data. Update data protection impact assessments to account for inference chaining and cumulative privacy loss. Scrutinize claims of "differential privacy-enabled" tools against formal guarantees. And in sensitive contexts, consider pairing differential privacy with true segregation of duties under a stewardship model.

Agentic AI is not just another step in the AI journey. It represents a shift from analysis to autonomous action. That shift magnifies privacy risks and makes rigorous safeguards essential. Differential privacy provides one of the few mathematically provable protections that can withstand the unpredictability of agentic systems. But to fully meet regulatory expectations, it must be paired with governance, stewardship and oversight.

Privacy in the age of agentic AI will depend on both math and institutions. Only by combining the two can we build trust in systems that act as well as think.

Roy Kamp, AIGP, CIPP/E, CIPP/US, CIPM, CIPT, FIP, is legal director and Noemie Weinbaum, AIGP, CIPP/C, CIPP/E, CIPP/US, CIPM, CIPT, CDPO/FR, FIP, is privacy lead at UKG and managing director at PS Expertise.