Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Picture an assistant that reads the brief, drafts a plan, grabs the right tools, remembers what happened five minutes ago, and changes course when a step fails. When you set a goal, it decomposes the work, pulls data through application programming interfaces (APIs), retries intelligently, and closes the loop.

That is agentic artificial intelligence, and its autonomy is powering rollouts in customer support, finance and workflow management while exposing the seams in EU General Data Protection Regulation routines.

The GDPR's compass still points true: purpose limitation, data minimization, transparency, storage limitation and accountability. What is failing is the operating model around those principles: stable data flows, predictable toolchains and human approvals at key points.

When an AI agent rewrites its plan mid-run and calls an API that never made it into your data protection impact assessment (DPIA), static controls collapse. The fix is to shift from documents to mechanisms that enforce policy at runtime.

A concrete example

An AI agent is deployed to triage inboxes, draft replies and manage calendars.

A user requests, "Please schedule a follow-up with Dr. Rossi the week of 14 Oct." The agent pulls recent threads for context, inspects availability, and checks travel time via a map API. It uploads the doctor's last message to a third-party summarizer to extract proposed dates, runs a translation service hosted outside the European Economic Area to clarify an Italian phrase, and stores vector embeddings of the message and invite so it can recognize similar tasks later.

Noticing an attached discharge note that mentions diabetes, the AI agent adds the label "endocrinology" in the mailbox to assist with future triage. When it sees a conflict with a "Union organizing meeting," it pings a scheduling plugin that negotiates a new slot with that meeting's attendees.

From a GDPR perspective, a lot just happened. The original purpose, "schedule a meeting," broadened into health-related inference and labeling, which can trigger special-category rules that prohibit processing absent an Article 9 condition, such as explicit consent.

Unvetted disclosures occurred through summarization and translation services that may be processors or independent controllers, which must be assessed under the European Data Protection Board's functional test for roles. A cross-border transfer likely occurred, which requires a transfer tool, such as the European Commission's standard contractual clauses (SCCs), and a transfer risk assessment.

The agent also created derived artifacts that persist beyond the task, which must respect storage limitation. If an agent’s scheduling decision has a significant impact on the user and lacks meaningful human oversight, Article 22 imposes restrictions on fully automated processing and requires transparent disclosure of the decision logic upon request.

None of this means the GDPR is obsolete. It means paper controls and periodic audits cannot carry the load alone. The answer is to turn compliance into engineering so governance travels with the system at runtime.

Four build-once controls privacy teams should require

Purpose locks and goal-change gates. Treat the AI agent's goal as a first-class, inspectable object. If the agent proposes to expand scope, for example from "schedule follow-up" to "triage health content and tag the mailbox," the platform should surface the change, check lawful basis and compatibility, and either block, request fresh consent, or route to a human approver.

That is how you keep processing within purpose-limitation guardrails under the GDPR's Article 5(1)(b) and avoid drifting into Article 9 territory without the right condition.
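A minimal sketch of how such a gate might sit in the agent platform's middleware, written in Python; the purpose table, data-category names and decision outcomes are illustrative assumptions, not a prescribed schema:

from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    REQUEST_CONSENT = "request_consent"
    HUMAN_REVIEW = "human_review"
    BLOCK = "block"


@dataclass(frozen=True)
class Goal:
    purpose: str                 # e.g. "schedule_follow_up"
    data_categories: frozenset   # e.g. {"calendar", "email_metadata"}


# Purposes the controller has documented, with the data categories
# compatible with each (maintained by the privacy team).
APPROVED_PURPOSES = {
    "schedule_follow_up": {"calendar", "email_metadata", "contact_details"},
}

SPECIAL_CATEGORIES = {"health", "trade_union_membership"}


def gate_goal_change(current: Goal, proposed: Goal) -> Decision:
    """Decide whether an agent may replace its goal mid-run."""
    if proposed.purpose not in APPROVED_PURPOSES:
        # No documented purpose or lawful basis: stop and escalate.
        return Decision.HUMAN_REVIEW
    allowed = APPROVED_PURPOSES[proposed.purpose]
    new_categories = proposed.data_categories - allowed
    if new_categories & SPECIAL_CATEGORIES:
        # Article 9 territory: require an explicit condition before proceeding.
        return Decision.REQUEST_CONSENT
    if new_categories:
        # Compatible purpose but broader data: block until the DPIA is updated.
        return Decision.BLOCK
    return Decision.ALLOW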

End-to-end execution traces as a product requirement. Create a trace, a durable and searchable record, that captures the plan the agent generated, each tool call executed, the data categories observed or produced, where data went, and every state update.

With that trace, data subject access requests (DSARs) stop being archaeology: you can identify what personal data was processed, by which components, when and for what sub-purpose, and provide the "meaningful information about the logic" that Article 15 of the GDPR requires in automated contexts.

The EU AI Act points in the same direction for high-risk systems by requiring automatically generated logs and post-market monitoring, so one trace can serve both AI accountability and GDPR duties.
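One way to make such a trace concrete is an append-only event log written at every step. The sketch below assumes a hypothetical TraceEvent schema and JSON-lines storage; the field names are illustrative, not a standard:

import json
import time
import uuid
from dataclasses import dataclass, field, asdict


@dataclass
class TraceEvent:
    """One step of an agent run: a tool call, a data access or a state update."""
    run_id: str
    step: str               # e.g. "tool_call:map_api"
    sub_purpose: str        # e.g. "check_travel_time"
    data_categories: list   # e.g. ["location", "calendar"]
    destination: str        # system or vendor the data went to
    region: str             # where it was processed, for transfer mapping
    legal_basis: str        # e.g. "contract", "consent"
    timestamp: float = field(default_factory=time.time)
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))


def append_trace(event: TraceEvent, path: str = "agent_trace.jsonl") -> None:
    """Append one event as a JSON line so questions are answered by query, not archaeology."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")


def events_for_user(run_ids: set, path: str = "agent_trace.jsonl"):
    """Yield every event tied to a user's runs, the raw material for an Article 15 response."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            event = json.loads(line)
            if event["run_id"] in run_ids:
                yield event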

Memory governance with tiers. Not all memories are equal. Short-lived working memory that stores a few turns of context has a different risk profile than long-lived profiles or vector embeddings. Treat them differently. Enforce strict time-to-live limits on ephemeral state.

Use purpose-scoped namespaces and retention budgets for long-term data storage. Make deletion and unlearning callable operations and capture evidence that primary data and derived artifacts were handled according to policy. These are pragmatic ways to implement storage-limitation and privacy-by-design obligations.
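A simplified illustration of tiered memory with time-to-live enforcement and deletion as a callable operation; the namespaces, retention budgets and evidence handling are assumptions a real platform would flesh out:

import time
from dataclasses import dataclass, field


@dataclass
class MemoryTier:
    """Retention policy for one class of agent memory."""
    namespace: str     # purpose-scoped, e.g. "scheduling/working"
    ttl_seconds: int   # retention budget for this tier
    store: dict = field(default_factory=dict)

    def put(self, key: str, value) -> None:
        self.store[key] = (value, time.time())

    def get(self, key: str):
        value, written = self.store.get(key, (None, 0.0))
        if value is not None and time.time() - written > self.ttl_seconds:
            del self.store[key]   # expired: enforce storage limitation
            return None
        return value

    def purge_subject(self, keys: list) -> list:
        """Callable deletion: remove a data subject's entries and return evidence."""
        deleted = [k for k in keys if self.store.pop(k, None) is not None]
        return deleted            # log this as proof the erasure actually ran


# Short-lived working memory and long-lived embeddings get different budgets.
working_memory = MemoryTier(namespace="scheduling/working", ttl_seconds=15 * 60)
embedding_store = MemoryTier(namespace="scheduling/embeddings", ttl_seconds=30 * 24 * 3600)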

Live controller and processor mapping. In an agentic stack, roles can differ per use. A plugin may act as your processor on one call and determine purposes and means on the next. Maintain a registry that resolves roles at runtime, ties each resolution to contractual hooks, and records cross-border pathways. 

The EDPB's guidelines on controllers and processors stress that role allocation is functional and must reflect actual purposes and means. Mapping should encode that rule. Couple it with prechecked transfer tools such as SCCs.
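One possible shape for such a registry, resolving the role per vendor and per operation and failing closed when no mapping or transfer tool exists; the vendor names, contract references and operations here are invented for illustration:

from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class RoleResolution:
    vendor: str
    role: str                      # "processor" or "controller"
    contract_ref: str              # DPA or controller terms this call relies on
    region: str
    transfer_tool: Optional[str]   # e.g. "SCCs_2021"; None if no transfer


# Keyed by (vendor, operation): the same plugin can resolve differently per call.
ROLE_REGISTRY = {
    ("summarizer", "summarize_on_our_instructions"): RoleResolution(
        vendor="summarizer", role="processor", contract_ref="DPA-2024-017",
        region="EU", transfer_tool=None),
    ("summarizer", "improve_own_models"): RoleResolution(
        vendor="summarizer", role="controller", contract_ref="controller-terms-v3",
        region="US", transfer_tool="SCCs_2021"),
}


def resolve_role(vendor: str, operation: str) -> RoleResolution:
    """Resolve the GDPR role for this exact call; block the call if unmapped."""
    resolution = ROLE_REGISTRY.get((vendor, operation))
    if resolution is None:
        raise PermissionError(f"No role mapping for {vendor}/{operation}; call blocked")
    if resolution.region != "EU" and resolution.transfer_tool is None:
        raise PermissionError(f"Transfer to {resolution.region} lacks a transfer tool")
    return resolution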

Implement in operations without stalling delivery

Swap one-time privacy reviews for continuous governance. Start with a predeployment set of controls that test AI agents against synthetic and edge-case scenarios to detect over-collection, unauthorized tool use, and purpose drift before launch.

Continue in production with real-time policy enforcement: allow lists for tools and plugins, data egress filters, geofencing, sensitive category detectors and kill switches for when the agent strays out of bounds.
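A deliberately minimal sketch of what that runtime enforcement could look like; a production system would replace the keyword-based sensitive-data check with a proper classifier, and the tool and region allow lists are hypothetical:

ALLOWED_TOOLS = {"calendar", "map_api", "summarizer"}
ALLOWED_REGIONS = {"EU", "EEA"}
SENSITIVE_MARKERS = ("diabetes", "diagnosis", "union")   # toy detector for illustration only


class KillSwitch(Exception):
    """Raised to halt the agent run and alert the privacy team."""


def enforce(tool: str, region: str, payload: str) -> None:
    """Runtime checks applied to every outbound tool call."""
    if tool not in ALLOWED_TOOLS:
        raise KillSwitch(f"Tool '{tool}' is not on the allow list")
    if region not in ALLOWED_REGIONS:
        raise KillSwitch(f"Egress to region '{region}' is geofenced")
    if any(marker in payload.lower() for marker in SENSITIVE_MARKERS):
        raise KillSwitch("Possible special-category data in outbound payload")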

Give privacy teams dashboards they will use. Prioritize lineage by run and by user, transfer destinations and legal bases, role resolutions, and open DSAR and DPIA links that directly anchor to the relevant execution traces. Tie model and tool versions into the same view to correlate behavior and software changes.

Contracts should match runtime realities. If agents are exposed to third-party plugins, require trace-level logging and deletion APIs in processor agreements and DPIAs. Make sub-processor notifications event-driven rather than quarterly emails. Where roles may differ by request or operation, specify a deterministic role-resolution policy in the contract and implement the same logic in code with conformity tests.

Agree up front on the legal mechanism for transfers and on the update pathway if vendor locations change, consistent with Chapter V requirements.

Finally, treat privacy by design as a team sport with shared artifacts. Embed a small privacy engineering squad inside the AI platform group. Give that team the mandate to implement the four controls: purpose locks, execution traces, memory governance and controller-processor mapping.

Product teams should get software development kits and policy-aware middleware so compliant patterns are the default rather than the exception. That approach concretely implements Article 25's design-and-default duties.

Why this accelerates innovation

Skeptics worry this slows innovation. In practice, the opposite happens. Purpose locks prevent expensive incidents. Traces shorten incident response because answers are discoverable rather than reconstructed. Memory policies simplify DSARs and deletion requests.

Most importantly, these controls let you say yes to agentic AI use cases with confidence because the controls are real and the evidence exists.

Bottom line

The GDPR's ideals are not the problem; the implementation model is. Agentic AI requires moving from static documents to live mechanisms that operate as the system runs. If that shift is made, the law remains workable, the systems stay useful, and trust becomes something that can be demonstrated rather than merely asserted.

Keivan Navaie is professor of intelligent networks at Lancaster University's School of Computing and Communications and formerly served as principal AI technology advisor to the U.K. Information Commissioner's Office.