Organizations' broad use of agentic artificial intelligence has introduced a new set of challenges that extend beyond those posed by traditional large language models.
Using AI agents to complete user tasks can blur distinctions between data controllers and processors due to their ability to make and execute decisions with limited human input. The use of these tools raises concerns about technology’s potential impact on user choices and how individuals can protect their privacy while using these agents.
Potential vulnerabilities within LLMs alone are a significant concern for organizations, but the implementation of AI agents could raise more complex privacy implications. With an agent’s ability to act autonomously and learn in real time, system flaws or security incidents involving malicious actors could have steep consequences for businesses and consumers alike.
Hogan Lovells Partner Bret Cohen noted during a recent Future of Privacy Forum webinar with representatives from Anthropic and Snap that these risks are no longer just theoretical. “You can think of all types of things where the development of AI agents is treated in terms of probabilistic outcomes, and if a threat actor decides to exploit that, it could lead to very negative real-world outcomes," he said.
Key challenges
Companies have begun introducing safeguards and protocols to navigate AI risks to better protect everyday use cases. However, agentic tools' autonomy and memory could complicate the equation from the security standards and compliance standpoints.
Anthropic Head of Product Public Policy Ashley Zlatinov noted agentic AI can retain information across tasks and simultaneously access multiple systems.
"These capabilities were somewhat available before, but in a much more targeted way," Zlatinov said. Now, agents can access vast amounts of user data with data sources like calendars, emails, and travel systems.
Interconnectedness associated with agentic functions also raises questions about how to obtain meaningful user consent. Zlatinov indicated consent mechanisms that prompt the user before every access could lead to “consent fatigue,” similar to how consumers have come to dismiss cookie banners.
"Every 15 or 20 seconds, it’s asking, ‘Do you approve this?'" she said. “I think there's really a careful balance of what are the really high-risk areas that we always need to have consent on. Are there certain areas that maybe could be fine-tuned by the user in terms of 'I don't care if the agent is accessing these things, but I do care if it's doing this over here'?"
This balance could determine whether agentic systems can be responsibly deployed. Too many prompts could risk user indifference and may fail to satisfy regulators’ expectations of transparency and control under data protection laws such as the EU General Data Protection Regulation.
To navigate these concerns, Anthropic, alongside several other tech companies with AI agents are developing a "Swiss cheese approach, where you have many layers, so if anything falls through, there's still other security layers to pick that up," said Zlatinov.
The safeguards include strict allow-lists limiting which websites agents can access, dedicated threat-intelligence teams for continuous monitoring, and red-teaming exercises with external experts before any feature releases.
Strong technical defenses remain vulnerable to “uncontrolled input,” noted Snap Cybersecurity, Privacy and AI Associate General Counsel Justin Webb. He used the example of giving AI agent access to a personal calendar.
“I can send you any calendar invite in the world,” Webb said. “If it’s reading your calendar, I can potentially compromise you just by sending an invite.”
That lack of control, he argued, means organizations may need to distinguish between internal-only agents and those exposed to the internet, layering additional filters and monitoring systems with higher exposure.
Proactive approach to responsible use
To combat some of these challenges, Webb indicated robust system testing is at the forefront of potential solutions.
“You might want to set up a matrix of when you want humans in the loop,” he said, adding “there’s a bias toward logging and testing” that could require clearer determinations for when agents should and should not make decisions.
Webb recommended organizations ensure they don’t delude themselves into thinking that AI agents are an "objective outside observer," especially taking into account that agents "take on the biases of the individuals that train them."
Third-party provider evaluations can also adopt consideration for how much is being invested into alignment research and testing, rather than assuming any model will act neutrally.
Developers have experimented with “scratchpad” features that display an agent’s reasoning, but that logging could create complications. Webb noted if companies are “generating all these logs about how autonomous systems are making decisions, [organizations] have to think about whether that becomes discoverable in litigation."
Lexie White is a staff writer for the IAPP.
