IAPP AI Governance Global North America 2025
BOSTON
18-19 September
Minding Mindful Machines: AI Governance Considerations for AI Agents
Friday, 19 Sept.
09:00 - 10:00 EDT
Intermediate level
Leading large language model (LLM) developers (including OpenAI, Google and Anthropic) have released early versions of technologies described as “AI agents.” Unlike earlier automated systems and even LLMs themselves, these systems exercise autonomy over how to achieve complex, multi-step tasks, such as navigating a user’s web browser to take actions on their behalf (e.g., making restaurant reservations or resolving customer service issues). While this could enable a wide range of useful, time-saving applications, the latest AI agents also raise heightened and novel AI governance risks related to the collection and processing of personal data, output accuracy, safety testing and human oversight. For example, while current LLM-based systems may train and operate on personal data, they lack the tools (e.g., application programming interfaces, data stores and extensions) to access external systems and data. In contrast, the latest AI agents may be equipped with these tools, which could enable them to obtain real-time information about individuals. This panel will unpack the defining characteristics of the newest AI agents, identify AI governance considerations that practitioners should be mindful of when designing and deploying these systems, and highlight potential responses to these governance risks (e.g., on-device processing and data collection limitations).
What you will learn:
- The defining characteristics of the latest AI agents and how they differ from existing LLMs.
- The AI governance questions raised by LLMs, such as challenges related to the collection and processing of personal data for model training.
- How the unique design elements and characteristics of the latest agents may exacerbate existing — or raise novel — AI governance challenges around the collection and disclosure of personal data, security vulnerabilities, the accuracy of outputs, barriers to alignment, explainability and human oversight.
Moderator and speakers

Daniel Berrick
CIPP/E, CIPP/US
Senior Policy Counsel for AI
Future of Privacy Forum

Jared Bomberg
U.S. Policy Lead, Privacy and Data Strategy

Liza Cotter
CIPP/US
Privacy and Cybersecurity Partner
Weil, Gotshal & Manges