Notes from the AI Governance Center: AI governance has officially been woven into the IAPP Global Summit
The integration of AI governance into the IAPP Global Summit programming no longer seemed like an addition to a privacy conference, writes Ashley Casovan.
Contributors:
Ashley Casovan
Managing Director, AI Governance Center
IAPP
Something felt different at this year's IAPP Global Summit. The integration of artificial intelligence governance into the program no longer seemed like an addition to a privacy conference. This year, the AI governance sessions, questions from participants, and hallway chatter sounded more informed, more specific and more nuanced.
High-level musings about the uptake and impact of AI or introductory information sessions about the EU AI Act evolved into action-oriented AI governance panels with practical examples, guidance and even takeaway frameworks, as well as meaningful dialogue between regulators and AI deployers on implementation questions.
While it's impossible to see everything and everyone at Summit, I was pleased to be able to attend several sessions and meet so many AI governance practitioners this year. These are my key takeaways from the IAPP Global Summit 2026.
Change as opportunity
To kick off, Travis LeBlanc's conversation with cognitive scientist Maya Shankar highlighted a series of stories about people who had experienced profound trauma and used those moments as an opportunity to ask a harder question: What is truly self-defining, and what is simply circumstance?
She shared a story about a prisoner that stuck with me. For her book "The Other Side of Change," Shankar interviewed a man who recounted his feelings prior to incarceration. He contemplated the person he would become, wondering what this experience would do to him. When faced with the realities of prison, he found there were more choices available to him than anticipated. He ended up writing poetry and mentoring younger inmates.
Most importantly, he made choices that led to a positive experience while there, which ultimately helped his future self.
While this example might seem like a far departure from the day-to-day experience of most AI governance professionals, it resonated with me because it is easy for us to feel like we are up against a behemoth. This often leads to a sense that we are losing agency when circumstances seem too difficult or complex, or when too many societal decisions have already been made for us.
This person's choices are a good reminder that even in less-than-ideal circumstances we can create agency. We have the choice to accept that our circumstances are changing, whether we hoped for them to or not. Given our roles, our individual choices can help steer toward a positive outcome during this significant period of change.
Inspired royalty
A highlight for many was the keynote by Prince Harry, Duke of Sussex, and his conversation with the IAPP's Joe Jones. Harry's lived experience in the public eye since birth has given him a unique view of privacy. For me, the most compelling part of his work was how his experience inspired meaningful reflection on the role and impact of social media and technology in our society. Harry's keynote speech and a recap of his conversation with Jones are available for your perusal.
Two concepts stood out. First, the concerns about trust that many in the AI governance community and broader society have raised about the increased use of technology are core to his motivation. He is using his platform to inspire others to leverage their own platforms to create change, stating, "the question isn't whether our concept of trust is broken; it's whether we're willing to rebuild it for everyone's sake."
Harry drew on precedents from aviation, medicine and finance, industries where trust was not assumed but engineered. Trust did not emerge from good intentions alone. It was the product of governance structures, incentive alignment and deliberate decisions to put rules in place before the worst harms materialized. Echoing Shankar's discussion of the power of choice in the face of change, he argued that the technology sector cannot wait for behavior to change on its own.
Second, when asked what makes him hopeful, he shared that it was "everyone in this room": the people governing technology, ensuring privacy, preventing cyberattacks. It was encouraging to hear that those who have devoted their careers to digital governance are not only being recognized but are also part of what is inspiring hope.
A tool is not a moral object
In conversation with IAPP Vice President and Chief Knowledge Officer Caitlin Fennessy, CIPP/US, legendary author Salman Rushdie shared that a tool is not itself a moral object. This is the concept that will probably stay with me the longest. It again builds on the idea that we all have agency, that technology itself is simply a tool, and it is up to all of us — both on the individual and collective levels — to decide how it is wielded.
Rushdie spoke about his own evolving view of privacy. After his near-fatal attack, he said, his experience reshaped how he thinks about the context in which concepts like privacy should be understood, and he reminded us that privacy and other harms cannot be assessed in the abstract.
The right decisions about how much, or how little, privacy a person needs must be made with full awareness of the circumstances they are navigating. What is appropriate in one context is entirely wrong in another. He then reminded us that not everyone has the same access to privacy, sharing the realities of growing up in India, where privacy is not always available to people in less fortunate circumstances.
It was a good reminder that we shouldn't repeat the challenges of the built world in the digital world. For more, the IAPP's Alex LaCasse reported on Rushdie's conversation.
Digital governance professionals are creating change
While the inspirational keynotes picked up on many themes of the conference, the people doing the work and sharing their experiences built upon these concepts.
In several panels, the question of how to deal with privacy and other AI governance principles in practice was met with real-life examples. When should synthetic data be used to protect someone's privacy? If training a model with personal information will save a person's life, is it acceptable to use this data? These are no longer theoretical questions. Many professionals are starting to draw lines on what these limits are within each of their organizations.
From discussions about AI vendor contracts to best practices for building risk assessments for AI implementation, there were similar themes. Understand the objectives you are trying to achieve with these technologies. Work across teams to pull in the right subject matter experts at the right time, and don't do compliance for compliance's sake.
Getting more granular
It's probably important to note that it's not just the IAPP community that is evolving. A significant part of this year's depth is due to evolving governance needs.
One panel dove specifically into how AI governance implementation is changing. The panelists spoke about the set of triggers that digital governance teams increasingly need to track: changes in data sources, improvements in harm feedback from a wider range of sources, shifts in core functionality and model performance, better understanding of third-party risk, changes in legislation across geographical regions, and the question of timing, namely when reviews should happen and how frequently. The discussion presented different scenarios, and one of the panelists, Andrew Gamino-Cheong, shared some best practices.
In addition to the overarching best practices and framework discussions, we wanted to get into some sector-specific conversations. I was pleased to host a discussion exploring how financial institutions are approaching AI governance, asking whether they are building entirely new processes or augmenting the privacy, legal and risk frameworks already in place. More details on the discussion here.
Looking ahead
My final takeaway was that AI governance professionals are not done with change. I've started to think and write about this more, but the idea of using AI as a part of the AI governance process came up in a lot of my discussions.
What does this shift mean for the future of the AI governance profession?
Questions about future literacy requirements for AI governance professionals and the training of AI agents were on people's minds: understanding where and how to use these agents, and when and for what reviews they are better suited than humans.
During this year's conference there were more questions than best practices. However, it seems clear that future conferences, likely before we get to Global Summit 2027, will start to provide more examples of where AI governance professionals are working alongside agentic digital governance professionals.
What does this mean for us?
Change will be a constant for our profession, but as many of our keynotes emphasized, we have the agency to make that change a positive one for us and for society.
Additional IAPP Global Summit 2026 posts worth reading
Joe Jones on LinkedIn.
New Irish Data Protection Commissioner Niamh Sweeney addresses scrutiny over her appointment, shares agency priorities, by Jedidiah Bracy.
FTC Commissioner Meador stresses agency preference for 'case-by-case' enforcement, by Joe Duball.
European Data Protection Office on LinkedIn.
'About bloody time': Prince Harry welcomes lawsuits against tech firms, by the Guardian.
Barbara Cosgrove on LinkedIn.
This article originally appeared in the AI Governance Dashboard, a free weekly IAPP newsletter. Subscriptions to this and other IAPP newsletters can be found here.

This content is eligible for Continuing Professional Education credits. Please self-submit according to CPE policy guidelines.