IAPP AI Governance Global Europe 2026

DUBLIN

1-4 June

Back to conference agenda

Who Watches the Watchers: Governance for Human in the Loop

Thursday, 4 June

14:30 - 15:30 GMT

Intermediate level

BREAKOUT SESSION | AI GOVERNANCE | AI LITERACY | AI AND MACHINE LEARNING | BENCHMARKING | FRAMEWORKS AND STANDARDS | RISK MANAGEMENT | STRATEGY AND GOVERNANCE | TECHNOLOGY

In current AI laws and regulations, "human in the loop" (HITL) is generally employed as the critical mechanism for ensuring safety, accountability and alignment with fundamental human rights and ethical values. This is particularly true in high-risk or high-stakes AI applications, such as those used for health care, critical infrastructure, law enforcement, or employment decisions. For example, the EU AI Act explicitly mandates human oversight for "high-risk" AI systems to prevent or reduce risks to health, safety, or fundamental rights. Broadly speaking, HITL is intended to ensure accountability (e.g., a natural person or the deploying company approving, correcting, or overriding the AI's output) and fairness (e.g., identifying, mitigating and correcting bias embedded in the data and algorithms). But is your company ensuring that the human in the loop isn't the one introducing bias, or making it worse?

What you will learn: 

  • How to identify bias introduced by human reviewers into AI oversight processes. 
  • How to design governance frameworks that keep “human in the loop” interventions accountable. 
  • How to implement HITL oversight in line with global AI regulations (EU AI Act, NIST AI RMF, ISO/IEC 42001) without creating new compliance “blind spots”.

Moderator and speakers


Adam Bagwell

AIGP, CIPP/US

AI and Privacy Counsel

Pivotal


William Dummett

CIPP/E, CIPP/US, CIPM, CIPT

AVP, Digital Legal Office Attorney Lead

Eli Lilly and Company