IAPP AI Governance Global Europe 2026
DUBLIN
1-4 June
How to Detect and Reduce Bias in Human-Centric AI Models: Introducing FHIBE
Wednesday, 3 June
14:45 - 15:45 GMT
Intermediate level
AI has moved from experimentation to deployment, influencing decisions, access and experiences in almost every aspect of society. As AI integration accelerates, companies and institutions developing or using AI must focus on fairness, transparency, global representation and utility while adhering to ethical and regulatory guidelines worldwide. Providing a roadmap for addressing these issues, this panel will discuss Sony AI’s latest research, “Fair human-centric image dataset for AI benchmarking” (FHIBE), the first AI evaluation dataset built with ethics in mind. The researchers behind this ethically sourced, consent-based dataset will explore how AI governance professionals and developers can use it to evaluate human-centric computer vision models for biases and stereotypes. They will also share how the dataset can serve as a model for responsible and ethical protocols across the entire data lifecycle: from sourcing (including consent and compensation) to curation (including annotation and privacy preservation), and from management to use in evaluating models and training bias mitigation tools.
What you will learn:
- Understand the global urgency of ensuring fairness, transparency, representation and utility, and of adhering to ethical and regulatory frameworks, in AI development.
- Discover new research introducing a practical tool that helps AI governance professionals and developers identify and mitigate bias, ensuring fairness in their models.
- Gain practical insights from AI researchers and developers on how to curate datasets that align with today’s ethical and regulatory requirements.
Moderator and speakers

Tiffany Georgievski
AI Governance Counsel
Sony

Austin Hoag
Senior AI Engineer
Sony

Victoria Matthews
Senior AI Policy Specialist
Sony

Alice Xiang
Global Head of AI Governance
Sony