ANALYSIS

Emerging AI governance considerations in creative industries

Generative AI's ability to create realistic synthetic voices and images is exposing gaps in traditional privacy and biometric laws, prompting new legislation and contracts that focus on who can authorize, use and profit from a person's AI-generated likeness.


Contributors:

Haley Fine

CIPP/E

Associate general counsel, privacy

SiriusXM

While current artificial intelligence governance discussions tend to focus on consumer harms, such as profiling and automated decision-making, another issue is emerging. 

AI tools can now recreate a person's voice and image — together, one's "likeness" — with increasing realism, a capability that emerged after most existing privacy frameworks were developed. 

In media and other content-driven industries, this raises questions around who may authorize the use of a person's likeness, how that use should be compensated and how synthetic outputs may be used or distributed. These issues intersect with privacy law but are not fully addressed by existing frameworks. 

At the same time, established principles, such as notice, consent and purpose limitation, remain relevant where personal data is used to train, develop or deploy these systems.

Most consumer data privacy laws regulate how personal data is collected, used, shared and retained while granting individuals rights over their data. Biometric laws similarly govern personal identifiers, such as voiceprints and facial geometry, setting rules for their collection and use and granting individuals related rights. 

These laws generally assume an identifier is first captured and then processed within defined limits. The emergence of AI-generated likeness complicates this framework. 

Rather than capturing biometric identifiers, which would be directly regulated by applicable privacy laws, generative AI systems create synthetic ones. Even where notice and consent mechanisms are clear, it may be difficult to determine how those apply when a system generates a new synthetic output based on previously available inputs. In that context, the analysis begins to shift from whether personal data was processed lawfully to whether the creation of a synthetic likeness was properly authorized.
