
Voice actors and generative AI: Legal challenges and emerging protections


The disruption generative AI has brought to the entertainment industry is palpable. The Writers Guild of America recently reached a landmark settlement with the Alliance of Motion Picture and Television Producers on, among other things, acceptable future uses of and restrictions on artificial intelligence in the screenwriting process. Along with the more recently settled Screen Actors Guild agreements, these discussions are the first major attempts to systematically address the changes AI will bring to entertainment workers. Amid these negotiations, however, a significant but often overlooked group remains vulnerable: voice actors.

Voice deepfakes are here

Synthetic voice technology can be used to clone the voices of actual humans as well as generate synthetic voices, creating "deepfakes" that can be used for fraud, identity theft, and financial scams. These capabilities pose real threats of misinformation and disinformation around public figures, as their voices and likenesses can be manipulated with minimal effort. Recently, this technology was used to imitate the world's most subscribed YouTuber, MrBeast, on both YouTube and TikTok to promote fake giveaways and surveys, using his likeness and verified accounts to gain credibility. The deepfake was so convincing it passed TikTok's strong ad-moderation technology. This technology can be applied in videos at all levels, from commercial movies to political ad campaigns. In addition, in the video game industry, voice actors are concerned about the potential impact of this technology on their work, as developers can use a database of voices to remix their past performances for use in future games. 

Streaming, podcasting and gaming: Open season on manipulating voices

With Spotify's new AI Voice Translation feature, select podcasts can be translated into other languages, not by speakers of that language, but in synthetic AI voices that match the original speaker's style. Spotify notes this creates a "more authentic listening experience that sounds more personal and natural than traditional dubbing." While the technology behind Voice Translation is impressive, some podcasters are concerned about the potential impacts of automated AI translations. Unauthorized cloning aside, there are risks when translation mistakes occur, now heard in the podcaster's own voice, especially if the translated audio is taken out of context or later presumed to be original. 

Voice actors also worry digital platforms may license their recordings to train generative AI systems that produce synthetic voices. Such synthetic voice models might reduce already sporadic voice-acting opportunities or render them obsolete entirely. Additionally, in video games, modding allows fans to tailor or design their own versions of a game. There is escalating anxiety among voice actors that their voices might be repurposed for unsavory content or hateful rhetoric, all without their knowledge or consent.

Current legal protections in the US

Privacy laws

Biometric privacy laws in Illinois, Texas, and Washington classify "voice prints" as biometric identifiers. These voice prints, technologically distinct from simple voice recordings, are intricate mathematical representations capturing the specific factors of each individual's voice and are primarily used for identity authentication. While generative AI utilizes voice recordings for training to produce synthetic voices, it doesn't typically require these uniquely individualized voice prints. As such, the protections afforded by these privacy laws aren't sufficient to address the most common harms anticipated by the voice-acting profession. Washington's law explicitly omits audio recordings and any data generated from them.

But for voice actors, voices are not simply personal data or used for identification. Many earn their livelihood by adopting multiple voices for the characters they portray. Thus, applying privacy laws to them is misplaced in principle, and in any case, recourse under them would not result in the desired compensation. 

Copyright protection

U.S. copyright law protects "original works of authorship fixed in any tangible medium of expression." Under this, the 9th U.S. Circuit Court of Appeals has held that a voice is not copyrightable since the sounds are not fixed. This interpretation leaves voice actors with potential copyright claims only for the actual recordings, the definition of which focuses on works resulting from the fixation of a series of musical, spoken or other sounds. 

To the extent that they would apply, voice actors' exclusive copyright protections in eligible audio-only sound recordings are those of:

  • Reproduction: The right to duplicate the sound recording in copies that directly or indirectly recapture the actual sounds fixed in the recording.
  • Derivative work: Works that rearrange, remix, or otherwise alter the sequence or quality of the original sound recording.
  • Public performance: This right is specific to digital audio transmissions, excluding traditional mediums like terrestrial radio or non-subscription broadcast transmissions.

Given generative AI's ability to create new outputs that may not easily be categorized as derivative or substantially similar, it may be challenging to argue infringement even where these protections apply.

However, the statutory definition of sound recordings notably excludes sounds accompanying motion pictures or other audiovisual works, essentially excluding voice acting work in television, motion pictures and video games. While voice actors may be recognizable as authors of the sound recordings (since their performances are featured), the employer or the party that commissioned the voice work is considered the author by default if the work is part of a motion picture, audiovisual work or translation, and the contract expressly states that the sound recording is created as a work made for hire. In such scenarios, voice actors would find their potential claims to copyright effectively extinguished.

Torts: The right of publicity

Unlike copyright or statutory privacy laws, the traditional tort for the right of publicity holds some promise as a potential avenue of protection for voice actors. The right of publicity protects individuals from the unauthorized commercial use of their personal identity, encompassing a person's name, image, likeness, and, notably, voice. In the U.S., the right of publicity varies by state, offering protection either through common law or statute. The scope of these protections varies — for instance, some states require a person's identity to have commercial value, and some states require actionable violations to be commercial in nature. Where this right is defined by statute, its scope is limited to the specifications of that statute. For example, New York's right of publicity law did not include protection for a person's voice until 1995.

However, the right of publicity may still prove insufficient for voice actors since, as noted above, many perform using voices different from their own. Appropriating the voice of "Bart Simpson" is more an infringement on the character's brand than an issue concerning the identity of the voice artist, Nancy Cartwright. And although specific phrases or jingles can be trademarked, registering a voice as a trademark is generally not allowed. Only where their distinct, personal voice is replicated might voice actors find legal recourse under right of publicity laws.

Statutory developments

The federal discussion draft of the Nurture Originals, Foster Art, and Keep Entertainment Safe Act aims to protect both voice and visual likeness from unauthorized AI. As it stands, the act could penalize those who produce or host "unauthorized digital replicas" of sound recordings. The act recognizes that digital replicas can be made of voices in sound recordings or audiovisual works, and it explicitly defines a "sound recording artist" as an individual who creates or performs in sound recordings for economic gain or livelihood. However, if a voice actor's distinct voice is modified to craft an entirely new synthetic voice distinguishable from the original, the current draft of the NO FAKES Act would not offer relief.

Legal protections in the UK

In the U.K., no laws explicitly recognize a standalone right of publicity, meaning voice actors have no absolute right to control how their voices are used for commercial purposes. In addition, under English copyright law, and in contrast to U.S. law, works "generated by computer in circumstances such that there is no human author of the work" can be protected. Voice actors may therefore be able to rely only on the "law of passing off" to safeguard their interests against entities using their voices to deceive the public. In most passing-off cases, the claimant must satisfy the "classic trinity" test, demonstrating the following:

  • They have a reputation or goodwill associated with their name or image.
  • There has been a misrepresentation to the public, falsely leading them to believe that the goods or services being offered are associated with the claimant.
  • The claimant has suffered some form of harm or damage.

Unless laws evolve to keep pace with technological advancements, relying on passing off claims to challenge the unauthorized use of an artist's voice or likeness may prove futile. The artist would need to demonstrate substantial reputation and goodwill associated with their voice or likeness, a hurdle often reserved for highly renowned artists. Additionally, they would need to prove that a significant proportion of the audience consuming the AI-generated content would be misled into believing it is the artist's authentic work. 

Other limited protections may be found in performers' rights that, among other things, prevent people from recording or broadcasting a live performance, recording a broadcast of a live performance, or copying a recording of the performance. Still, as with other legal options, this is limited in scope and application and largely insufficient in the age of AI.

Possible regulatory solutions

In the European Union, the General Data Protection Regulation gives voice actors enforceable legal rights over how organizations collect, process and share their personal data. To the extent that voice samples can be used to identify a person (outside unique technical identification of biometric data), they are considered personal data. If voice actors believe their data protection rights have been breached, they can lodge a complaint with their national data protection authority or seek redress in courts. 

As with all personal data, if an actor's voice falls under GDPR protections, any global organization has a legal obligation to respect those rights. Under the GDPR, protections for voice actors would include:

  • The right to be informed: organizations must alert talent if they are collecting, processing or sharing their personal data.
  • The legal basis for collection, in this case, consent, must be updated at specified intervals.
  • The purpose and duration of processing must be specific, not unlimited.
  • The right to access.
  • The right to object and deletion of data.
  • The right to rectification.
  • Security — encryption and controls on access and sharing.

While these protections might not answer all the challenges raised around the collection of voices for training AI, and the impact of synthetic or cloned voices generated by AI, expanding them to other countries and jurisdictions would be a strong start toward at least basic protections. 

Additional steps might include covering voices and voice actors in union-negotiated protections, such as the recent writers and actors agreements. Ultimately, it will likely be necessary to consider whether intellectual property rights might play a role in providing protections or whether additional, targeted statutory protections are required.

Voice actors have limited recourse against the unauthorized use of their voices. To effectively advocate for themselves regarding generative AI, voice actors should continue to:

  • Educate themselves about generative AI. 
  • Negotiate contracts that give them some control over how their voices are used. 
  • Support organizations working to protect their rights.
  • Advocate for legislation that addresses the rights of voice actors. 
