Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
Digital clones — whether lifelike avatars, voice replicas or interactive chatbots — present exciting opportunities for creators to interact with fans, offer one-on-one coaching at scale and continue their legacies online. For companies, however, the generation and operation of these digital clones present novel challenges under laws — both new and old — relating to privacy, rights of publicity and artificial intelligence transparency.
Companies that create digital clones face risks around identity verification, exploitation and misuse, and should be aware of key compliance areas and security considerations.
Biometric information collection
The creation of a digital clone or replica may involve analyzing photos, videos and/or audio recordings to generate a realistic, digital replica of an identifiable individual. This digital clone may mimic the individual's appearance, gesticulation and/or voice. Companies that create digital clones should evaluate whether the process involves the creation of biometric information based on photos, videos or audio.
Illinois' Biometric Information Privacy Act, for example, applies to biometric information, meaning information based on certain biometric identifiers — including voiceprints or facial geometry — that is used to identify an individual. Under the law, companies that collect biometric information have a number of obligations, including obtaining informed written consent before collecting and processing the data, refraining from selling or otherwise profiting from it, and adhering to specific retention and destruction requirements.
Failure to comply with BIPA can result in significant exposure. The statute provides a private right of action, and plaintiffs may recover liquidated damages of USD1,000 per negligent violation or USD5,000 per intentional or reckless violation, or actual damages, whichever is greater. Litigation risk in Illinois is high given the active class action plaintiffs' bar and the significant statutory damages available under the law.
Texas and Washington have similar statutes regulating the commercial use of biometric information, though without a private right of action.
Separately, the collection of biometric information may trigger requirements under comprehensive state privacy laws, such as the California Consumer Privacy Act. Many of these laws impose special requirements on the collection and processing of sensitive categories of data, including requiring companies to let consumers opt in to or out of the processing of their biometric information and to carry out data protection impact assessments. Washington's My Health, My Data Act also regulates biometric data — which is expansively defined — and, like Illinois' BIPA, provides a private right of action.
Capturing and commercializing an individual's likeness
Prompted by increasingly sophisticated and widely available AI tools that let anyone create a convincing deepfake, state legislatures have grown concerned about the use of AI to generate and commercialize unauthorized digital representations of individuals.
While most states have general impersonation laws that would likely apply to deepfakes, in the last few years, several have amended their right of publicity laws to create causes of action against parties creating unauthorized digital replicas of individuals. Some of these laws require specific clauses to be included in agreements with creators.
Under California's Assembly Bill 2602, an agreement that allows for the creation and use of a digital replica of an individual's voice or likeness in place of work the individual would otherwise have performed in person must include a reasonably specific description of the intended use of the digital replica — subject to certain exceptions.
Without this description, the provision may be unenforceable unless the individual was represented either by legal counsel who negotiated on their behalf, with the commercial terms clearly and conspicuously stated in a contract or other writing signed or initialed by the individual, or by a labor union representing workers who do the proposed work, whose collective bargaining agreement expressly addresses uses of digital replicas. Companies should carefully craft agreements with creators to ensure enforceability.
Disclosing a digital clone's artificial identity to consumers
Certain laws require companies to inform consumers when they are interacting with AI. For example, companies whose digital clones interact with California residents may be subject to the Bolstering Online Transparency Act, which prohibits using a bot to communicate or interact with a person online with the intent to mislead the person about the bot's artificial identity in order to knowingly deceive them to incentivize a purchase or sale of goods or services, or to influence a vote in an election. Companies are not liable under the act if they disclose the bot's artificial identity in a manner that is clear, conspicuous and reasonably designed to inform the people with whom the bot communicates or interacts.
Similarly, once the relevant provisions of the EU AI Act take effect, providers and deployers of certain AI systems will face transparency and disclosure obligations. Providers of certain AI systems will be required to inform individuals that they are interacting with an AI system, unless this would be obvious to "reasonably well-informed, observant and circumspect" individuals considering "the circumstances and context of use." In particular, where there is a risk a consumer may be misled, companies should evaluate whether they have an obligation to disclose the clone's artificial identity. For most companies, being transparent and upfront about artificial identities will be the safest approach.
Recording communications between digital clones and consumers
Companies that capture audio recordings of communications between digital clones and consumers may trigger state recording or "wiretapping" statutes. While some states have recording laws that require only one party to consent to the recording, others have "two-party consent" laws, which require all parties to the conversation to consent.
Companies that record audio between consumers and digital clones should consider whether and when to provide notice and obtain consent to the recording.
Preventing abuse and fraud in AI clones
Deepfakes and voice clones have been used by fraudsters to persuade consumers to purchase products, to perpetrate financial fraud and to extort consumers. In fact, the U.S. Federal Trade Commission estimated impersonation scams resulted in USD1.1 billion in losses in 2023.
The FTC has signaled that the FTC Act's prohibition on deceptive or unfair conduct could apply to companies that make, sell or use tools that are "effectively designed to deceive," even if deception is not the tool's intended or sole purpose. FTC guidance encourages companies to take reasonable precautions before a product hits the market, which could include implementing built-in features to prevent misuse.
Moreover, certain privacy laws — both domestic and international — impose affirmative requirements to disclose and label AI-generated content.
Authentication and identity verification
Companies that allow creators to generate digital clones should carefully evaluate the creator onboarding process. In some cases, it may be prudent to verify the identity of the individual seeking to create the digital clone — whether through biometric or ID verification — and thereafter ensure there are robust authentication methods when they seek to access, modify or operate their digital clone.
Watermarking AI-generated content
The FTC has highlighted potential methods companies can use to address security and fraud concerns in AI-enabled voice cloning, including leveraging prevention and authentication techniques like "watermarking."
Watermarking involves embedding an identifying mark into a piece of media to track its origin. Indeed, some laws — such as the EU AI Act — require providers of certain generative AI systems to ensure content is marked in a machine-readable format and detectable as artificially generated or manipulated.
The California AI Transparency Act — which goes into effect in January 2026 — will also require covered providers to include a latent disclosure in AI-generated content that conveys the name of the covered provider and the name and version of the generative AI system that created or altered the content, among other specified information.
Companies seeking to utilize watermarking should also carefully consider the type of information contained in the watermark, as inclusion of user-identifying information in the watermark may trigger additional privacy concerns or obligations.
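To make the machine-readable marking requirement concrete, the sketch below embeds a small provenance record in a PNG image's metadata and reads it back. It is purely illustrative: the Pillow library, the metadata key and the field names are assumptions chosen for this example, the record only loosely mirrors the latent-disclosure elements described above, and a metadata tag of this kind is not a robust watermark or a substitute for a recognized provenance standard.

```python
# Illustrative sketch only: a machine-readable "AI-generated" marker stored in
# PNG metadata using Pillow. The metadata key and field names are hypothetical.
import json
from datetime import datetime, timezone

from PIL import Image, PngImagePlugin

PROVENANCE_KEY = "ai_provenance"  # hypothetical metadata key


def embed_latent_disclosure(in_path: str, out_path: str,
                            provider: str, system: str, version: str) -> None:
    """Copy an image, adding a machine-readable provenance record to its metadata."""
    record = {
        "provider": provider,    # name of the provider
        "system": system,        # name of the generative AI system
        "version": version,      # version of that system
        "created": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,
        # Note: no user-identifying information is included, per the privacy
        # consideration discussed above.
    }
    info = PngImagePlugin.PngInfo()
    info.add_text(PROVENANCE_KEY, json.dumps(record))
    with Image.open(in_path) as img:
        img.save(out_path, pnginfo=info)


def read_latent_disclosure(path: str) -> dict | None:
    """Return the embedded provenance record, if any."""
    with Image.open(path) as img:
        raw = getattr(img, "text", {}).get(PROVENANCE_KEY)
    return json.loads(raw) if raw else None
```

Because a plain metadata tag can be stripped by re-encoding or screenshotting, more robust approaches embed the signal in the content itself; the sketch is meant only to show what machine-readable marking can look like at its simplest.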
Conclusion
Companies offering digital clones must contend with a quickly evolving patchwork of laws, requiring careful consideration of issues around sensitive data collection, transparency and mitigation of security risks.
Frida Alim, CIPP/US, is a senior associate in Gunderson Dettmer's Data Privacy Group.