How the FCC and FTC regulate AI-powered robocalls


Contributors:
Kathleen Scott
CIPP/US, CIPT
Partner
Wiley Rein
Stephen Conley
Aly Apacible-Bernardo
CIPM
Privacy and Data Policy Associate
Meta
Artificial intelligence promises to be a game-changer for robocalling and robotexting in both good and bad ways. Responsible actors use AI to identify and stop harmful calls. Recently, the U.S. Federal Trade Commission hosted a Voice Cloning Challenge to promote multidisciplinary approaches to protecting consumers from AI-enabled voice cloning harms, such as fraud and the misuse of biometric data. The winners of the challenge included companies that use AI to differentiate between real voices and synthetic ones.
At the same time, bad actors use AI to exploit consumers. Deepfake calls, in which AI is maliciously leveraged to clone a person's voice, illustrate the perils of AI in the robocall space. A prominent example is the series of AI-powered deepfake calls, purportedly from President Joe Biden, made to New Hampshire voters ahead of the state's primary election.
Policymakers and enforcement agencies at the federal and state levels are exploring the intersection of AI and robocalling/texting, with new and proposed laws emerging in numerous places. Nineteen states recently enacted laws to regulate synthetic media and deepfake technology related to elections. This year, Utah and Colorado also enacted laws that require disclosures when certain AI tools are used to interact with consumers. Several bills have also been introduced in the U.S. Congress to address the issue of fraudulent deepfakes and other AI-powered fraud calls, including the Do Not Disturb Act, introduced by Rep. Frank Pallone, D-N.J., which would "require disclosure of the use of AI to emulate human interaction over text or phone."