Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
Deepfake technology, leveraging artificial intelligence to create hyper-realistic synthetic media, has emerged as both an innovation and a challenge to global governance. It has significant implications for privacy, misinformation, identity fraud and societal trust.
The term "deepfake" often carries negative connotations and different jurisdictions — particularly those in Australia, China, the EU, Singapore, the U.K. and U.S. — are navigating governance challenges posed by the technology through the three key pillars of prevention, protection and response.
Legal, regulatory and ethical frameworks are also evolving to manage the risks associated with deepfake technology while upholding privacy rights and trust in the digital age.
Prevention: Deterring deepfake misuse
Transparency and disclosure requirements. Governments are implementing legal frameworks to prevent deepfake-related deception before harm occurs. The EU Artificial Intelligence Act, which takes full effect in August 2026, mandates that AI-generated or manipulated media be clearly labeled unless used for artistic or journalistic purposes. China's Provisions on the Administration of Deep Synthesis Internet Information Services include similar language, requiring AI-generated content to be labeled and mandating identity verification to prevent anonymous misuse. Platforms must embed digital watermarks to ensure traceability.
Australia has not introduced explicit deepfake labeling laws, but its Online Safety Act grants the eSafety Commissioner authority to establish "Basic Online Safety Expectations." These expectations guide platforms in adopting best practices, such as watermarking and content provenance tracking, to enhance transparency in AI-generated media.
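To make obligations like watermark embedding and provenance tracking concrete, the sketch below shows one minimal way a platform could attach a machine-readable disclosure label to a generated image. It is an illustration under stated assumptions, not a compliance recipe: it assumes the Pillow imaging library, and the tag names ("ai_generated", "provenance") are hypothetical rather than drawn from any statute above.

```python
# Minimal sketch: embedding a disclosure label in PNG metadata with Pillow.
# The tag names below are illustrative assumptions, not a mandated standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str, generator: str) -> None:
    """Write a copy of the image carrying machine-readable disclosure tags."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # disclosure flag
    metadata.add_text("provenance", generator)  # originating tool or model
    image.save(dst_path, pnginfo=metadata)

def read_disclosure(path: str) -> dict:
    """Recover the disclosure tags so downstream platforms can surface them."""
    return dict(Image.open(path).text)

# Example: label an output, then confirm the tags survive the round trip.
# label_synthetic_image("output.png", "output_labeled.png", "example-model-v1")
# print(read_disclosure("output_labeled.png"))
```

One design limitation is worth noting: metadata tags are trivially stripped when a file is re-encoded, which is why robust watermarking schemes embed the signal in the media itself and why labeling rules are often paired with traceability obligations.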
Election integrity and misinformation laws. Several jurisdictions have introduced election-specific regulations to counter AI-driven disinformation. The EU's Digital Services Act mandates that platforms mitigate AI-generated disinformation, particularly during election cycles. France's Law Against the Manipulation of Information, often referred to as the Fake News Law, allows courts to order the removal of misleading AI-generated political content. Singapore's Protection from Online Falsehoods and Manipulation Act enforces labeling requirements and the removal of deceptive deepfake election content.
In the U.S., free speech protections complicate deepfake regulation at the federal level. However, several states have enacted targeted measures. Texas's Senate Bill 751 prohibits deceptive deepfakes in political campaigns, while Minnesota's House File 1370 criminalizes the distribution of materially deceptive deepfake media depicting election candidates.
Protection: Safeguarding rights, privacy and trust
Privacy and personal data protections. Deepfakes raise privacy risks when likenesses and voices are manipulated without consent. The EU General Data Protection Regulation and China's Personal Information Protection Law classify biometric data, such as facial images and voice recordings, as sensitive personal information, requiring legal justification or explicit consent for processing. The GDPR's right to erasure enables individuals to request the removal of unauthorized deepfake content.
The U.K.'s Online Safety Act criminalizes the sharing of nonconsensual sexually explicit deepfakes. Australia's Criminal Code Amendment (Deepfake Sexual Material) Act imposes penalties of up to six years' imprisonment for creating, possessing or distributing nonconsensual AI-generated intimate content. Singapore's Penal Code (Amendment) Act similarly criminalizes nonconsensual intimate deepfakes.
Data protection and biometric rights. Singapore's Personal Data Protection Act mandates consent before collecting or using biometric representations, aligning with global standards. The GDPR likewise treats biometric data as sensitive and requires explicit consent for its processing, a safeguard that bears directly on unauthorized deepfake creation.
In the U.S., Illinois' Biometric Information Privacy Act and California's Consumer Privacy Act regulate biometric data use, affecting AI-driven impersonation. China's PIPL mandates consent for biometric data use, complementing its deep synthesis provisions requiring deepfake labeling. These regulations safeguard biometric privacy, ensuring deepfake technologies comply with legal protections while balancing innovation and security.
Platform accountability and regulation. Governments are holding online platforms accountable for deepfake content. The EU's DSA mandates that platforms label AI-generated content and mitigate associated risks. China's deep synthesis provisions require AI providers to label synthetic content, verify identities and prevent deceptive deepfakes.
The U.K.'s OSA requires platforms to mitigate risks posed by synthetic media, particularly where such content may mislead users. Singapore's POFMA enables authorities to issue correction orders or takedown notices for misleading deepfake content that affects elections or national security.
Australia's Online Safety Act empowers the eSafety Commissioner to issue takedown notices for nonconsensual deepfakes, though it does not mandate labeling.
In the U.S., while no comprehensive federal law specifically addresses deepfakes, legislative efforts such as the DEEPFAKES Accountability Act are ongoing. Section 230 of the Communications Decency Act grants platforms immunity for user-generated content but does not preclude federal deepfake regulations.
Response: Enforcement, litigation and future governance
Criminal and civil enforcement. Governments are enforcing deepfake-related laws through criminal prosecutions and civil litigation targeting fraud, defamation and regulatory violations.
The U.S. Financial Crimes Enforcement Network has warned about the increasing use of synthetic media in financial fraud schemes. Singapore's Penal Code (Amendment) Act criminalizes identity theft involving synthetic media, ensuring fraudulent impersonation using AI-generated content is subject to prosecution.
In parallel, the U.S. Federal Trade Commission has acted against AI-driven deception through Operation AI Comply, an initiative targeting fraudulent AI practices and misleading claims.
Courts worldwide are addressing reputational harm caused by deepfake content. Australia applies defamation laws to manipulated digital media that falsely portrays individuals, while its Criminal Code Amendment (Deepfake Sexual Material) Act imposes up to six years' imprisonment for coercion, blackmail or harassment using deepfakes. Singapore's defamation laws similarly apply to AI-generated content intended to cause reputational harm.
Platform regulation and compliance. Governments are imposing stricter requirements on digital platforms to moderate deepfake content. The EU's DSA mandates that platforms remove harmful deepfake content and implement risk assessments to mitigate AI-driven disinformation. Cyberspace Administration of China regulations require platforms to label and remove deceptive deepfake content, with noncompliance leading to penalties.
Future considerations
As deepfake technology advances, regulatory frameworks must evolve to address emerging risks while supporting innovation. Governments are shifting from reactive enforcement to proactive regulation, incorporating real-time monitoring, automated content verification and standardized watermarking to enhance transparency.
AI developers are increasingly being held accountable, with discussions on liability emphasizing ethical AI deployment and mitigation of unintended harms. Future regulatory developments may include global standards, cross-border enforcement mechanisms, and AI-generated content authentication protocols.
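As a rough illustration of what a content authentication protocol might look like at the file level, the sketch below binds disclosure labels to a hash of the content and signs the result, so that tampering with either the labels or the media is detectable. It is a minimal sketch using only Python's standard library; the manifest format and the shared-secret key handling are illustrative assumptions, not any standard referenced above.

```python
# Minimal sketch: a signed manifest binding disclosure labels to content.
# The manifest format and shared-secret key are illustrative assumptions.
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-managed-signing-key"  # hypothetical key material

def sign_content(content: bytes, labels: dict) -> str:
    """Produce a manifest binding the labels to a SHA-256 digest of the content."""
    digest = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"sha256": digest, "labels": labels}, sort_keys=True)
    tag = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"payload": payload, "hmac": tag})

def verify_content(content: bytes, manifest: str) -> bool:
    """Check the manifest is intact and still matches the content."""
    record = json.loads(manifest)
    expected = hmac.new(SECRET_KEY, record["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["hmac"]):
        return False  # manifest was altered
    payload = json.loads(record["payload"])
    return payload["sha256"] == hashlib.sha256(content).hexdigest()

# Example round trip: sign a clip's bytes, then detect a single-byte edit.
manifest = sign_content(b"video-bytes", {"ai_generated": True})
assert verify_content(b"video-bytes", manifest)
assert not verify_content(b"video-byteX", manifest)
```

In production, a public-key signature would replace the shared secret so that anyone can verify a manifest without being able to forge one; the principle of cryptographically binding labels to content is the same.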
As deepfakes become more sophisticated, international cooperation will be essential in building resilient governance structures that balance security, privacy and technological progress.
Claudia Koon Ghee Wee, AIGP, is based in Australia and specializes in AI engineering, AI assurance and governance.