Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
On 24 April 2025, Brazil enacted Law No. 15.123/2025, which increases penalties for psychological violence against women when the offense is committed using artificial intelligence or other technological tools capable of altering images, sounds or videos.
The law amends Article 147-B of Brazil's Penal Code, which defines the offense of psychological violence against women. Under the existing provision, the offense is punishable by imprisonment of six months to two years and a fine. The new law introduces an aggravating factor: if the offense involves the use of AI, such as deepfake technology, or any other means capable of manipulating the victim's image or voice, the applicable criminal penalty may be increased by up to half.
This legislative update reflects growing concerns about the misuse of AI to create manipulated media that can severely impact individuals' psychological well-being and personal reputation.
Although primarily addressing criminal liability, Law No. 15.123/2025 also carries broader implications for organizations working with AI technologies:
Data protection compliance. Organizations that develop, host or distribute AI tools capable of altering personal data, including audiovisual material, must implement safeguards to prevent misuse.
Technology governance. Companies offering platforms for user-generated content should reassess their monitoring and moderation practices to mitigate the risks posed by AI-generated or manipulated media.
Risk management. The manipulation of personal data through AI not only raises potential criminal liability but could also trigger regulatory sanctions under Brazil's General Data Protection Law (LGPD) if personal data is processed unlawfully or without adequate security measures.
Brazil's Law No. 15.123/2025 aligns with a broader global movement to regulate the harmful applications of AI technologies. Similar concerns are reflected in initiatives like the EU AI Act, which imposes transparency obligations for deepfake content, and legislative efforts in the U.S. targeting the malicious use of synthetic media.
This new legal development reinforces a growing expectation that AI governance must go beyond ethical principles and include clear accountability measures, especially when the manipulation of personal data is involved.
Law No. 15.123/2025 marks an important step in Brazil's evolving approach to AI regulation, specifically addressing the risks of using technology to perpetrate psychological harm. It serves as a timely reminder for privacy, cybersecurity and compliance teams to carefully consider emerging layers of legal exposure stemming from advanced technological tools.
Tiago Neves Furtado, CIPP/E, CIPM, CDPO/BR, FIP, leads the Data Protection and Artificial Intelligence Team and the Incident Response Team at Opice Blum Advogados.