ANALYSIS

Brazil weighs AI monitoring of domestic violence offenders, raising privacy and governance concerns

Brazil is considering legislation that would use AI-powered electronic monitoring, predictive analytics and digital alerts to enforce domestic violence protective orders, raising important questions around privacy and AI governance.


Contributors:

Tiago Neves Furtado

CIPP/E, CIPM, CDPO/BR, FIP

Partner

Opice Blum

Brazil's National Congress is debating legislation that would introduce artificial intelligence into the enforcement of domestic violence protective orders.

Bill No. 750/2026, introduced by Sen. Eduardo Braga and currently before Brazil's Chamber of Deputies, would establish the National Program for Monitoring Aggressors Using Artificial Intelligence, combining electronic monitoring devices, behavioral analytics and real-time alerts to authorities. The proposal aims to prevent domestic violence, "ensuring the effectiveness of emergency protective measures" and "expanding the protection of victims through the use of digital technologies and artificial intelligence." 

The initiative emerges at a time when governments worldwide are debating how AI should be used in public safety contexts. From predictive policing initiatives to algorithmic risk assessment tools in criminal justice systems, policymakers are increasingly exploring AI technologies to anticipate threats and improve enforcement. At the same time, these initiatives raise complex questions about proportionality, transparency and the protection of fundamental rights.

Brazil's proposal illustrates both the potential benefits and governance challenges of AI-driven public policy.

Domestic violence remains a serious and persistent issue in Brazil. Each year, courts issue hundreds of thousands of protective measures under the Maria da Penha Law, the country's primary legal framework addressing violence against women. These orders typically prohibit aggressors from approaching victims or specific locations.

In practice, however, enforcing these measures can be difficult. Violations often become known only after victims report them, and response times may vary depending on local law enforcement capacity.

How it would work 

The proposed monitoring program under Bill No. 750/2026 seeks to address this gap by combining electronic monitoring technologies with AI systems capable of identifying violations in real time.

Under the proposal, aggressors subject to court orders could be required to wear electronic monitoring devices such as ankle bracelets. These devices would connect to a centralized monitoring platform capable of continuously tracking the aggressor's location and detecting when court-imposed distance restrictions are violated.

If the monitored individual approaches restricted locations or attempts to tamper with the monitoring device, the system would automatically generate alerts for authorities responsible for enforcement.

The objective is straightforward: reduce the time between the breach of a protective order and the intervention of law enforcement.
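The core enforcement loop described above can be sketched in a few lines. The function names, event labels and the 500-meter threshold below are illustrative assumptions, not details taken from Bill No. 750/2026:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_fix(fix, zones, min_distance_m=500):
    """Return alerts when a GPS fix breaches a court-ordered exclusion
    zone or the device reports tampering (hypothetical schema)."""
    alerts = []
    if fix.get("tamper_flag"):
        alerts.append(("TAMPER", None))
    for zone in zones:
        d = haversine_m(fix["lat"], fix["lon"], zone["lat"], zone["lon"])
        if d < min_distance_m + zone.get("radius_m", 0):
            alerts.append(("ZONE_BREACH", zone["id"]))
    return alerts
```

In a real deployment this check would run continuously on a monitoring platform, with the alert tuples routed to the responsible enforcement authority rather than returned to a caller.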

The proposal also introduces a digital safety tool designed for victims.

The legislation provides for the development of an official mobile app for individuals protected by judicial measures. Through the app, victims could trigger emergency alerts, share their location with authorities and receive notifications if a monitored aggressor approaches restricted areas.

The app could also provide access to a record of alerts and monitoring events related to the case.

Importantly, the proposal establishes that victims' use of the app would be voluntary and dependent on explicit consent. The system must also comply with applicable information security and data protection safeguards.

These features reflect a growing interest in technology-enabled victim protection. Rather than relying solely on reactive law enforcement, digital tools may help create a more proactive protection model capable of identifying risks earlier and facilitating faster responses.

One of the most innovative aspects of the proposal is the creation of a national database designed to analyze behavioral patterns of monitored aggressors using machine learning techniques.

Data generated by monitoring devices could be analyzed to identify patterns that indicate an increased risk of violence. Examples may include repeated attempts to approach restricted areas, unusual movement patterns or indications that monitoring devices have been tampered with.

If such patterns are detected, authorities could receive alerts even before a formal violation occurs.

This predictive approach reflects a broader shift toward data-driven public safety policies. Similar monitoring and predictive technologies have been explored in jurisdictions such as Spain, the U.K. and parts of the U.S., particularly in initiatives aimed at improving enforcement of protective orders and preventing domestic violence.

However, predictive technologies used in public safety contexts are also among the most sensitive applications of AI.

Systems that attempt to infer risk from behavioral data raise important questions regarding transparency, fairness and accountability. These concerns are central to global discussions on responsible AI governance.

Embedding AI governance safeguards

Brazil's proposal acknowledges some of these challenges by establishing governance requirements for the AI systems used within the program.

According to the legislation, algorithms used in the monitoring system must follow principles such as explainability, auditability, mitigation of discriminatory bias and human supervision over automated processes.

These requirements align with broader international discussions on responsible AI deployment, including frameworks such as the Organisation for Economic Co-operation and Development's AI Principles and emerging regulatory approaches such as the EU AI Act.

In the European regulatory framework, for example, certain AI systems used in law enforcement and risk assessment are classified as high-risk applications, subject to enhanced transparency, oversight and governance requirements.

Translating similar principles into operational safeguards will likely require detailed technical implementation and oversight mechanisms in Brazil as well.

Privacy, proportionality and data protection

From a privacy perspective, the proposed program would involve the processing of significant volumes of personal data.

Continuous geolocation monitoring, behavioral pattern analysis and records associated with judicial protective measures all involve sensitive information that must be handled carefully.

The proposal explicitly states that personal data processing within the program must comply with Brazil's General Data Protection Law, and that collected data may only be used for the purposes defined by law.

While this requirement provides an important legal safeguard, several operational questions remain.

Continuous geolocation monitoring raises questions about proportionality and data minimization. Retention policies for monitoring data will also be critical, particularly if behavioral analytics are used to identify patterns over extended periods.

Access control will also be essential. The program will likely involve coordination among multiple institutions, including law enforcement agencies, prosecutors and the judiciary. Ensuring that sensitive monitoring data is accessed only by authorized actors will be crucial for maintaining trust in the system.

Algorithmic accountability will also play an important role. If predictive analytics influence enforcement decisions or risk assessments, oversight mechanisms should exist to audit the system and address potential errors or biases.

The proposed bill also highlights the importance of institutional cooperation. The monitoring system would require coordination among security agencies, the judiciary, prosecutors and victim support services.

Such cooperation is essential for the program to function effectively. At the same time, multiagency systems often create complex environments for data sharing and governance.

Ensuring clear accountability structures, strong cybersecurity protections and transparent oversight mechanisms will be critical.

AI-enabled public policy

Brazil's proposal ultimately reflects a broader global movement toward AI-enabled public policy. Technologies such as electronic monitoring, predictive analytics and automated alerts are increasingly being explored as tools to strengthen public safety strategies.

At the same time, these technologies introduce new challenges related to surveillance, algorithmic decision-making and the balance between safety and fundamental rights.

Brazil's proposed monitoring program, therefore, illustrates a broader dilemma facing policymakers worldwide: how to harness the preventive potential of AI while ensuring that data protection, transparency and fundamental rights remain central to the design of AI-driven public safety systems.


