Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
On 15 May 2025, Brazil's data protection authority, the Autoridade Nacional de Proteção de Dados (ANPD), published Technical Note No. 12/2025, summarizing the results of a public call for contributions on the use of artificial intelligence and automated decision-making.
The document supports ongoing regulatory efforts under the ANPD's 2025–26 Regulatory Agenda and reflects the growing impact of algorithmic systems on data protection and individual rights in Brazil.
At the heart of this discussion is Article 20 of Brazil's General Data Protection Law (LGPD), which guarantees individuals the right to request a review of decisions made solely through automated processing of personal data. While this right is already in force, its implementation in real-world AI scenarios remains unclear. The technical note helps move the conversation forward, offering insights into how this right might be interpreted and applied as AI technologies evolve.
The document draws from 124 contributions submitted by companies, civil society organizations, academic experts and public institutions. A shared concern is that automated decisions — especially those involving credit, employment or digital profiling — can have serious implications for individual autonomy, equality and privacy.
In this context, many contributors emphasized the importance of transparency, human oversight and algorithmic explainability. They noted that human review, when required, should be meaningful, not merely symbolic. Several responses pointed out that a reviewer must be able to actually understand the logic of an automated decision and, if necessary, reverse its outcome.
Still, the document highlights key areas of disagreement. There is no uniform definition of what constitutes a "solely automated" decision. While some argue that decisions derived from generative AI outputs may fall outside the scope of Article 20, others believe any automated outcome with real-world consequences should trigger the right to review.
Another point of tension involves how to reconcile the right to information with the protection of trade secrets. Many contributors advocated for a balanced approach — ensuring individuals receive clear and understandable information without requiring organizations to reveal proprietary algorithms or models.
When it comes to legal bases for processing personal data in AI systems, most stakeholders agreed that consent is difficult to implement in practice, especially when dealing with complex models and large-scale systems. Legitimate interest was viewed as a more adaptable legal basis, provided appropriate balancing tests and safeguards are in place. There was also a call for more guidance on the responsible use of data scraping and the handling of sensitive data, particularly in high-risk contexts.
Governance was another core theme. Contributors recommended that organizations document the entire AI life cycle, including training data, decision logic, safeguards and accountability structures. Data protection impact assessments — known in Brazil as personal data protection impact reports — were widely endorsed as a tool for identifying and mitigating risks, especially for high-impact use cases or applications involving vulnerable populations.
While the technical note does not impose new obligations, it offers a preview of how the ANPD may approach future regulation and enforcement. It emphasizes risk-based compliance, layered transparency, and the application of existing LGPD principles to new technological challenges.
It also sends a clear message: even in the age of advanced AI, data subjects must retain the ability to understand and contest decisions that affect their lives.
Tiago Neves Furtado, CIPP/E, CIPM, CDPO/BR, FIP, leads the Data Protection and Artificial Intelligence Team and the Incident Response Team at Opice Blum Advogados.