Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
With Brazil's artificial intelligence bill still under discussion in Congress, its data protection authority, the Autoridade Nacional de Proteção de Dados, has taken a proactive step by releasing Technology Radar No. 3.
Issued in late 2024, the publication outlines the DPA's perspective on generative AI and its alignment with the country's General Data Protection Law. While not legally binding, the document provides important guidance organizations should not overlook.
Generative AI models, such as large language models, rely on large volumes of data for training, fine-tuning and use. The ANPD sees the data life cycle of these models as closely connected to the processing of personal data. This includes collecting, processing, sharing and deleting data. Each step involves specific privacy risks that must be managed in line with LGPD requirements.
In the data collection phase, the ANPD highlights the widespread use of web scraping tools that gather content from across the internet — often without checking whether that content includes personal or sensitive data. These datasets are often used without proper filtering or anonymization. The ANPD reminds organizations that even publicly available information is still subject to LGPD principles, especially when it comes to necessity, transparency and good faith.
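As a purely illustrative sketch (not something the ANPD document prescribes), pre-training filtering of scraped text can start with redacting obvious personal identifiers before data enters a corpus. The patterns and function below are hypothetical examples covering email addresses and Brazilian CPF numbers; a real pipeline would need far broader coverage and legal review.

```python
import re

# Hypothetical patterns for two common identifiers in Brazilian datasets:
# email addresses and CPF numbers (formatted 000.000.000-00).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
CPF_RE = re.compile(r"\b\d{3}\.\d{3}\.\d{3}-\d{2}\b")

def redact_identifiers(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = CPF_RE.sub("[CPF]", text)
    return text

sample = "Contato: maria@example.com, CPF 123.456.789-09."
print(redact_identifiers(sample))  # Contato: [EMAIL], CPF [CPF].
```

Regex redaction of this kind is only a first pass; it does not address indirect identifiers or sensitive data in free text, which is precisely why the ANPD stresses necessity and good faith over mechanical filtering.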
During the processing stage, although trained models typically encode raw data within mathematical structures rather than storing it directly, there is still a risk of exposing personal data through techniques such as model inversion or membership inference attacks. Moreover, AI models can generate synthetic content that looks very real and, in some cases, may affect individuals' reputation, privacy or rights.
Data sharing further complicates things. People might enter personal data into prompts or upload documents with sensitive content. The AI's responses may also include details that resemble personal information. And when companies reuse or share pre-trained models, they might unknowingly carry forward risks hidden in the original dataset. These situations call for strong internal governance and clear agreements between developers, providers and users.
When it comes to deleting data, the ANPD points out generative AI doesn't follow a simple beginning-and-end life cycle. Once data enters a system — through training, prompts or uploads — it might be reused later during model updates or refinements. Organizations need to rethink when data use should end, how long it is reasonable to store data and whether user consent still applies as the system evolves.
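By way of illustration only (the ANPD prescribes no particular mechanism), one way to operationalize a retention boundary is to tag each stored record with its collection date and purpose, then purge anything past a defined window. The record schema and 365-day window below are hypothetical assumptions, not regulatory requirements.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical retention window; real values depend on the stated purpose.
RETENTION = timedelta(days=365)

@dataclass
class Record:
    subject_id: str
    collected_at: datetime
    purpose: str

def purge_expired(records: list[Record], now: datetime) -> list[Record]:
    """Keep only records still inside the retention window."""
    return [r for r in records if now - r.collected_at <= RETENTION]

now = datetime(2025, 1, 1)
records = [
    Record("u1", datetime(2024, 6, 1), "model fine-tuning"),
    Record("u2", datetime(2023, 1, 1), "model fine-tuning"),
]
kept = purge_expired(records, now)
print([r.subject_id for r in kept])  # ['u1']
```

Even a simple scheme like this forces the questions the ANPD raises: when the purpose expires, so should the data, including any copies reused for model refinement.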
The ANPD links these risks to key LGPD principles, like purpose limitation, necessity, transparency and accountability. The DPA recommends companies adopt technical and organizational safeguards and keep documentation — including data protection impact assessments — to show they are responsibly handling personal data.
Even without specific AI legislation, the ANPD's document helps fill in the gaps by showing how Brazil's current data protection rules apply to emerging technologies. For companies doing business in Brazil or handling residents' data, this is more than a policy note — it's a practical roadmap.
Technology Radar No. 3 isn't a list of rules, but it reflects the thinking of Brazil's privacy regulator. It is a valuable early guide for compliance and a strong indicator of what may come next for companies using generative AI.
In short, generative AI should be designed with privacy and data protection in mind from the start. According to the ANPD, innovation and regulation go hand in hand. In Brazil, doing both — responsibly and transparently — is already the expectation.
Tiago Neves Furtado, CIPP/E, CIPM, CDPO/BR, FIP, leads the Data Protection and Artificial Intelligence Team and the Incident Response Team at Opice Blum Advogados.