Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Italy's data protection authority, the Garante, reaffirmed its ban on the generative artificial intelligence chatbot Replika, citing persistent violations of the EU General Data Protection Regulation and ongoing risks to minors and vulnerable users.

This latest enforcement action, detailed in the Garante's April 2025 decision, underscores the regulator's commitment to safeguarding children's privacy, enforcing fundamental GDPR principles and scrutinizing the deployment of nascent generative AI technology.

Background and initial enforcement

Replika, developed by San Francisco-based Luka Inc., is a large language model chatbot designed to serve as an AI "companion," offering users emotional support and conversational engagement. Replika describes itself as "The AI companion who cares" that is "always here to listen and talk."

In February 2023, the Garante issued an urgent measure, No. 39/2023, Reg. No. 18321/2023, restricting Replika's data processing activities in Italy pursuant to Article 58(2)(f) of the GDPR. The authority found Replika posed significant risks to minors, lacked effective age verification mechanisms — asking only for name, email address, and gender — and failed to comply with transparency obligations under Articles 5, 6, 8, 9 and 25.

The Garante highlighted that Replika's processing of personal data was unlawful, as it could not rely on contractual necessity as a legal basis when dealing with minors, who are legally incapable of entering into binding contracts under Italian law. The DPA also began a fact-finding investigation that ran parallel to Luka's attempts at remediation.

Subsequently, in June 2023, the Garante issued decision No. 280, temporarily limiting Replika's processing pending implementation of remedial and corrective measures — including updating Replika's privacy notice in several ways, adding an age-gate mechanism to the service's registration pages and implementing other measures designed to protect users.

April 2025 decision: Continued non-compliance

Despite the initial enforcement decision laying out Luka's deficiencies and the required remediation steps in detail, the April 2025 decision states that Luka did not implement sufficient corrective measures to address the identified violations.

Among other things, Luka's privacy notice remained deficient: it lacked a sufficiently granular description of the legal basis for processing personal data in connection with Replika, failed to link valid legal bases to specific processing operations and, until February 2023, did not identify the development of the LLM powering the chatbot.

Luka's privacy policy was also accessible only in English, including for minors in Italy, and contained a reference to compliance with the U.S. Children's Online Privacy Protection Act, a law irrelevant to operations in Italy, rendering the policy ill-suited to its audience.

Furthermore, Replika's age-gating mechanism had significant implementation and assurance flaws. Notably, users could circumvent the age gate by submitting a fake age over 18 at registration and later editing their profile, with no oversight or secondary confirmation from Luka.
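To make the flaw concrete, the following is a minimal, hypothetical sketch of server-side age-gate enforcement that closes the profile-edit loophole. It is written in Python with invented names such as UserProfile and AgeGateError; it does not reflect Luka's actual implementation. The idea is that the birthdate captured at registration is retained, and any later edit that would reclassify an adult account as a minor is rejected pending re-verification rather than silently accepted.

```python
from datetime import date

ADULT_AGE = 18  # minimum age for unrestricted access in this sketch


class AgeGateError(Exception):
    """Raised when a registration or profile edit fails the age gate."""


def age_on(birth_date: date, today: date) -> int:
    """Return the user's age in whole years as of `today`."""
    years = today.year - birth_date.year
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1  # birthday has not yet occurred this year
    return years


class UserProfile:
    """Retains the birthdate captured at registration and validates
    edits server-side, so a user cannot register with a fake adult age
    and then quietly "correct" it in their profile without consequence."""

    def __init__(self, birth_date: date):
        if age_on(birth_date, date.today()) < ADULT_AGE:
            # Minors are routed to a separate consent/verification
            # flow, not silently admitted.
            raise AgeGateError("registration requires age verification or parental consent")
        self._birth_date = birth_date

    def update_birth_date(self, new_birth_date: date) -> None:
        # An edit that would reclassify an adult account as a minor is
        # rejected and escalated for re-verification, instead of being
        # accepted without any oversight.
        if age_on(new_birth_date, date.today()) < ADULT_AGE:
            raise AgeGateError(
                "edit would reclassify the user as a minor; "
                "account suspended pending re-verification"
            )
        self._birth_date = new_birth_date
```

The point of the sketch is the design principle the Garante's decision implies: an age assertion must be validated and re-checked on the server whenever it changes, not treated as a one-time, freely editable profile field.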

As a result of these continuing deficiencies, the Garante found Luka's processing unlawful, ordered it to bring Replika's privacy policy and age-gating mechanism into compliance within 30 days, and imposed an administrative fine of 5 million euros.

U.S. FTC complaint against Replika

Replika's issues have not been isolated to the Garante's enforcement actions. In January 2025, the Young People's Alliance, Encode and the Tech Justice Law Project filed a comprehensive 67-page complaint with the U.S. Federal Trade Commission. The complaint alleged deceptive marketing and design practices, in violation of the FTC Act, relating to claims about mental health, income and wealth growth, language learning and personal relationships.

Perhaps most disturbingly, the complaint alleges Replika was designed to deliberately foster emotional dependence in users through its companion chat interactions and simultaneously attempted to entice and retain users with fabricated testimonials and the misrepresentation of scientific research about the app's efficacy.

Ethical concerns and broader implications

Replika's offering of AI companions that can assume emotionally charged roles such as "boyfriend" or "girlfriend" raises significant ethical concerns, especially when minors can access these features.

In addition to these baseline concerns, Replika's design appears to encourage rapid emotional bonding by initiating conversations on topics such as love and affection, offering virtual gifts and sending frequent affectionate messages during extended user engagement. These features may amplify the emotional impact of the chatbot and heighten the associated risks, particularly for vulnerable users.  

These risks are not isolated to Replika but appear to manifest across the companion-app industry. Another recent companion chatbot case involved a Florida mother who filed a wrongful death lawsuit against Character.AI and Google after her 14-year-old son died by suicide. The lawsuit alleges the AI chatbot, impersonating a character from "Game of Thrones," fostered an emotionally abusive relationship with the teen that included "anthropomorphic, hypersexualized, and frighteningly realistic experiences," contributing to his mental decline.

These cases highlight a growing and legitimate concern about AI chatbots and the psychological harm they can cause, especially when such tools are marketed to or accessible by children. They underscore the need for robust, adequately enforced age verification, transparency and ethical design to protect vulnerable users.

Implications for AI implementation and compliance

The enforcement actions and complaints against Replika underscore the critical importance of robust AI governance frameworks, regulatory protections and the role of enforcement agencies like the Garante and FTC.

Organizations developing or deploying AI applications, especially those accessible to children, must prioritize transparency, compliance with legal requirements, internal and external accountability, and ethical considerations to safeguard users' well-being.

Frederick C. Bingham, CIPP/A, CIPP/C, CIPP/E, CIPP/US, CIPM, CIPT, is vice president, technology and privacy counsel at Skydance Media, LLC. The views expressed in this article belong solely to the author.