Investigations into nonconsensual explicit deepfakes produced by Grok, social platform X's AI chatbot, are intensifying.
The situation escalated in France, where cybercrime prosecutors and Europol reportedly raided X's Paris office and summoned X Executive Chair and Chief Technology Officer Elon Musk for questioning over Grok's deepfake generation. On the same day, the U.K.'s Office of Communications, Ofcom, and Information Commissioner's Office, the ICO, advanced targeted probes of their own against X.
According to The Wall Street Journal, French prosecutors indicated the search of X's office is part of an effort to bring the platform and its chatbot into compliance with national laws. Voluntary interviews with Musk, former X CEO Linda Yaccarino and other relevant employees are set for April, and those who forgo their interviews may face arrest.
French authorities are coordinating with a broader European Commission probe into alleged bias in X's content algorithm, which was recently expanded to include the Grok deepfake allegations. They are also exploring potential charges under national rules on the dissemination of child pornography and Holocaust denial.
During a recent European Parliament committee hearing on nonconsensual deepfakes, European Commission DG Connect Head of Unit for Digital Services and Platforms Prabhat Agarwal told lawmakers that specific deepfake renderings that may violate national law could be reported for expedited removal under Article 9 of the Digital Services Act. However, he questioned whether other areas of the bloc's digital rulebook could be more effective in addressing the Grok allegations and similar instances that may arise.
"We are challenged, not on the substance of whether Grok was generating illegal content, or whether this risk mitigation is effective or not, or whether this content is illegal, or whether this behavior is acceptable or not," he said. "It is whether we have used the right procedural vehicles in doing this."
U.K. ramps up
U.K. enforcers are exploring the Grok allegations on separate fronts. The ICO's new probe, launched 3 Feb., pertains to potential violations of the U.K. General Data Protection Regulation and the Data Protection Act, while Ofcom's ongoing X inquiry focuses on Online Safety Act compliance.
The ICO explained its investigation will look into "whether personal data has been processed lawfully, fairly and transparently, and whether appropriate safeguards were built into Grok’s design and deployment to prevent the generation of harmful manipulated images using personal data." In a statement, ICO Executive Director of Regulatory Risk and Innovation William Malcolm added his office has a role to play in cooperation with global partners, as "losing control of personal data in this way can cause immediate and significant harm."
"Our role is to address the data protection concerns at the centre of this, while recognising that other organisations also have important responsibilities," Malcolm said. "We are working closely with Ofcom and international regulators to ensure our roles are aligned and that people's safety and privacy are protected. We will continue to work in partnership as part of our coordinated efforts to create trust in UK digital services."
Ofcom opened its preliminary investigation 5 Jan. and remains engaged on the matter, noting in its 3 Feb. update that the office is "currently gathering and analysing evidence." The update drew clear distinctions about the Online Safety Act's application to AI chatbots, emphasizing that the probe focuses on X's use of Grok rather than the development of Grok by xAI, the artificial intelligence company affiliated with the social platform.
The case is expected to stretch across the next few months, with Ofcom offering a window into how its proceedings generally unfold.
"We must give any company we investigate a full opportunity to make representations on our case. If, based on the evidence, we consider that the company has failed to comply with its legal duties, we will issue a provisional decision setting out our views and the evidence upon which we are relying," the office wrote. "The company will then have an opportunity to respond to our findings in full, as required by the Act, before we make our final decision."
Against the backdrop of the U.K. enforcers' work, the U.K. government added a new rule to the Data (Use and Access) Act prohibiting AI image-generating services from offering capabilities that create "purported intimate images of an adult without consent or reasonable belief in consent." It also provides courts with "the power to make a deprivation order" over a nonconsensual image. The rule takes effect 6 Feb.
Joe Duball is the news editor for the IAPP.