One year after DeepSeek's R1 model triggered what Marc Andreessen called "AI's Sputnik moment," the regulatory dust has begun to settle — and the picture that emerges carries important implications for cross-border data flows and the reach of European data protection enforcement.

Together with Pankaj Raj, I have published a comprehensive study documenting the global regulatory reactions to DeepSeek over the past 12 months. The findings are sobering. Western regulators mounted an unprecedented enforcement mobilization — Italy's data protection authority, the Garante, imposed a ban within 72 hours; investigations followed in 13 European jurisdictions; the European Data Protection Board created a dedicated AI Enforcement Task Force; and government device bans proliferated from Washington to Canberra. Yet the global outcome has been starkly bifurcated. DeepSeek's downloads increased 960% during the very period of peak regulatory pressure, and the platform now commands dominant market shares in China (89%) and in countries under Western sanctions, such as Belarus, Cuba and Russia, along with meaningful footholds throughout Africa and the Global South.

But beyond these headline findings, the DeepSeek episode raises fundamental questions about how we conceptualize and regulate cross-border data flows in the age of AI — questions that deserve careful analysis.


The 'transfer' framing: Legally precise or conceptually confused?

Several European data protection authorities framed their concerns about DeepSeek in terms of personal data being "transferred" to China without adequate safeguards under EU General Data Protection Regulation Chapter V. The Berlin Commissioner for Data Protection and Freedom of Information's June 2025 press release, for instance, stated that DeepSeek transfers personal data to "Chinese data processors" and stores it on servers in China, emphasizing the absence of an adequacy decision and the risk of access by Chinese authorities.

The concern is legitimate. But is the legal framing correct?

Answering this question requires understanding how personal data actually reaches China when Europeans use DeepSeek. The company does not operate through a European establishment that subsequently exports data to headquarters in China — the classic transfer scenario. Instead, DeepSeek collects data directly from users: when someone opens an account or interacts with the chatbot, their data flows immediately and directly to servers in China. There is no intermediate EU-based controller or processor; the Chinese entity is the sole controller from the moment of collection.

The evolution of DeepSeek's privacy policy is instructive here. The original version in place at the European launch, dated 5 Dec. 2024, was notably silent on data storage location. The 14 Feb. 2025 update introduced an EEA-specific section that acknowledged the reality: "Please be aware that our servers are located in the People's Republic of China. When you access our services, your personal data may be processed and stored in our servers in the People's Republic of China. This may be a direct provision of your personal data to us or a transfer that we or a third-party make." This language, maintained in the current version, 22 Dec. 2025, is notable for explicitly recognizing both possibilities. The reference to "direct provision" acknowledges the primary data flow when users interact with DeepSeek's chatbot, while the reference to "transfer" may contemplate scenarios involving third parties or future operational arrangements. The current policy also states more directly for worldwide users: "To provide you with our services, we directly collect, process and store your Personal Data in People's Republic of China."

As I argue in the full study and have explored in greater depth in a separate analysis on the European Law Blog, there is an important distinction between data transfers under Chapter V and direct collection by a non-EU controller.

The EDPB's Guidelines 05/2021 establish a cumulative three-part test for what constitutes a "transfer" under Chapter V. Critically, Example 1 in those guidelines addresses precisely the DeepSeek scenario: a non-EU controller that directly collects data from EU users while targeting them under Article 3(2) GDPR. The EDPB's conclusion is unambiguous — Chapter V "does not apply" in that configuration.

This does not mean DeepSeek escapes GDPR obligations. Far from it. When DeepSeek offers its chatbot service to European users through its website or app, and collects data directly from them to store in China, the company remains subject to:

  • Article 5 requirements on lawfulness and purpose limitation.
  • Article 6 obligations to establish a valid legal basis for processing.
  • Chapter III transparency, information duties and data subject rights.
  • Article 32 security obligations — which acquire particular significance given the Chinese legal environment.
  • Article 27 requirement to designate an EU representative, which DeepSeek only fulfilled in late May 2025, almost five months after the Italian ban.

Effective enforcement requires precise legal framing. Invoking Chapter V transfer mechanisms against direct-collection scenarios risks building enforcement strategies on questionable legal foundations.

Online use vs. local deployment: Two different — but overlapping — risk profiles

The regulatory debate has largely focused on DeepSeek's consumer-facing chatbot — the web interface and mobile app millions downloaded in the days following "DeepSeek Monday." In this configuration, user prompts and personal data are transmitted to and processed on DeepSeek's servers in China. This is the scenario that triggered DPA investigations, Italy's ban and warnings from Lithuania, Luxembourg, the Netherlands and others advising users against inputting personal or sensitive data.

But DeepSeek also released its models under an MIT open-source license, enabling organizations to download and deploy the model locally — on their own infrastructure, without any data flowing to China.

What local deployment eliminates 

Self-hosting does address the cross-border data flow concern. User prompts remain on the organization's own servers. There is no transmission to DeepSeek's infrastructure in China, no exposure to Chinese national security laws compelling disclosure, and no reliance on DeepSeek's security practices, which, as Wiz Research documented in January 2025, included a publicly accessible database containing over a million lines of chat histories and API secrets. For organizations whose primary concern is data sovereignty, local deployment represents a meaningful risk mitigation.
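To make the self-hosting pattern concrete, it can be sketched as a client that only ever talks to a locally hosted, OpenAI-compatible inference server. The endpoint URL and model name below are illustrative assumptions, not DeepSeek's own API:

```python
# A minimal sketch of the self-hosted pattern: prompts go only to an
# inference server inside the organization's own network. The endpoint
# URL and registered model name are assumptions (e.g., a vLLM or
# similar OpenAI-compatible server running on-premises).
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_local_request(prompt: str) -> dict:
    """Build a chat request destined only for the local endpoint."""
    return {
        "model": "deepseek-r1",  # assumed name the local server registers
        "messages": [{"role": "user", "content": prompt}],
    }

# The payload would then be serialized as JSON and POSTed to
# LOCAL_ENDPOINT; no request ever leaves the organization's network.
```

Because the endpoint resolves to the organization's own infrastructure, the cross-border flow that concerned European DPAs simply never occurs in this configuration.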

What local deployment does not eliminate 

Multiple security evaluations have documented vulnerabilities that are inherent to the model weights themselves — and therefore persist regardless of where the model runs.

The U.S. Center for AI Standards and Innovation evaluation published in October 2025 is particularly instructive because it tested models on locally run weights rather than vendor APIs, meaning the findings reflect the base systems themselves, not DeepSeek's cloud infrastructure. The results were stark:

  • Jailbreaking vulnerability: Using 17 well-known jailbreaking techniques, DeepSeek models complied with 94–100% of overtly malicious requests across domains including harmful biology, violent activities, hacking, scamming and fraud. U.S. reference models complied with only 5–12% of the same requests.
  • Agent hijacking susceptibility: When deployed as AI agents, DeepSeek models were 12 times more likely than U.S. frontier models to follow malicious instructions designed to derail them from user tasks. In simulated environments, DeepSeek V3.1 was hijacked to send phishing emails in 48% of tests compared to 0% for GPT-5. The most robust DeepSeek model evaluated (R1-0528) was hijacked by malicious text and attempted to exfiltrate users' login credentials in 37% of cases compared to an average of 4% for evaluated U.S. frontier models (GPT-5 and Opus 4).
  • Embedded censorship and CCP narrative alignment: CAISI found that DeepSeek models echoed inaccurate and misleading Chinese Communist Party narratives four times more often than U.S. reference models. Critically, the report noted that "because the weights run locally, these censorship patterns appear baked into the model rather than applied as external service filters." The censorship is structural, not infrastructural.

These findings align with earlier assessments by multiple independent security firms. Kela Cyber demonstrated in January 2025 that DeepSeek R1 remained vulnerable to the "Evil Jailbreak" technique — a method that OpenAI patched in ChatGPT years earlier — enabling the model to generate infostealer malware, ransomware instructions and detailed guidance for creating toxins. Palo Alto Networks' Unit 42 found R1 vulnerable to three distinct jailbreaking techniques: Crescendo, Deceptive Delight and Bad Likert Judge. WithSecure's SPIKEE benchmark ranked DeepSeek-R1 17th out of 19 tested models for resistance to prompt injection attacks, with a 77% attack success rate compared to 27% for OpenAI's o1-preview. 

Supply chain risks in local deployment

Organizations considering self-hosted deployments face an additional category of risk: supply chain integrity. Protect AI found no vulnerabilities in the official DeepSeek-R1 weights uploaded to HuggingFace. However, the researchers identified unsafe fine-tuned variants of DeepSeek models "that have the capability to run arbitrary code upon model loading or have suspicious architectural patterns." With over 1,000 derivative models now returned by "deepseek-r1" searches on HuggingFace, the risk of downloading a malicious variant is non-trivial. Furthermore, as HiddenLayer noted, the official model configuration requires trust_remote_code=True to be set — a flag that allows execution of arbitrary Python code and should always prompt caution.
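One practical mitigation is to vet loads before any weights are downloaded. The sketch below is hypothetical tooling, not part of HuggingFace or any vendor library: it checks a requested repository against an organization-maintained allowlist and refuses trust_remote_code outright.

```python
# Hypothetical pre-download guardrail for self-hosted model loading:
# allow only vetted repositories, and never permit trust_remote_code,
# which lets a repository execute arbitrary Python at load time.
APPROVED_REPOS = {"deepseek-ai/DeepSeek-R1"}  # example allowlist

def vet_model_load(repo_id: str, *, trust_remote_code: bool = False) -> None:
    """Raise ValueError if the requested load violates policy."""
    if repo_id not in APPROVED_REPOS:
        raise ValueError(f"{repo_id!r} is not an approved model repository")
    if trust_remote_code:
        raise ValueError("trust_remote_code=True is disallowed by policy")

# Called before any from_pretrained(...) download, this would reject an
# unvetted fine-tuned variant while letting the official weights through.
```

A check like this addresses the derivative-model risk, but note that it sits in tension with HiddenLayer's observation: if the official configuration itself demands trust_remote_code=True, organizations must either audit the remote code or refuse the load.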

The bottom line for organizations 

Local deployment of DeepSeek models eliminates the cross-border data flow concern but does not eliminate the security vulnerabilities documented by CAISI, Kela, Unit 42, WithSecure, and others. Organizations must weigh the jailbreaking and agent hijacking vulnerabilities inherent in the model weights; the embedded censorship patterns that persist regardless of deployment location; the supply chain risks of derivative models and code execution requirements; and their own tolerance for deploying a model that independent evaluations have found substantially less secure than Western alternatives. As one expert quoted after the release of the NIST report observed: "Open source distribution and local hosting can mitigate some security and privacy concerns, but censorship features remain intrinsic."

DeepSeek's failure to address European concerns

One of the findings of our research is the asymmetry in DeepSeek's regulatory engagement. In South Korea, facing the Personal Information Protection Commission's muscular enforcement track record — including precedents ordering destruction of AI models trained on unlawful data — DeepSeek cooperated expeditiously. The company acknowledged its failures, voluntarily suspended service, implemented an opt-out mechanism for training data, blocked transfers to Beijing Volcano Engine Technology and resumed service within 10 weeks.

In Europe, DeepSeek's engagement has been more reluctant and reactive. The company initially claimed EU law did not apply to its operations — a position rejected by every DPA that considered it. An EU representative was appointed belatedly, and only after Greek enforcement action compelled it. On the specific question of training data, DeepSeek's current privacy policy does provide EU users with "the right to opt-out of using your Personal Data for training our models or optimizing our technologies," and the Terms of Use reference a setting to turn off "Improve the model for everyone." It remains to be seen, however, how European DPAs can verify that this and other data subject rights requests have been effectively implemented.

On the question of data storage in China and its implications, DeepSeek has taken no meaningful steps to address European concerns. The company has not stopped the direct collection of European personal data or its storage and processing in China; it has not established data localization options for European users; to the extent that Chapter V transfers do take place, it has not stated any intention to conduct a Transfer Impact Assessment, nor implemented supplementary measures to address the absence of an EU adequacy decision for China; and it has not provided transparency about potential access by Chinese authorities under national security laws.

Italy’s ban remains in force one year later. The Berlin Commissioner's attempt to leverage DSA Article 16 notices to Apple and Google — characterizing DeepSeek as "illegal content" and urging app store removal — was declined by both platforms. DeepSeek continues operating globally while European enforcement remains at an impasse.

Conclusion: The limits of extraterritorial enforcement

The DeepSeek episode illuminates a structural challenge for European data protection enforcement. The GDPR's extraterritorial reach under Article 3(2) depends on a fundamental assumption: that companies wishing to serve European users will ultimately comply with European rules to maintain market access.

DeepSeek has tested that assumption and found it wanting. By serving the unregulated majority of the world's population while treating Western markets as optional, the company has achieved massive global scale while explicitly claiming, at the initial stages, that it is "not subject to European law."

This is not an endpoint but a harbinger. As China's open-weight AI ecosystem matures — with 63% of new fine-tuned models on Hugging Face now built on Chinese base models — the questions raised by DeepSeek's first year will only intensify. The regulatory tools developed for an era of Western technological dominance may prove inadequate for what comes next.

Théodore Christakis is the chair, legal and regulatory implications of AI, Multidisciplinary Institute in AI at the University of Grenoble Alpes.