Following the well-known “Schrems II” case in the Court of Justice of the European Union, the policy objective behind recent regulatory interpretations of the EU General Data Protection Regulation transfers restriction is to prevent third-country authorities’ “excessive” access to EU residents’ data. But in this context, what matters more than physical data location is control of logical access to intelligible data.
Given the existence of networking and the internet, a person can have intelligible access to data remotely without having to be physically in the same location, or even the same country, as that data. So, what matters most is authorities’ practical ability — through legal compulsion or otherwise — to require disclosure from whoever controls intelligible access, even if that person is in a different physical location from the data concerned and only has remote intelligible access. This may explain some unease regarding processing by U.S. parent companies’ European Economic Area affiliates. However, there is a bigger picture.
Globalization and deglobalization
Preventing all third countries from gaining effective jurisdiction over EEA organizations or their data isn’t possible unless those organizations never leave the EEA, because expansion outside the EEA subjects them to foreign jurisdiction. Those doing business in a country can’t ignore its laws and practices. But, does the EEA really want to make EEA organizations operate only within the EEA? Wouldn’t it want businesses to expand internationally, or is the true aim deglobalization, retreating and raising drawbridges? Regional supply chains are important given the pandemic and current geopolitical situation, but this raises the question of whether deglobalization is the best option, or even possible.
Data localization or protectionism?
Even the European Commission cautioned various third countries “against certain misconceptions of data protection that can lead to the introduction of protectionist measures such as forced localisation requirements.”
Moreover, the dire state of security in information and communications technology means inimical nation-states and cybercriminals (often state-sponsored) constantly gain intelligible access to EEA organizations’ personal data, even if hosted only within the EEA by EEA-established organizations without non-EEA connections. This threat to democratic societies is huge. EEA data location can’t guarantee security, or even enhance it.
Conversely, non-EEA-hosted data can be kept confidential, even from hosting providers, through encryption or techniques like European Data Protection Board-recognized multiparty computation, not-so-recognized homomorphic encryption (which enables processing of encrypted data) or confidential computing and trusted execution environments (which allow processing only within trusted environments that even hosting providers can’t access).
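To make one of these techniques concrete, here is a minimal, illustrative sketch of additive secret sharing, the basic building block behind multiparty computation. The function names and the three-party setup are hypothetical, chosen only for illustration: each party splits its private number into random shares that individually reveal nothing, yet the shares can be combined to compute a joint total without any party ever disclosing its raw value.

```python
import secrets

P = 2**61 - 1  # public prime modulus; every share is a uniform value mod P

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into additive shares: any single share alone is random noise."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)  # last share makes the sum come out right
    return shares

def reconstruct(shares: list[int]) -> int:
    """Sum the shares mod P to recover the shared value."""
    return sum(shares) % P

# Three parties each hold a private number; no single share reveals any of them.
inputs = [12, 30, 7]
all_shares = [share(v, 3) for v in inputs]

# Each party locally sums the one share it received per input; combining the
# per-party partial sums yields the total while the inputs stay hidden.
partials = [sum(col) % P for col in zip(*all_shares)]
assert reconstruct(partials) == sum(inputs)  # the joint total, 49
```

The same idea generalizes: because the sharing is linear, any sum or weighted sum can be computed on shares alone, which is why hosting providers holding only shares never see intelligible data.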
Attackers can’t be stopped by preventing transfers and requiring EEA-only hosting involving EEA-only organizations. However, attackers can be deterred by ensuring EEA organizations apply good technical, organizational and contractual security measures. Realistically, big U.S. cloud and technology vendors implement some of the best security measures available and still pioneer innovative security/privacy-enhancing technologies. Banning EEA organizations from using vendors with better security measures than their own seems counterproductive.
Everything is 'personal data'
Given the GDPR’s broad “personal data” definition, requiring absolutely no risk of third-country authorities attributing something like IP addresses to individuals is like asking me, dodgy knees and all, to jump to the moon. Denmark’s regulator said using U.S.-based servers allows linking of IP addresses to individuals via server firewall logs. That’s no more convoluted than how the Court of Justice of the EU previously suggested IP addresses could be linked to individuals. However, the key point isn’t server location or even attribution.
Real life is inherently risky
EEA regulators take a zero-risk approach to transfers. But, in real life, zero risk is impossible. Instead of banning vehicles because of a theoretical non-zero risk one could hit a child, we minimize risks through legal (speed limits, fines and jail time), organizational (crossing guards, teaching children about safety) and technical (crosswalks) measures.
Despite the zero-risk regulatory views, the GDPR’s approach to transfers and more is generally risk-based, as reflected in its provisions, preparatory documents, predecessor Data Protection Directive and early transfer-related regulatory guidance and decisions, including the Court of Justice of the European Union's pragmatic early decision.
Laws aren’t perfect: 'good enough' needn’t be 'essentially equivalent'
Post-“Schrems II,” certain standard contractual clauses can be used to legitimize transfers outside the EEA only if the destination country’s data protection laws have been assessed as “essentially equivalent” to the EU’s through what’s become known as transfer impact assessments. If a country’s laws are not found to be essentially equivalent, supplementary technical, organizational and/or contractual measures must be implemented to protect transferred data sufficiently. Otherwise, the data cannot be transferred.
Such assessments are not easy in practice. How close must third-country laws be to be considered “essentially equivalent?” If our speed limit is 25 mph and yours is 30 mph, would we ban our vehicles or kids from your roads? What about international comity? Advancing cross-border enforcement cooperation would be more effective than trying to force third countries to make their laws essentially equivalent to the EU’s.
Such cooperation is growing for data protection laws, as illustrated by the growth of the Global Privacy Assembly’s Global Cross Border Enforcement Cooperation Arrangement. “Good enough” laws, or mutual agreements to enforce infringements in another country’s territory, seem more practical for enhancing overall global compliance with “good enough” laws than prohibiting transfers unless risks are zero.
If laws are determined “essentially equivalent” but practices fall short, the EDPB’s Recommendations 01/2020 require supplementary measures for transfers. If, conversely, laws are not essentially equivalent but in practice authorities don’t (or are very unlikely to) access transferred data, shouldn’t low practical risk be enough to allow transfers?
The recommendations acknowledged the latter, saying transfers are permitted without supplementary measures to countries lacking “essentially equivalent” laws “if you consider that you have no reason to believe that relevant and problematic legislation will be applied, in practice, to your transferred data and/or importer.” So, it’s unclear why EEA regulators’ enforcement decisions, in requiring zero risk for transfers, are backtracking from their own initially stated position.
The elephant in the room: Data will flow anyway
The GDPR exempts national security processing. Based on the EU Charter of Fundamental Rights, CJEU decisions like “Privacy International” and “Spacenet” restrict general and indiscriminate retention of communications data, even for serious crime or national security purposes. Nevertheless, EU intelligence agencies can and do exchange data with third-country intelligence agencies. Data still flows to third countries, whether or not their laws are “essentially equivalent.”
Prioritization of resources
Resources aren’t limitless. Consider the likelihood, scale and severity of consequences for individuals of different threats, and which is the average citizen’s greater concern: U.S. authorities’ theoretical ability to access IP addresses, EU employee data and publicly available social media posts, or hostile nation-states and cybercriminals amassing identities, financial data and intellectual property? Why prioritize data localization and transfer restrictions over security? Data requires better security, regardless of location.
Life must go on, even with an unlevel playing field
Data transfers are modern society’s lifeblood. Many businesses spend huge amounts of money trying to comply with “Schrems II” (though who wouldn’t want to help feed us privacy professionals?). Yet billions of transfers to countries without an “adequate” status occur every second, with impunity and without SCCs, TIAs, supplementary measures or any other recognized transfer tools. These transfers are made by exporters lucky enough not to be targeted by individual, nongovernmental organization or regulator investigations.
Is this “who gets away with it” situation fair or indicative of a “good law”? Are regulatory interpretations that make full compliance virtually impossible, short of halting internet usage, helpful or do they compound the problem? Will better data protection really result from data exporters creating more physical/digital paper by signing SCCs and documenting TIAs, or from stopping transfers to, say, the U.S., assuming they could do this in reality? Alternatively, regulators’ scarce resources could be spent on security measures like encryption, rate-limiting login attempts, long passwords, storing passwords hashed and salted, and zero-trust security steps like identity verification, multifactor authentication, least-privilege access rights, network segmentation and more.
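As one concrete illustration of the measures listed above, storing passwords hashed and salted takes only a few lines of Python standard library code. This is a sketch, not a complete implementation; the function names are hypothetical and the iteration count is illustrative only (real deployments should follow current published guidance):

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest): a fresh random salt plus a slow PBKDF2 hash.

    The per-user salt defeats precomputed rainbow tables; the high
    iteration count slows down brute-force guessing after a breach.
    """
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest with the stored salt; compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

Only the salt and digest are stored, never the password itself, so even an attacker who exfiltrates the database gains no intelligible credentials without expensive per-password guessing.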
Strong security alone obviously won’t prevent organizations with intelligible access from disclosing data to public authorities. There, commitments to resist disproportionate requests are also needed. But in democratic societies, some access by authorities is necessary to combat terrorism and serious crimes in order to protect rights and freedoms. Furthermore, other rights and freedoms under the Charter of Fundamental Rights of the European Union like the rights to non-discrimination and property, freedom of thought and expression, and even freedom to conduct a business are no less important than rights to privacy and data protection.
So, it seems to go too far to say that any possible access by authorities is excessive. The difficulty is where to draw the line. How much access must there be, and by what methods, before it counts as “excessive”? Can there be further well-tempered, global multistakeholder discussions on what’s “good enough,” rather than unilaterally imposing “essentially equivalent”?
It’s a historical accident
The GDPR’s transfers restriction was inherited from the DPD. The DPD’s 1990 draft applied only to personal data physically located in the European Community, the predecessor to the EU. Controllers could avoid DPD compliance by moving data outside EC territory, doing something unlawful with it there, then perhaps bringing it back. Thus, the transfers restriction was included as an anti-circumvention measure.
However, the DPD’s 1992 draft recognized basing jurisdiction on data location made no sense, even during the internet’s toddlerhood. Accordingly, it based jurisdiction instead on the controller’s establishment in the EC, or use of equipment physically located in the EC. Yet the final DPD retained the transfers restriction. Why? The European Commission considered it necessary to stop controllers using third-party processors outside the EC for unlawful processing.
Of course, that view rests on a misconception — controllers subject to the DPD must comply wherever the processing occurs, whether directly or through a processor. Because of that erroneous view, a transfers restriction made it into the DPD and thence the GDPR.
The GDPR’s transfers restriction is clearly now unnecessary for anti-circumvention purposes. It’s morphed into a stand-alone rule where compliance is insisted upon for its own sake, even if key GDPR principles can otherwise be met. Its enforcement nowadays seems motivated by concerns about third countries’ “excessive” access to EEA residents’ data. But as argued above, it’s not enough to focus on stopping transfers to third countries where there’s a theoretical risk of authorities’ access. Indeed, that diverts attention and resources away from security measures that would deter malicious attackers, third country or otherwise. There may be other political reasons, such as protectionism and the understandable desire to boost the EEA’s home-grown technology industry. However, that industry isn’t there yet and will probably take a long time to get there, assuming it can ever catch up with sector leaders. Generally, third-country vendors still offer better, more secure services, with more features and lower prices than similar EEA providers. Unsurprisingly, many EEA organizations feel they have no choice but to use them, particularly with uncertain economic times ahead.
To be clear: I’m not saying regulators don’t enforce security breaches. Of course they do. And I’m not suggesting third-country (or indeed EEA/U.K.) authorities should be allowed to collect data in bulk or conduct mass surveillance willy-nilly. Of course I’m not. But wouldn’t the limited time and resources of organizations, regulators, courts and governments be better directed toward implementing, and requiring the implementation of, security measures appropriate to data’s sensitivity and risks, and toward improving cross-border enforcement of privacy laws?
Rather than one-sidedly considering transfers only from a narrow, zero-risk perspective, wouldn’t it be more productive and realistic, if the goal is to improve overall protection for EEA/U.K. personal data wherever it is located, to focus instead on an equilateral “transfers triangle”: one side strong security (including zero-trust security), one side commitments by those with intelligible access not to disclose data indiscriminately, and one side mutual cross-border enforcement?
Dr. W. Kuan Hon, author of Data Localization Laws and Policy, is Of Counsel with Dentons and participates in the U.K. Department for Digital, Culture, Media & Sport’s International Data Transfers Expert Council. However, these are her personal opinions only. Thanks to Antonis Patrikios for his invaluable comments.