
The Privacy Advisor | German state DPA guidance on protected usable data post-'Schrems II'


""

""

Recent guidance from Germany’s Baden-Württemberg Commissioner for Data Protection and Freedom of Information (LfDI) instructs EU entities that they cannot lawfully transfer data to cloud, software-as-a-service or other technology providers that are not organized under the laws of the EU, the European Economic Area or a country providing equivalent protection, rendering U.S. and U.K. providers off-limits. This applies unless the “exporting” EU data controller and the technology data “importer” implement safeguards that overcome the inability of standard contractual clauses to protect data subjects’ fundamental rights against surveillance.

The LfDI guidance stipulates that “in cases where the controller cannot provide suitable protection even with additional measures, he must suspend/terminate the transfer; [and] if such an adequate level of protection is not ensured, the data protection supervisory authority must suspend or prohibit the transfer of data if the protection cannot be achieved by other mean[s].”

The implications of these requirements extend across the EU. Finland’s Office of the Data Protection Ombudsman recently sent letters to seven technology companies (three U.S. technology companies: Amazon Web Services, Microsoft and IBM; and four Finnish/Scandinavian technology companies: Telia, Elisa, Tietoevry and Valtori) requesting information about their response to the landmark “Schrems II” decision. The letters also reminded the companies that if they fail to stop transfers where data is not adequately protected, the data protection authority has both an obligation and clear authority to stop the processing.

The LfDI guidance goes on to state that “although a transfer based on standard contractual clauses is conceivable, it will only rarely meet the requirements that the ECJ has set for an effective level of protection;” and “in this case, the controller must provide additional guarantees that effectively prevent access by the U.S. secret services and thus protect the rights of the data subjects.” Under the guidance, this would be conceivable in the following cases: “encryption in which only the data exporter has the key, and which cannot be compromised even by U.S. secret services; [and] anonymization or pseudonymization, where only the data exporter can determine the assignment.”

We use the term “protected usable data” to encompass the requirement of the LfDI guidance that identification of data subjects is not possible (anonymization) or not possible without access to additional information (encryption or pseudonymization) retained by the EU-based data exporter. The guidance concludes with the warning that absent guarantees embodied in protected usable data, the Baden-Württemberg commissioner will prohibit the transfer of data based on standard contractual clauses.

The three forms of protected usable data identified in the LfDI guidance are encryption (impervious to surveillance), anonymization and pseudonymization (the latter two under the exclusive control of the data exporter).
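To make the first of these concrete, here is a minimal Python sketch of “encryption in which only the data exporter has the key”: the EU exporter encrypts records locally and ships only ciphertext to the non-EU provider, and the key never leaves the exporter’s environment. It is an illustration only; it uses the third-party cryptography package, and the record contents are invented.

```python
# Minimal sketch: exporter-held-key encryption before any transfer abroad.
# Requires the third-party package:  pip install cryptography
from cryptography.fernet import Fernet

# The key is generated and retained exclusively by the EU data exporter.
exporter_key = Fernet.generate_key()
cipher = Fernet(exporter_key)

record = b'{"name": "Anna Schmidt", "email": "anna@example.eu"}'
ciphertext = cipher.encrypt(record)  # only this ciphertext leaves the EU

# The non-EU importer stores bytes it cannot read; decryption happens only
# back inside the exporter's environment, where the key lives.
assert cipher.decrypt(ciphertext) == record
```

As NOYB’s position below notes, the trade-off is that the data has no utility to the importer while it remains encrypted.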

The EU Parliament highlighted this week that, because there is no regulatory grace period, the EDPB will not suspend enforcement; both binding corporate rules and standard contractual clauses require “additional measures to compensate for lacunae [gaps] in protection of third-country legal systems,” and, failing that, “operators must suspend the transfer of personal data outside the EU.” The EU Parliament stressed that absent additional measures capable of defeating “cryptanalytic and quantum computing efforts of intelligence agencies,” the advice from the Berlin, Hamburg and Dutch DPAs is to halt transfers to the U.S., with the Berlin DPA going further and advising the retrieval of all data from all services owned or operated by U.S. entities.

While not directly binding on EU DPAs, the published positions of Max Schrems’ privacy advocacy organization, NOYB, are worth reviewing: NOYB is the notably successful protagonist whose legal actions led to the invalidation of Safe Harbor in 2015 and Privacy Shield in July 2020. Most recently, in August 2020, NOYB filed complaints against 101 companies, all for failure to protect the rights of data subjects. NOYB takes the following positions on these matters:

  • Encryption: NOYB challenges the notion that encryption can be an effective technical safeguard given the purported ability of the U.S. government to break encryption. More importantly, though, with respect to meeting the challenges of “Schrems II,” encryption alone is not a viable solution because when encrypted, data is protected at rest and in transit but has no utility. And when decrypted to provide utility in use, data is no longer protected.
  • Anonymization: NOYB highlights the U.S. government’s use of “selectors” to surveil EU personal data. These selectors may be “strong” (like email addresses, IP addresses or phone numbers) or “soft” (like indirect identifiers, keywords or attributes that by themselves do not identify a particular person but, when combined with other data, can lead to reidentification). The use of selectors makes relying on anonymized data for “Schrems II” compliance difficult, if not impossible, as the sketch after this list illustrates.
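As a toy illustration of the soft-selector problem, the following sketch (all data invented) shows how a released record containing no direct identifiers can still be singled out by joining its indirect attributes against an external dataset:

```python
# Toy re-identification via "soft" selectors: the released record carries no
# name, email or phone number, yet its indirect attributes single out exactly
# one person in an outside dataset. All data is fabricated for illustration.

released = {"postcode": "70173", "birth_year": 1979, "employer": "Acme GmbH"}

external = [  # e.g., publicly available profiles gathered elsewhere
    {"name": "B. Weber",   "postcode": "70173", "birth_year": 1985, "employer": "Acme GmbH"},
    {"name": "K. Braun",   "postcode": "70173", "birth_year": 1979, "employer": "Acme GmbH"},
    {"name": "L. Fischer", "postcode": "10115", "birth_year": 1979, "employer": "Beta AG"},
]

matches = [p for p in external if all(p[k] == released[k] for k in released)]
if len(matches) == 1:
    print("Singled out:", matches[0]["name"])  # -> Singled out: K. Braun
```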

The reason anonymization has difficulty supporting the requirements for protected usable data is the prerequisite of determining the “perimeter of use” within which the data will be considered “anonymous.”

This prerequisite exists because adequately protecting data beyond a “perimeter of use,” so that no information from outside the perimeter can be used to reidentify individuals, requires such aggressive protection of indirect identifiers (i.e., the soft selectors defined above) via masking, data generalization, suppression and perturbation that the data retains minimal residual value.
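A hypothetical sketch of that utility loss: to keep soft selectors safe beyond any perimeter, each indirect identifier must be masked, generalized or suppressed. The fields and thresholds below are invented for illustration.

```python
# Sketch: "anonymizing" beyond a perimeter of use by aggressively treating
# indirect identifiers. Every transformation below destroys analytic signal.

def generalize(record):
    return {
        "postcode": record["postcode"][:2] + "***",       # masking/generalization
        "birth_year": (record["birth_year"] // 10) * 10,  # decade bucket
        "employer": None,                                 # full suppression
        "diagnosis": record["diagnosis"],                 # retained payload
    }

row = {"postcode": "70173", "birth_year": 1979,
       "employer": "Acme GmbH", "diagnosis": "J45"}
print(generalize(row))
# {'postcode': '70***', 'birth_year': 1970, 'employer': None, 'diagnosis': 'J45'}
# City-level, age-level and employer-level analyses are now coarse or impossible.
```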

And, in a classic Catch-22, if attempts are made to rescue data value while protecting it beyond the perimeter by enabling later reversal to promote reidentification, even under controlled conditions by the data controller, then the purported anonymization will fail to satisfy the definitional requirements of EU General Data Protection Regulation Recital 26, which reserves anonymity for “ ... information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable,” taking into account “all the means reasonably likely to be used, such as singling out, either by the controller or by another person.”

The highlighting of pseudonymization in the guidance underscores the importance of the GDPR’s widely unnoticed redefinition of the term. It no longer refers to the pre-GDPR privacy-enhancing technique of replacing direct identifiers with tokens to “anonymize” data within a perimeter. To comply with the new pseudonymization requirements in GDPR Article 4(5), both direct identifiers and indirect identifiers (i.e., “soft” selectors) must be protected using technical and organizational safeguards so that it is not possible to attribute data “to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organizational measures to ensure that the personal data are not attributed to an identified or identifiable natural person.”
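As a rough illustration of the Article 4(5) pattern, the hypothetical Python sketch below replaces both direct and indirect identifiers with random surrogate tokens and keeps the re-linking table, the “additional information,” separately with the EU data exporter. Field names and data are invented.

```python
# Sketch: GDPR-style pseudonymization. Tokens carry no information about the
# underlying values; the `lookup` table is the separately held "additional
# information" and never leaves the exporter's environment.
import secrets

lookup = {}  # exporter-held mapping: token -> (field, original value)

def pseudonymize(record, fields=("name", "email", "postcode")):
    out = dict(record)
    for f in fields:
        token = secrets.token_hex(8)      # fresh random token per occurrence
        lookup[token] = (f, record[f])    # retained only by the exporter
        out[f] = token
    return out

exported = pseudonymize({"name": "Anna Schmidt", "email": "anna@example.eu",
                         "postcode": "70173", "diagnosis": "J45"})
# `exported` can now be processed abroad; without `lookup`, none of its
# fields can be attributed to a specific data subject.
```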

The LfDI guidance takes a clear public position on the capability of pseudonymization to enable post-"Schrems II" compliance. Given the expanded GDPR requirement that pseudonymization applies to the entirety of datasets, it seems more than reasonable to conclude that GDPR-compliant pseudonymization can help to deliver protected usable data — namely, data where the identification of data subjects is not possible without access to additional information retained by the EU-based data exporter.

Moreover, because re-attribution to data subjects, under authorization and controls, is expressly provided for as part of pseudonymization, the utility of protected usable data is fully preserved for the exporting data controller without loss of protection for data subjects.
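Continuing the hypothetical sketch above, re-attribution remains possible, but only through a controlled path available to the exporter; the authorization flag below is a stand-in for whatever organizational controls actually apply.

```python
# Sketch: authorized re-linking by the exporter, using the separately held
# `lookup` table from the previous sketch. Unauthorized callers are refused.

def reattribute(exported_record, lookup, authorized=False):
    if not authorized:
        raise PermissionError("re-linking requires exporter authorization")
    restored = dict(exported_record)
    for field, value in list(restored.items()):
        if value in lookup:               # token found in the held mapping
            _, original = lookup[value]
            restored[field] = original
    return restored

# Abroad, the importer holds only `exported`; at home, the exporter may run:
# reattribute(exported, lookup, authorized=True)
```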

GDPR-compliant, pseudonymization-enabled protected usable data protects the fundamental rights of data subjects while fully preserving data utility: secondary processing for artificial intelligence and machine learning yields results comparable in accuracy, fidelity and value to processing the source data, but without requiring the use of identifying information. It also enables the processing of data for privacy-respectful, commercial artificial intelligence, machine learning, analytics and other secondary applications that do not raise national security concerns.

A number of EU regulators and industry professionals recently spoke about the importance of pseudonymization, as newly redefined under the GDPR, in a LinkedIn article. In fact, it is a core element of the vendor-neutral Data Embassy proposal submitted to the EDPB for “Schrems II”-compliant “additional measures,” the findings of which the EDPB Secretariat confirms “have been circulated to the relevant expert subgroup.”

For those interested in learning more about how to implement technical and organizational measures necessary to support GDPR-compliant pseudonymization, the EU Agency for Cybersecurity has provided recommendations on shaping technology according to GDPR provisions (November 2018) and guidance on pseudonymization techniques and best practices (November 2019). 




1 Comment


  • Gary LaFever • Sep 20, 2020
    Several readers have correctly pointed out that pseudonymisation existed before the GDPR. However, this article and the cited resources highlight that the level of protection required to satisfy the new statutory definition of pseudonymisation in GDPR Article 4(5) is higher than what was required to satisfy pre-GDPR definitional requirements for the term. Members of the Article 4(5) drafting subcommittee stress that older, less-effective approaches like replacing different occurrences of the same data value with the same static token (e.g., "key-coding") do not satisfy the new heightened requirement for pseudonymisation (e.g., that only the original data controller can re-link the data to a data subject, which is not the case with static tokens, since outsiders can re-link them via the Mosaic Effect) and are not entitled to the benefits allocated to GDPR-compliant pseudonymisation; the sketch below contrasts the two approaches. The same can be said for "privacy by design" (PbD) and "data protection by design and by default" (DPbDD) as defined in Article 25. While anything satisfying DPbDD as now defined under the GDPR certainly constitutes pre-GDPR PbD, many approaches to PbD will fail to satisfy the new requirements of DPbDD.
    
    Members of the Article 4(5) drafting subcommittee will point out that the term "encryption" is used only four times in the GDPR (Recital 83 and Articles 6(4)(e), 32(1)(a) and 34(3)(a)), all of which relate to keeping data secure when at rest or in transit, but not when in use. Of these, Articles 6(4)(e) and 32(1)(a) also refer to "pseudonymisation" to address keeping data secure when it is in use. In addition to these references to pseudonymisation in conjunction with encryption, pseudonymisation is used an additional 13 times, for a total of 15 times in the GDPR: in Recitals 26, 28 (twice), 29 (twice), 75, 78, 85 and 156, and in Articles 4(5), 6(4)(e), 25(1), 32(1)(a), 40(2)(d) and 89(1), which provide expanded data use or privileges for properly pseudonymised data, but only if it satisfies the heightened definitional requirements of Article 4(5). No other privacy-enhancing measure is given this much coverage or this many benefits under the GDPR, but to receive them it is necessary to satisfy the new heightened Article 4(5) definitional requirements. Otherwise, the data is not considered properly "pseudonymised" for GDPR purposes and therefore is not entitled to the benefits of these provisions.
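To make the commenter’s key-coding point concrete, here is a hypothetical sketch contrasting static tokens, which any outsider can join across datasets (the Mosaic Effect), with dynamic pseudonyms that only the exporter’s separately held mapping can re-link:

```python
# Sketch: static "key-coding" vs. dynamic pseudonyms. Names and data invented.
import hashlib
import secrets

def static_token(value):
    # Same value -> same token, always: datasets stay joinable by anyone.
    return hashlib.sha256(value.encode()).hexdigest()[:12]

mapping = {}  # held separately by the exporter

def dynamic_token(value):
    # Same value -> a new token each time: unlinkable without `mapping`.
    t = secrets.token_hex(6)
    mapping[t] = value
    return t

email = "anna@example.eu"
print(static_token(email) == static_token(email))    # True: Mosaic Effect risk
print(dynamic_token(email) == dynamic_token(email))  # False: only the exporter re-links
```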