
Privacy Tracker | GDPR conundrums: Processing special categories of data


The regime for processing special categories of data has not changed, or has it?

At face value, the regime for processing special categories of data has remained unchanged under the EU General Data Protection Regulation. Yet once you understand the shortcomings of the current system, and how the Article 29 Working Party has sought to resolve them, you can see how those shortcomings have been addressed in the GDPR, and a changed landscape is revealed.

The GDPR, in fact, introduces the legitimate interest test for processing special categories of data through the back door. It will no longer be sufficient to have a legal basis for the processing of special categories of data, such as consent or performance of a specific agreement. Large-scale processing of special categories of data will also often require a balancing of interests and the implementation of adequate safeguards to mitigate its impact on the privacy of the individuals whose data are processed.

Let’s see how the GDPR works and where the conundrum lies.

The current system

Under the Data Protection Directive, the processing of special categories of personal data (data revealing health, racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, etc.) is prohibited unless there is a specific legal ground to process such data. These grounds consist mainly of the consent of the individual, the performance of specific contracts (e.g., health data may be processed insofar as this is necessary for the performance of an insurance contract), or processing for specific purposes (e.g., racial data may be processed insofar as it is unavoidable in order to identify someone).

Why the regime for special categories of data is no longer adequate 

Increasingly, it is becoming unclear whether specific categories of data are sensitive; rather, it is the use of data that may be sensitive. Companies use potentially “innocent” data (some of it freely available in the public domain) to draw sensitive distinctions between individuals. An example is the U.S. data broker Acxiom, which reportedly can identify individuals with a predisposition to diabetes on the basis of their purchasing patterns.

Furthermore, several types of data do not belong to the special categories of data under the law but are undeniably sensitive because of the potential impact on individuals if the data are lost or stolen. This includes passwords for access to IT systems and websites, credit card details, Social Security numbers, passport numbers and so on. The sensitivity of data often depends on the combination in which they appear, because combined data could be used, for example, to produce convincing phishing emails. An email address is not in itself sensitive, but in combination with a password it becomes highly sensitive, as many people use the same email/password combination to access different websites and systems. Then there are situations in which regular data suddenly becomes sensitive when linked to data that may be indirectly sensitive, such as nationality, country of origin and postcode.

Conversely, there are numerous examples where special categories of data are not sensitive when they are used for the purpose for which they have been collected, which means there is no need for a stricter regime. For example, pension records include the name and gender of the partner of an employee, thereby revealing the sexual orientation of that individual. For data controllers, the need for a special regime (with stricter security requirements) for this data is not always self-evident.

The debate about which categories of data should qualify as special is therefore becoming irrelevant. Practice shows that the same data may be sensitive in one context but not in another (particularly where data are combined). As a consequence, the existing regime — which is based on the processing of a pre-defined set of special categories of data — does not achieve the intended effect.

The central issue in all of this is that the regime for the use of special categories of data does not include a check on whether there is a legitimate interest for that use. Under the Directive, the legitimate interest ground is not available as a legal ground for processing special categories of data. Reviewing the specific legal grounds further shows that none of them requires a contextual balancing of interests, which would include an assessment of the measures taken by the data controller to mitigate any adverse effects on the privacy of the individuals concerned. That assessment is deemed to have been made by EU legislators when they determined the appropriate legal grounds. In this respect, the legitimate interest ground, contrary to what is often thought, can actually provide greater privacy protection for individuals. This has also been acknowledged by the WP29 (Opinion 06/2014, pp. 9-10).

The WP29's solution

The WP29 has attempted to overcome this problem by requiring that the protection of such data under Article 8 of the Directive (the regime for special categories of data) be no less than if the processing had been based on Article 7 (which provides the legal grounds for processing regular personal data). According to the WP29, the grounds in Article 7 apply cumulatively where the protection under Article 8 of the Directive is not as strong. The WP29 therefore applies the legitimate interest test on top of the regime for special categories of data (Opinion 06/2014, pp. 15-16):

“… it is clear that the policy objective is to provide additional protection for special categories of data. Therefore, the final outcome of the analysis should be equally clear: the application of Article 8, whether in itself or in a cumulative way with Article 7, aims at providing for a higher level of protection to special categories of data.

In practice, while in some cases Article 8 brings stricter requirements…this is not true for all provisions. Some exceptions foreseen by Article 8 do not appear equivalent or stricter than the grounds listed in Article 7. It would be inappropriate to conclude for instance that the fact that someone has made special categories of data manifestly public under Article 8(2)(e) would be—always and in and of itself—a sufficient condition to allow any type of data processing, without an assessment of the balance of interests and rights at stake as required in Article 7(f) ... "

The position of the WP29 is understandable from a protection perspective, but it is equally clear that the manner in which this is achieved is not optimal. First, the legitimate interest ground of Article 7(f) does not always apply to the processing at hand, because some of the grounds in Articles 7 and 8 of the Directive are the same. For example, one of the grounds for data processing in Article 7 is “the performance of a contract.” With regard to the special categories of data in Article 8, the contracts are more specific (but the test would not produce a different result). In such cases, the legitimate interest test would not be applied. Both provisions also include the ground of “consent.” For special categories of data, consent is often the ground used to justify processing, even where that processing would not pass the legitimate interest test.

The WP29 accommodates the ground of consent by specifying that the principles of Article 6 of the Directive are applicable (personal data must be processed fairly and lawfully, and the requirements of necessity and proportionality apply) (Opinion 06/2014, p. 13):

“... obtaining consent does not negate the controller’s obligations under Article 6 with regard to fairness, necessity and proportionality, as well as data quality. For instance, even if the processing of personal data is based on the consent of the user, this would not legitimise the collection of data which is excessive in relation to a particular purpose.” 

The WP29 has been inventive in coming up with extra tests to achieve the right result. In effect, that result comes down to the fact that there must be a legitimate interest for processing special data. It is therefore not surprising that the WP29’s opinion on the legitimate interest test reveals that the nature of the data forms a relevant element when applying that test (Opinion WP29 06/2014, p. 38):

“ii) Nature of the data

It would first be important to evaluate whether the processing involves sensitive data, either because they belong to the special categories of data under Article 8 of the Directive, or for other reasons, as in the case of biometric data, genetic information, communication data, location data, and other kinds of personal information requiring special protection”

In light of the shortcomings discussed above and the additional requirements introduced by the WP29, the effectiveness and legitimacy of the GDPR would have been better served by abolishing the separate regime for special categories of personal data. It would have been preferable to move to a framework in which the legitimate interest ground is the principal (and only) test for the various phases of the life cycle of all types of personal data: collection, use, further use and destruction. This would have greatly simplified the system, with more compliance as a likely result (read more on these proposals).

The GDPR

The specific regime for special categories of data remains in place in Articles 9 and 10 of the GDPR. The shortcomings described above are, to a large extent, offset by the new requirement to perform a Data Protection Impact Assessment (DPIA) when a type of processing is likely to result in a high risk to the rights and freedoms of individuals (note that this is broader than privacy alone); a DPIA is explicitly required in the case of large-scale processing of special categories of data (Article 35(3)(b) of the GDPR).

Additionally, data controllers must consult their supervisory authorities prior to starting data processing when the DPIA indicates that such processing would result in a high risk in the absence of mitigating measures taken by the controller (Article 36(1) of the GDPR). The DPIA requires a contextual assessment, including an assessment of the necessity and proportionality of the processing activities, an analysis of the risks to the rights and freedoms of individuals and the mitigating measures that the controller envisages to limit these implications (Article 35(7)(d) of the GDPR).

This assessment boils down to the legitimate interest test. It must therefore also be carried out if the legal ground for the intended processing of special categories of data is based on consent or execution of a contract.

In effect, these additional requirements are being introduced through the back door by means of the DPIA. Again, although this is quite understandable from the perspective of data protection, the manner in which the GDPR seeks to achieve this goal is unnecessarily complicated.

The conclusion is that the GDPR will not bring the required improvements in terms of legal complexity — quite the contrary.

The conundrum

Data controllers must consult their supervisory authorities prior to starting data processing when the DPIA indicates that such processing would result in a high risk in the absence of mitigating measures taken by the controller. Read this sentence a few times and you will see that it is entirely unclear whether the element “in the absence of” means that the supervisory authority must be consulted (i) regardless of the effectiveness of the mitigating measures, and therefore also when those measures would indeed mitigate the risks, or (ii) only if the mitigating measures do not mitigate the risks.

This ambiguity was apparently noted during the legislative process but left unresolved because opinions varied widely, so each Member State could read into the provision whatever it wished. Now that the GDPR has been adopted and companies must start implementing it, it is time for clear guidance.

The WP29, in its GDPR action plan, has indicated that it will provide guidance on the DPIA requirement as one of four priority subjects. The logical interpretation would be to apply the consultation requirement only when the processing, after mitigating measures have been taken, still poses a high risk to individuals. The GDPR is based on the accountability principle, which requires companies to be accountable ex post, rather than on a system in which compliance is verified ex ante by supervisory authorities.

Any other interpretation of the DPIA requirement in the GDPR would flood supervisory authorities with endless notifications, which they are not likely to be able to review. Imposing administrative burdens without adding value to material protection is a recipe for non-compliance, which in turn undermines the legitimacy of the material rules that these norms aim to protect. Less is sometimes more.

1 Comment


  • Stuart Ritchie • Dec 2, 2016
    Superb analysis, and thanks for your link to the SSRN paper, which I'm still digesting (I like "mechanical proceduralism").
    
    One thing I can't quite see: just because data falling into the special categories can be created or inferred by algorithms (e.g., pregnancy deduced from cosmetics purchases) rather than collected or acquired, to me that seems a distinction without a difference (at least in the context of the GDPR). So long as it's created (whether true or false), stored and otherwise processed in marketing campaigns or for other purposes, it's still data, and if it's not dealt with via DPIAs then controllers will pay the price. Of course, until that happens it may be difficult in practice for the controller to figure this out! Still, it should come out in the wash anyway via Articles 13 and (more typically) 14, where the profiling is particularized and notified to the data subject.