
The five big data stages of adjustment to the GDPR


On January 31, I took part in an IAPP-hosted webinar on managing risk and big data analytics under the EU General Data Protection Regulation alongside Gwendal Le Grand, the Director of Technology and Innovation at the CNIL, France's data protection authority, and Mike Hintze, former Microsoft chief privacy counsel and now partner at Hintze Law. 

Based on interactions with companies and regulators following the webinar, we found that companies are at varying stages of adjustment to the upcoming regulation. Understanding these five stages, we believe, can help companies that control and process personal data find an approach that meets their needs. So, here we go:

Stage one — awareness 

At this stage, a company is aware that the GDPR contains new protections for EU data subjects and threatens significant fines and penalties for non-compliant data controllers and processors. Because compliance enforcement does not commence until the spring of 2018, many companies are stuck at this stage and are postponing work on a solution. Yet preparation for compliance with the GDPR should begin now.

The company is usually also aware of the GDPR’s broad jurisdiction. It applies to all companies processing the personal data of individuals in the EU, regardless of where the company is located or operates.

The company is also aware that penalties for noncompliance can include fines of up to four percent of annual global turnover (or 20 million euros, whichever is greater), along with class-action lawsuits, direct liability for both data controllers and processors for data breaches, data breach notification obligations, and so forth.

Stage two — acceptance

A company at this stage realizes it cannot rely on prior approaches or legal bases for data analytics, artificial intelligence, or machine learning. While consent remains a lawful basis under the GDPR, the definition of consent is significantly restricted. Under the GDPR, consent must now be “freely given, specific, informed and an unambiguous indication of the data subject’s agreement to the processing of personal data relating to him or her.”

These requirements for GDPR-compliant consent are not satisfied when the scope of data processing is ambiguous or uncertain, as is the case with data analytics, artificial intelligence, and machine learning, which we will refer to collectively as big data from here on. These heightened requirements for consent under the GDPR shift the risk from individual data subjects to data controllers and processors.

Prior to the GDPR, risks associated with not fully comprehending broad grants of consent were borne by individual data subjects. Under the GDPR, broad consent no longer provides sufficient legal basis for big data. Data controllers and processors must now satisfy an alternate legal basis for big data processing.

Stage three — understanding requirements 

At this stage, a company appreciates that the GDPR does provide a means to continue big data processing, provided that the GDPR’s requirements for “legitimate interest” processing are supported by satisfying two new technical requirements: pseudonymisation and data protection by default.

  • GDPR Article 4(5) defines pseudonymisation as requiring separation of the information value of data from the means of linking the data to individuals, and it requires technical and organizational separation between the two. Traditional approaches like persistent identifiers and data masking do not satisfy this requirement, because correlations between data elements can still re-link data to individuals without access to the separately protected means of linking. This ability to re-link data to individuals is referred to as the correlative effect, or re-identification via linkage attacks. It is also called the Mosaic Effect, because individually innocuous data elements can be pieced together, like tiles in a mosaic, to identify an individual. (A minimal sketch of this separation follows this list.)
  • GDPR Article 25 imposes a new mandate for data protection by default. Data must be protected by default, and affirmative steps are required to use it, reversing the pre-GDPR default in which data was available for use and steps were required to protect it. Those steps must enforce use of only the data necessary at any given time, for any given user, and only as required to support an authorized use, after which the data is re-protected.
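To make the Article 4(5) separation concrete, here is a minimal, hypothetical Python sketch: the analytics record keeps only the information value, while the means of re-linking it to an individual (a separately held identity_vault keyed by random tokens) stays under its own controls. The record fields, function names, and vault structure are illustrative assumptions, not a prescribed or compliance-certified implementation.

```python
import secrets

# Hypothetical illustration only: separate the information value of a record
# from the means of re-linking it to an individual.

identity_vault = {}  # held separately, under its own technical and organizational controls


def pseudonymise(record: dict) -> dict:
    """Replace direct identifiers with a random token; keep the link in the separate vault."""
    token = secrets.token_hex(16)     # random token, not a reused persistent identifier
    identity_vault[token] = {         # the means of re-linking, stored apart
        "name": record["name"],
        "email": record["email"],
    }
    return {                          # the information value, usable for analytics
        "subject_token": token,
        "age_band": record["age_band"],
        "purchases": record["purchases"],
    }


def relink(token: str) -> dict:
    """Re-identification is only possible with access to the separately protected vault."""
    return identity_vault[token]


analytics_record = pseudonymise(
    {"name": "Jane Doe", "email": "jane@example.com", "age_band": "30-39", "purchases": 4}
)
print(analytics_record)  # contains no direct identifiers
```

The point of the sketch is structural: whoever works with the analytics record cannot re-link it to a person without separately controlled access to the vault.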

Stage four — evaluating technology

A company at this stage is evaluating technology to determine if it satisfies GDPR requirements for both pseudonymisation and data protection by default.

Pseudonymisation requires separating the information value of data from the ability to attribute the data back to individuals.

Data protection by default requires revealing only that data necessary at a given time, for a given purpose, for a given user, and then re-protecting the data.
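As a rough illustration of that default-deny posture, the following hypothetical Python sketch gates field-level access by purpose and discards the exposed view once the authorized use ends. The purpose names, policy table, and field names are assumptions made for illustration, not a statement of how any particular product works.

```python
from typing import Dict, Set

# Hypothetical policy: which fields each processing purpose is allowed to see.
PURPOSE_POLICY: Dict[str, Set[str]] = {
    "churn_model_training": {"age_band", "purchases"},
    "customer_support": {"subject_token", "purchases"},
}


def reveal_for_purpose(protected_record: dict, purpose: str) -> dict:
    """Expose only the fields the stated purpose needs; everything else stays protected."""
    allowed = PURPOSE_POLICY.get(purpose, set())  # unknown purposes get nothing by default
    return {k: v for k, v in protected_record.items() if k in allowed}


record = {"subject_token": "a1b2c3d4", "age_band": "30-39", "purchases": 4}

view = reveal_for_purpose(record, "churn_model_training")
print(view)  # only the fields this purpose needs: {'age_band': '30-39', 'purchases': 4}
del view     # once the authorized use ends, the exposed view is discarded and the data stays protected
```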

Stage five — ensuring continuity of operations

Companies at this stage of adjustment are seeking to verify that technology vendors satisfy GDPR requirements for pseudonymisation and data protection by default, so that, by using the technology, they can ensure continuity of operations.

What stage is your company at? 




  • Bradley Josephs • Feb 24, 2017
    Gary, great article. Do you have any experiences you can share where companies have successfully pseudonymized data? How have you seen this done successfully?