In 2016, the Westin Research Center published a series of articles analyzing the top 10 operational impacts of the European Union’s General Data Protection Regulation. Now, with the May 25, 2018, GDPR implementation deadline looming, the IAPP is releasing a companion series discussing the common practical organizational responses that our members report they are undertaking in anticipation of GDPR implementation.
This fourth installment in the 10-part series addresses privacy risk analysis, including, importantly, formalized risk management processes such as data protection impact assessments (known as DPIAs), as well as the newly legislated principles of data protection by design and by default. The first three installments of the series, on data mapping and inventory, identifying legitimate bases for data processing, and building a data governance system, can be found here.
Data processing risk assessments
The GDPR takes a risk-based approach to data protection. Under the regulation, data controllers’ and data processors’ responsibilities are calibrated to the likelihood and severity of the risks of data-processing operations. Recital 76 suggests that risks should be assessed in an “objective” manner “by which it is established whether data processing operations involve a risk or a high risk.” Where initial risk analysis flags a potential high risk to individuals’ privacy rights, it triggers a formal DPIA obligation under the GDPR.
Organizations should not only conduct risk assessments on a regular basis, but also each time they commence new personal data processing activities or implement new data processing tools. GDPR Recital 75 suggests risk “may result from personal data processing which could lead to physical, material or non-material damage, in particular: where the processing may give rise to discrimination, identity theft or fraud, financial loss, damage to the reputation, loss of confidentiality of personal data protected by professional secrecy, unauthorised reversal of pseudonymisation, or any other significant economic or social disadvantage...”
Initial risk assessments are appropriate before embarking on a more formal data protection impact assessment. This is because DPIAs are not required in all instances. Moreover, calling everything a DPIA may trigger unwarranted regulatory compliance obligations and dilute the operational significance of a meaningful DPIA.
The distinction between DPIAs, privacy impact assessments (PIAs), and risk assessments is not clear cut. Considering the complexity of data processing operations, and of the concept of risk itself, it’s understandable that people in the industry often lump all three together. In this piece, we use the term “risk assessment” to refer to the initial analysis of data processing operations to determine the level of risk and whether a DPIA should even be conducted.
How to conduct data processing risk assessments
Risk assessments can be conducted in a variety of ways: with paper forms, spreadsheets, email and in-person meetings, as well as with tools developed specifically for such activities. They typically involve a standard set of questions designed to identify high risks to the rights and freedoms of data subjects. As recommended by the U.K.’s Information Commissioner’s Office, questions that should be asked prior to conducting a DPIA include:
- Does the program involve the collection of new information about individuals?
- Does the program require individuals to provide information about themselves?
- Does the program involve making decisions or taking actions that can have a significant impact on individuals?
For each risk identified (e.g., illegitimate access, loss of personal data, repurposing, data-based discrimination, etc.), a company should take account of the threat source and estimate the risk’s likelihood and severity. While there is no standard framework for conducting such an analysis, CNIL’s Methodology for Privacy Risk Management and NIST’s Risk Management Framework are two useful sources for guidance in assessing privacy and data protection risks.
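For organizations that want to document this step in a lightweight, repeatable way, the sketch below shows one possible approach in Python. It records each identified risk together with its threat source and scores it on a simple likelihood-times-severity scale; the Risk class, the ordinal levels, and the high-risk threshold are illustrative assumptions, not part of the CNIL or NIST frameworks.

```python
from dataclasses import dataclass

# Ordinal scales are illustrative; CNIL and NIST do not prescribe these exact values.
LEVELS = {"negligible": 1, "limited": 2, "significant": 3, "maximum": 4}

@dataclass
class Risk:
    name: str            # e.g., "illegitimate access" or "loss of personal data"
    threat_source: str   # e.g., "external attacker" or "internal misuse"
    likelihood: str      # one of the LEVELS keys
    severity: str        # one of the LEVELS keys

    def score(self) -> int:
        """Combine likelihood and severity into a single ordinal score."""
        return LEVELS[self.likelihood] * LEVELS[self.severity]

def flag_high_risks(risks: list[Risk], threshold: int = 9) -> list[Risk]:
    """Return risks whose score meets an (illustrative) high-risk threshold."""
    return [r for r in risks if r.score() >= threshold]

risks = [
    Risk("illegitimate access", "external attacker", "limited", "maximum"),
    Risk("repurposing of data", "internal misuse", "significant", "significant"),
]
for risk in flag_high_risks(risks):
    print(f"High risk flagged: {risk.name} (score {risk.score()})")
```

A simple record like this also leaves an audit trail showing why a formal DPIA was, or was not, judged necessary.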
Other risk analysis frameworks, such as the Future of Privacy Forum’s analysis of potential harms from automated decision-making, apply to certain organizational processes or operational concerns.
If the program involves new technology that could be perceived as invasive (e.g., biometrics or facial recognition), collects the kinds of information that can raise privacy concerns (e.g., health or criminal records), or contacts individuals in a way that they may find privacy-intrusive, a DPIA is likely called for.
When are DPIAs required?
After May 25, 2018, DPIAs will become mandatory under Article 35 of the GDPR for organizations that engage in data processing that is “likely to result in a high risk to the rights and freedoms of natural persons.”
The GDPR provides a non-exhaustive list of conditions that trigger a mandatory requirement to conduct a DPIA. For example, a DPIA will be required if data processing activities involve a “systematic and extensive evaluation of personal aspects” based on automated processing, the processing of large-scale sensitive data, or the “systematic monitoring of a publicly accessible area on a large scale.” Conversely, there are several circumstances where an organization is not obligated to conduct a DPIA. When the processing has already been authorized or has a legal basis in EU or Member State law, a DPIA is not required. Processing that is not likely to result in a high risk also does not trigger the DPIA requirement.
The Article 29 Working Party offers several rules of thumb that can serve as useful guidance for determining when data-processing activities meet the “high risk” standard. Namely, an organization should consider a DPIA if it engages in any of the following data processing activities, as illustrated in the screening sketch after this list:
- Profiling, evaluating, or scoring data subjects (e.g., for predictive purposes).
- Automated decision-making.
- Systematic monitoring.
- Processing sensitive data or data of a highly personal nature.
- Large-scale data processing.
- Matching or combining data sets.
- Processing data concerning vulnerable data subjects.
- Innovative use or application of new technological or organizational solutions to the processing of personal data.
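As referenced above, the following is a minimal screening sketch in Python that records which of these rule-of-thumb criteria a proposed processing activity meets. The criterion labels and the screen_for_dpia function are illustrative assumptions; the note in the code about two or more criteria reflects the WP29 guidelines’ observation that processing meeting two criteria will, in most cases, require a DPIA.

```python
# The criterion labels below mirror the list above; names are illustrative.
WP29_CRITERIA = {
    "profiling_or_scoring",
    "automated_decision_making",
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_technology_or_organisational_solutions",
}

def screen_for_dpia(activity: str, criteria_met: set[str]) -> dict:
    """Record which rule-of-thumb criteria a processing activity meets.

    The WP29 guidelines note that, in most cases, processing meeting two or
    more criteria will require a DPIA; meeting even one is a prompt to
    consider conducting one.
    """
    unknown = criteria_met - WP29_CRITERIA
    if unknown:
        raise ValueError(f"Unknown criteria: {unknown}")
    return {
        "activity": activity,
        "criteria_met": sorted(criteria_met),
        "consider_dpia": len(criteria_met) >= 1,
        "dpia_likely_required": len(criteria_met) >= 2,
    }

# Example: a loyalty program that profiles customers at scale.
print(screen_for_dpia("customer loyalty analytics",
                      {"profiling_or_scoring", "large_scale_processing"}))
```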
How to conduct a DPIA
Organizations that have undertaken an initial risk assessment will typically have already satisfied the first task of a full-scale DPIA, which is to systematically evaluate the organization’s data-processing operations and their purposes. The results of the initial risk assessments as well as data-inventory and data-mapping processes can serve as a point of departure for a DPIA. If not already documented, any legitimate interests pursued by the processing should also be determined at this stage.
The GDPR does not provide a definition of the DPIA process, but Article 35(7) states that a DPIA should contain at least “a systematic description” of “the purposes” and an “assessment of necessity and proportionality” of envisaged data processing activities, as well as an “assessment of the risks to the rights and freedoms of data subjects” and the “measures envisaged to address the risks.”
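One way to keep a DPIA anchored to these minimum contents is to capture them in a structured record. The sketch below is an illustrative Python structure, not a prescribed template; the field names are assumptions mapped to the Article 35(7) elements quoted above.

```python
from dataclasses import dataclass, field

@dataclass
class DPIARecord:
    """Captures the minimum contents Article 35(7) requires of a DPIA."""
    processing_description: str         # systematic description of the envisaged processing
    purposes: list[str]                 # purposes, including any legitimate interests pursued
    necessity_and_proportionality: str  # assessment in relation to the purposes
    risks_to_data_subjects: list[str]   # risks to the rights and freedoms of data subjects
    mitigation_measures: list[str] = field(default_factory=list)  # measures envisaged to address the risks

    def is_complete(self) -> bool:
        """Simple completeness check before DPO review or sign-off."""
        return all([
            self.processing_description,
            self.purposes,
            self.necessity_and_proportionality,
            self.risks_to_data_subjects,
            self.mitigation_measures,
        ])
```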
DPIAs are intended as a tool companies can use to manage risks to data subjects. Data controllers should “seek the views of data subjects or their representatives on the intended processing,” as well as consult with their data protection officers while conducting the DPIAs. It is also crucial to take into account the different risks that are unique to various stages of the information lifecycle, from collection to retention and dissemination.
DPIAs can and should be conducted with the involvement of both internal and external actors. The Irish Data Protection Commissioner (DPC) suggests that people with deep knowledge and expertise on the relevant operational projects should lead the DPIA. For organizations with insufficient experience and knowledge, and for projects likely to involve high levels of risk for a large number of people, the Irish DPC also recommends the use of an external DPIA specialist. Moreover, it emphasizes the importance of “a wide internal consultation” to receive feedback from people who are involved in the processing operations, such as developers, engineers, and public relations teams, in order to better communicate with external stakeholders regarding the results of the DPIAs.
Some data protection professionals also rely on technological tools to assist them in the DPIA process. For example, both AvePoint and OneTrust offer PIA and DPIA automation tools for IAPP members that can be used to tailor the questions asked to a particular business, flag risks, and generate metrics and reports.
Once data risks are discovered and documented, an organization conducting a DPIA should proceed to determine and implement measures to address those risks. In general, risk treatment includes various measures, such as: deciding not to proceed with the activity that gives rise to the risk; removing the source of the risk; changing the likelihood and/or consequences of the risk; avoiding the risk; accepting the risk; or sharing the risk with another party or parties (such as through contracting or financing). However, some risk-mitigation measures create new risks themselves, so it is important to be cognizant of these as well. For example, creating a third-party relationship to address a risk (e.g., contracting with an identity provider for authentication services) can increase risk for an organization if the third party fails to perform, or inadequately performs, the functions it was contracted for, mismanages data, or does not comply with relevant laws and regulations.
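As a rough illustration of how such decisions might be documented, the sketch below enumerates the treatment options listed above and records, for each decision, the secondary risks the chosen mitigation itself introduces. The class and field names are hypothetical, not drawn from any particular framework.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Treatment(Enum):
    """Illustrative treatment options drawn from the measures described above."""
    DO_NOT_PROCEED = auto()               # decide not to proceed with the risky activity
    REMOVE_SOURCE = auto()                # remove the source of the risk
    CHANGE_LIKELIHOOD_OR_IMPACT = auto()  # change the likelihood and/or consequences
    AVOID = auto()                        # avoid the risk
    ACCEPT = auto()                       # accept the risk
    SHARE = auto()                        # share the risk, e.g., through contracting or financing

@dataclass
class TreatmentDecision:
    risk_name: str
    treatment: Treatment
    rationale: str
    secondary_risks: list[str]  # new risks introduced by the mitigation itself

decision = TreatmentDecision(
    risk_name="credential theft",
    treatment=Treatment.SHARE,
    rationale="Contract with an external identity provider for authentication.",
    secondary_risks=["provider mismanages data", "provider non-compliance with relevant law"],
)
```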
Since every organization’s data-processing operation is unique, no two DPIAs will look exactly alike. Thus, the specific measures that should be taken to mitigate risks identified in a DPIA will depend not only on the threats identified, but also on the data protection program and the nature of the organization itself. Specific security and privacy controls include access control, awareness and training, and maintaining a contingency plan in case an adverse event occurs. Encrypting data, keeping personal data to a minimum, and de-identifying data are also examples of concrete actions a data controller or processor can take to mitigate risks. Communicating and sharing information about the results of risk analysis with both decision-makers and appropriate personnel are also important steps that can serve to mitigate an organization’s exposure to risk.
As Recital 84 states, if a data controller cannot mitigate the risks of processing using appropriate measures, it must consult with the supervisory authority before proceeding with the processing operation. To assist in this, it is important to develop a working relationship and a channel of communication with your DPA. This process will be the topic of a future post in this series.
As between data controllers and processors, the ultimate responsibility to ensure DPIAs are carried out properly falls on the data controller. As the Article 29 Working Party explains, however, while the controller is ultimately accountable, “the processor should assist the controller in carrying out the DPIA and provide any necessary information.” Moreover, for the purposes of a DPIA, Article 35 states that compliance with approved codes of conduct by data processors “shall be taken into due account in assessing the impact of the processing operations performed by such … processors.”
In sum, to be effective and GDPR-compliant, companies should bake DPIAs into a process of continuous assessment. Post GDPR, controllers will have to continuously and rigorously assess the risks created by their processing activities in order to identify when a type of processing is “likely to result in a high risk to the rights and freedoms of natural persons.”
Data protection by design and by default
In addition to the requirements on data controllers to conduct DPIAs, the GDPR also legally obliges data controllers to implement “data protection by design and by default.” Article 25 lays out these obligations, requiring “appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, … and to integrate the necessary safeguards into the processing.” The measures should adhere to the principle of purpose limitation by ensuring that only the personal data necessary for each specific purpose of the processing are processed “by default.” The amount of data collected, the extent of the data processing operations, and the period of storage and accessibility are fundamental considerations in data protection by default.
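As an illustration of one such technical measure, the sketch below shows a common pseudonymisation approach: replacing a direct identifier with a keyed hash. The GDPR does not prescribe any particular technique, and this example omits the key-management practices a real deployment requires; the function name and key value are illustrative only.

```python
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The data can only be re-linked to the individual by whoever holds the key,
    which must be kept separately from the pseudonymized records (Article 4(5)).
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Example: pseudonymize an email address before it enters an analytics pipeline.
key = b"store-this-key-separately-from-the-data"  # illustrative; use a managed secret in practice
print(pseudonymize("alice@example.com", key))
```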
Although the terms “data protection by design” and “data protection by default” overlap and are often used interchangeably, there are differences between them. Ruth Boardman illustrates data protection by default using social media settings as an example. As Facebook users know, a post can be shared with all of one’s friends (and the friends of anyone tagged), with a specific group of friends, or made publicly available for all people on and off Facebook to see. Data protection by default would mean that the system is preconfigured to the most privacy-friendly setting (e.g., posts would be shared with specific friends only) unless the data subject voluntarily and actively chooses to change it. Essentially, data protection by default builds a data processing operation so that “[n]o action is required on the part of the individual to protect their privacy – it is built into the system, by default.”
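To make that idea concrete in code, the sketch below models a hypothetical sharing feature whose settings are preconfigured to the most restrictive audience; the class and method names are invented for illustration and do not describe any actual platform’s implementation.

```python
from dataclasses import dataclass, field
from enum import Enum

class Audience(Enum):
    SPECIFIC_FRIENDS = "specific_friends"
    ALL_FRIENDS = "all_friends"
    PUBLIC = "public"

@dataclass
class SharingSettings:
    """Data protection by default: the most restrictive audience is preconfigured.

    No action is required from the user to protect their privacy; widening the
    audience requires an active, voluntary choice.
    """
    post_audience: Audience = Audience.SPECIFIC_FRIENDS
    allowed_viewers: list[str] = field(default_factory=list)

    def widen_audience(self, audience: Audience) -> None:
        """Called only when the user actively opts in to a broader audience."""
        self.post_audience = audience

settings = SharingSettings()              # new account: private by default
print(settings.post_audience)             # Audience.SPECIFIC_FRIENDS
settings.widen_audience(Audience.PUBLIC)  # requires an explicit user choice
```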
A common approach in practice is that, just as data processing risks assessments and DPIAs flow from processes such as data inventory and data mapping, data protection by design builds upon the results of such risk assessments and DPIAs. Indeed, data protection by design should be “hinged on the privacy risks,” especially those that DPIAs help to identify.
At the same time, however, data protection by design is not about simply applying patches or fixes to the privacy risks that have been identified. As Jason Cronk puts it in an IAPP white paper, “The bare essence of privacy by design is that the original design should consider privacy, not be a hodgepodge of miscellaneous solutions ‘bolted on.’” Data protection by design is better conceived as a holistic or strategic approach to design, rather than a laundry list of operational responses to privacy threats, and it requires promoting privacy and data protection “from the start.” Indeed, instead of trying to implement every control imaginable, organizations should create data protection by design programs “as a fit-for-purpose implementation.”
In the end, doing privacy by design well requires knowledge of both the business and the privacy program. The current program’s maturity, the industry, the types of personal information the organization collects, and their risk levels should all be taken into account. Moreover, data protection by design should consider not only the privacy environment (including legislative, regulatory, industry, and jurisdictional privacy requirements), but also the organization’s culture. For large organizations, keeping internal audit in the loop is also essential: internal auditors will often have the information, or be able to do the fact-finding, to help privacy professionals leverage the available resources.
Conclusion
Conducting DPIAs and embedding data protection by design and data protection by default into data management programs are not standardized, one-size-fits-all processes. A privacy professional must decide what is most appropriate for their organization, given its size, needs, and the industry in which it operates.
While the requirements around DPIAs and data protection by design and by default imposed on businesses by the GDPR can appear daunting, companies can reduce the regulatory burden by allocating operational responsibility among various teams within the company, including product, engineering, technology, customer service, HR, business development, and more. These are not one-time practices, and companies must engage with data protection at the early stages of creating new processes, products, enhancements, and solutions.