Managing privacy requires managing privacy risk. This is true both under the EU General Data Protection Regulation and under the California Consumer Privacy Act.
The idea of a risk-based approach is so deeply rooted in the GDPR that there are hardly any requirements that can be satisfied without managing privacy risk. The same is not so obvious when reading the CCPA, which lacks clear references to privacy risk. However, once the idea behind the law, and some of its practical implications, are considered, there is again no path to compliance other than effectively managing privacy risk.
To deal with practicalities and everyday issues effectively, there must first be sufficient clarity on how risk and other important terms are defined, what the relationship is between privacy risk and enterprise risk and how they are managed, and, last but not least, what governance model is implemented and which metrics are used.
Simply ignoring privacy risk, or failing to assess it and take it into consideration, will almost certainly lead to non-compliance issues.
How to define privacy risk?
The U.S. National Institute of Standards and Technology Privacy Framework defines privacy risk by reference to the potential problems individuals could experience. These include dignity-type effects, such as embarrassment or stigmatization, as well as more tangible harms, such as discrimination, economic loss or physical harm.
The possibility of receiving unwanted emails or calls should also be perceived as a risk, though obviously not as serious a one as those mentioned above.
The Article 29 Working Party (WP29), in its Guidelines on Data Protection Impact Assessment, highlights that a risk involves an event and its consequences, estimated in terms of severity and likelihood. Risk management is then defined as the coordinated activities to direct and control an organization with regard to risk. The guidelines also describe processing activities likely to result in high risks to the rights and freedoms of data subjects. The WP29 guidelines on personal data breach notification elaborate further on assessing risk to individuals in the context of confidentiality, integrity and availability breaches. Obviously, assessing the risk to people's rights and freedoms resulting from a breach has a different focus from the risk considered when conducting a data protection impact assessment, as the latter needs to cover both the risks of the data processing being carried out as planned and the risks arising in case of a breach.
When discussing privacy risk, it is impossible not to mention NISTIR 8062, which focuses on privacy engineering and risk management. The document introduces the privacy engineering objectives of predictability, manageability and disassociability, which are fundamental to understanding privacy risk. These objectives should be considered and addressed in addition to the confidentiality, integrity and availability of personal data, which are traditionally at the center of discussions of privacy risks and mitigation techniques.
Predictability is about enabling reliable assumptions about personal information and its processing. Manageability is about providing the capability for granular administration of personal information, including alteration, deletion and selective disclosure. Disassociability is about enabling the processing of personal information or events without association to individuals or devices beyond the operational requirements. Even though the document focuses on information systems, these objectives are very much relevant for business processes as well. In any case, "information system" is itself understood very broadly and should not be mistaken for definitions focused only on IT software and assets.
All in all, privacy risk can be defined as the possibility of an unwanted or unexpected consequence from the perspective of the individual, causing her any level of harm or nuisance, resulting either from the loss of confidentiality, integrity or availability of her personal data (information security issues) or from an insufficient level of predictability, manageability or disassociability in relation to her personal data (failure to meet the privacy engineering objectives). Such risk needs to be measured in terms of likelihood and severity.
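The definition above can be expressed as a simple scoring sketch. The model below is purely illustrative: the six property names follow the security triad and NISTIR 8062, but the `PrivacyRisk` class, the 1–4 scales and the multiplicative score are assumptions made for this example, not part of any standard.

```python
from dataclasses import dataclass

# The six properties whose loss or insufficiency can harm the individual:
# the information security triad plus the NISTIR 8062 privacy engineering
# objectives.
PROPERTIES = (
    "confidentiality", "integrity", "availability",          # security
    "predictability", "manageability", "disassociability",   # privacy eng.
)

@dataclass
class PrivacyRisk:
    """One unwanted consequence, viewed from the individual's perspective."""
    description: str
    affected_property: str  # one of PROPERTIES
    likelihood: int         # 1 (remote) .. 4 (very likely) -- assumed scale
    severity: int           # 1 (nuisance) .. 4 (serious harm) -- assumed scale

    def score(self) -> int:
        """Combine likelihood and severity into a single 1..16 figure."""
        if self.affected_property not in PROPERTIES:
            raise ValueError(f"unknown property: {self.affected_property}")
        return self.likelihood * self.severity

risk = PrivacyRisk(
    description="Marketing profile disclosed to third parties",
    affected_property="disassociability",
    likelihood=2,
    severity=3,
)
print(risk.score())  # 6
```

A real methodology would of course use calibrated scales and a richer risk register; the point here is only that each risk is tied to a named property and measured for both likelihood and severity.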
There are also specific risk factors, such as sensitivity of data, vulnerability of data subjects, means of processing and many more.
In this context, insufficient control over the data would be a source of risk on its own.
It is also important to understand the relationship between privacy risk and organizational risk, which is explained in the Privacy Framework.
Depending on the likelihood of individuals experiencing problems arising from the data processing, organizations may in turn be impacted by non-compliance costs, revenue loss from customers abandoning products and services, or harm to their external brand reputation or internal culture. Organizations commonly manage these types of impacts at the enterprise risk management level. Connecting the problems individuals experience to organizational impacts enables informed decision-making and resource allocation.
What is important to understand, though, is that risks to individuals and risks to the organization itself should not be confused or treated interchangeably. Considering how privacy laws are drafted, the very rationale behind them, and consumer expectations, privacy risks to individuals normally need to be assessed first in order to understand the potential consequences for the organization if such risks are not mitigated. Failing to do so would in itself create organizational risk, both from a non-compliance perspective and in terms of consumer trust, brand reputation and the business's ability to operate and attain its objectives.
How to manage privacy risk?
In order to manage privacy risk, it first needs to be identified, and it is extremely easy to fail at doing so. There is no workaround for having trained staff who are educated in privacy and sensitive to the rights and expectations of individuals. Whether these are the organization's employees or contractors, they will only respect the privacy of others if they can, in turn, expect the same from the company hiring them, irrespective of whether this is specifically required under local law.
Risk identification needs to be part of process as well as system design. Privacy by design and by default is all about identifying privacy risk and making sure a risk-based approach guides you through the entire life cycle of data.
Secondly, the risk needs to be assessed. Data protection impact assessments belong to this phase; however, they do not exist in a vacuum. Before deciding to conduct such an assessment, a basic notion of likelihood and severity in the given context, a kind of initial threshold assessment, is necessary. Only if the threshold is reasonably met is the full assessment needed. Otherwise, the number of assessments and the amount of information to be analyzed could prove a bottleneck for regular business operations, which would in turn lead to avoiding the task, or to using standard answers without properly considering the real risk and the reasonable business alternatives that could avoid dangers to the individuals concerned.
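An initial threshold assessment of this kind can be sketched as a simple screening checklist. The criteria below loosely paraphrase the nine criteria in the WP29 DPIA guidelines, which suggest that processing meeting two or more of them will usually require a full assessment; the criterion identifiers, the `needs_full_dpia` function and its configurable threshold are assumptions made for illustration.

```python
# Screening criteria loosely based on the WP29 Guidelines on Data
# Protection Impact Assessment (the identifiers are paraphrased).
DPIA_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decision_with_legal_effect",
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_use_of_new_technology",
    "prevents_exercise_of_rights_or_contract",
}

def needs_full_dpia(met_criteria: set, threshold: int = 2) -> bool:
    """Return True when the initial screening suggests a full DPIA.

    WP29 suggests that processing meeting two or more criteria will,
    in most cases, require a data protection impact assessment.
    """
    unknown = met_criteria - DPIA_CRITERIA
    if unknown:
        raise ValueError(f"unknown criteria: {sorted(unknown)}")
    return len(met_criteria) >= threshold

# Example: a loyalty programme that scores customers at large scale.
print(needs_full_dpia({"evaluation_or_scoring", "large_scale_processing"}))  # True
```

Even a lightweight screening step like this keeps the full assessment reserved for projects that genuinely warrant it, rather than turning every initiative into a paperwork exercise.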
Once the risk is identified and assessed, it has to be mitigated with appropriate privacy and security controls. Above all, however, the very design of the process and system should be modified and adjusted to minimize the risk, and, in extreme situations, the project may need to be abandoned should the stakes be simply too high.
Afterwards, the risk needs to be monitored continuously, throughout the life cycle of the process and system, until the data are securely erased. This includes repeating the previous stages and reacting flexibly to changing consumer expectations, to technical and societal developments, and to subsequently discovered gaps or potential improvements that can impact the risk. The system and process design should specifically include ways to monitor privacy risk.
All these efforts and cycles need to be documented and backed by sufficient metrics. Speaking the language of risk is increasingly expected by authorities and consumers, not to mention business people, as it has long been the norm in so many areas outside the privacy domain.