Any good privacy professional will tell you that managing privacy is all about managing risk. The world, and more particularly our organizations, are full of things that threaten our privacy. We cannot deal with all of them! So we deal with the biggies: the ones most likely to materialize and the ones with the biggest impact.
The process for identifying and defining risks, and recommending steps to mitigate them, in a privacy impact assessment is pretty tried and true. We assess a program that involves personal information. We identify the safeguards it has in place. We identify any gaps and the associated risks. We rate the impact and likelihood of each risk. We make some recommendations on how to lower the risks. Then we head home and watch an episode of Big Brother.
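If it helps to make that workflow concrete, here is a minimal sketch of what a single finding in such an assessment might capture. The structure and field names are purely my own illustration, not any standard PIA template:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """One gap and its associated risk from a privacy impact assessment."""
    program: str                      # program handling personal information
    safeguards: list = field(default_factory=list)  # controls already in place
    gap: str = ""                     # missing or inadequate safeguard
    risk: str = ""                    # what could go wrong as a result
    impact: str = "medium"            # qualitative rating: low / medium / high
    likelihood: str = "medium"        # qualitative rating: low / medium / high
    recommendations: list = field(default_factory=list)

f = Finding(
    program="Customer loyalty program",
    gap="No documented limits on secondary use",
    risk="Information used in ways not authorized by consent or law",
    impact="high",
    likelihood="medium",  # honestly, a guess; more on that below
    recommendations=["Define and document permitted uses"],
)
```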
I firmly support this general approach. But I question whether we have oversold the notion that we can accurately quantify risk, and, if so, whether we need to re-tune how we make recommendations.
Let me use an example to highlight the difficulties of quantifying risk. Organizations commonly lack appropriately defined information-handling procedures that limit how information can be used. The risk is that information will be used in a way that is not authorized by consent or law. The impact could be non-compliance with legislation or violation of a person's privacy rights, as well as legal or financial consequences if a lawsuit arises.
What’s the likelihood of the risk that the information will be used in a way not authorized by consent or law? That’s difficult to say because it might depend on the size of the organization, how often the business process is executed, how many people have access to the information and so on. There are so many factors that go into whether the risk will occur that it would be disingenuous for the average privacy professional (including me) to say that they can judge the likelihood with any certainty.
But even if we can accurately determine the likelihood of the risk, we still cannot be certain that the impact will come to pass. In the example above, the legal and financial consequences occur only if the risk materializes and the person decides to sue. So the impact depends not only on the risk itself but also on other contributing factors. Additionally, impacts we typically rate as high (e.g., legal non-compliance) may not be all that significant in practice: failing to destroy information according to the retention cycle and selling information to another organization without consent are two very different examples of non-compliance.
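To show just how fragile these point estimates are, here is a minimal sketch of the classic impact-times-likelihood scoring matrix. The five-level scale, the numeric scores and the thresholds are my own assumptions rather than any standard:

```python
# Qualitative ratings mapped to numbers, as in a typical risk matrix.
LEVELS = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def risk_score(impact: str, likelihood: str) -> int:
    """Classic qualitative scoring: score = impact x likelihood."""
    return LEVELS[impact] * LEVELS[likelihood]

# The same high-impact gap, with likelihood estimates one notch apart:
for likelihood in ("low", "medium", "high"):
    print(f"{likelihood}: {risk_score('high', likelihood)}")
# low: 8, medium: 12, high: 16 -- one subjective notch in likelihood
# can move the finding across a 'moderate'/'serious' threshold.
```

The arithmetic is trivial; the inputs are not. If the likelihood rating is a guess, the score inherits all of that uncertainty while looking reassuringly precise.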
Don’t get me wrong. We need to be able to determine whether a risk is bigger or smaller than a breadbox. We do need some sense of how much to worry about a particular risk.
However, we also need to communicate clearly and consistently that in many instances we simply cannot reliably quantify the risk, and that the best we can say is that it's bigger than a breadbox. This is not a professional deficiency. It's because the average privacy professional likely doesn't have the data or time to quantify risk more accurately.
We clearly need to be able to identify privacy risks and generally determine how big or small the risk may be, but I feel we have overstated the accuracy with which we quantify risks.
None of this means that all recommendations to mitigate risks are equal. Some are much higher priority than others. But we need to recognize that the relationship between risk and recommendation may not be one-to-one, and that recommendations may be intended to reduce an organization's overall risk profile rather than any one particular risk.
I am always interested in speaking with other privacy professionals to get their opinions on risk management, but I generally consider the following when prioritizing recommendations (a rough sketch of how these criteria might be applied follows the list):
Privacy rights—We are privacy professionals. We need to protect the organization from legal, financial, reputational or other risks, but we mustn't forget that we exist to protect the data subjects. So recommendations that protect individual privacy rights are paramount.
Foundational requirements—If an organization or program is missing foundational privacy elements, I recommend they start there, because strong privacy protections depend on those foundations. One has to be careful, though: what counts as foundational differs among organizations. For example, a large company with hundreds or thousands of employees should absolutely have well-documented policies in place, while a small start-up of three or four people might get by with some key principles documented and communicated to the staff.
Hot issues—For better or worse, there are often topics of the day that receive a whole lot of press! The safeguards in question may not even be all that privacy-protective. But do you really want your organization on the front page of the local newspaper, defending its decision not to implement one?
Privacy awareness—I have worked with many organizations in assessing or developing their privacy programs. In my experience, the difference between a successful one and an unsuccessful one is whether there is a privacy culture and knowledge created through awareness. Privacy is not protected by some amorphous entity or concept. It is protected by people. So it’s important to make sure those people are aware of privacy, attuned to it and consider it when handling information. They may forget the procedures, the safeguards that they are expected to follow and everything else they learned in training, but if there is a culture of privacy, they are more likely to make a good decision.
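As promised above, here is a rough sketch of how these four criteria might be applied to order recommendations without pretending to a precise numeric risk score. The flags and the ordering rule are my own illustration, not an established method:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    text: str
    protects_rights: bool = False   # criterion 1: protects data subjects
    foundational: bool = False      # criterion 2: missing foundational element
    hot_issue: bool = False         # criterion 3: topic currently in the press
    builds_awareness: bool = False  # criterion 4: strengthens privacy culture

def priority_key(rec: Recommendation) -> tuple:
    # Tuples compare element by element, so rights outrank foundations,
    # which outrank hot issues, which outrank awareness.
    return (rec.protects_rights, rec.foundational,
            rec.hot_issue, rec.builds_awareness)

recs = [
    Recommendation("Deliver annual privacy training", builds_awareness=True),
    Recommendation("Document a retention schedule", foundational=True),
    Recommendation("Stop secondary use without consent", protects_rights=True),
]
for rec in sorted(recs, key=priority_key, reverse=True):
    print(rec.text)
# Stop secondary use without consent
# Document a retention schedule
# Deliver annual privacy training
```

The point is simply that an ordering can come from explicit criteria rather than from a risk score we cannot actually defend.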
As a profession, we need to shine some light on how we quantify and communicate privacy risks, and specifically on how we prioritize the recommendations that come out of privacy assessments. Yes, we need to identify privacy risks and get a general sense of how big or small they are, but we have overstated the accuracy with which we can quantify them.
Consequently, we need other criteria by which to prioritize recommendations. The criteria above are a good, though not perfect, method for prioritizing recommendations, but we need to start somewhere. I would be interested in hearing from other privacy professionals on what they think about quantifying privacy risks and how this shapes the recommendations they make.