
The Privacy Advisor | Assessing risk: Determining the appropriate risk flags for your privacy risk assessments


The privacy technology market has been flooded with tools over the past few years — you need only look at the size of the IAPP Tech Vendor report to see it. And while these tools can massively accelerate and support privacy programs, they aren't a silver bullet. All privacy technology requires configuration to meet the specific needs of the business, and that involves expertise — in other words, a privacy professional making informed decisions about how best to implement and operate it within your organization.

One segment of privacy-enabling software automates the privacy and/or data protection impact assessment process. Often, these tools come pre-populated with questions, and have functionality so that certain responses can trigger actions like email notifications or contribute to a calculation of risk. Making this functionality useful takes a knowledgeable privacy pro tweaking knobs and levers to adjust the functionality to how you want your program to operate.

Properly identifying processing activities within your PIA/DPIA automation tools as high, medium or low risk allows you to prioritize your privacy program activities and ensure your company is processing personal information in line with its risk profile. How you configure those risk flags depends on several factors, including the kind of business, its risk profile, ethics program, the visibility of the processing, your specific regulatory regime and more. As no privacy program has enough resources to look at everything, assigning risk flags to activities allows you to focus your limited resources where they can be most effective in managing risk. Below are some important considerations when setting PIA risk flags.
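If your PIA tool exposes its scoring logic, the kind of configuration described above might be sketched as follows. This is a minimal illustration, not any vendor's actual product: the factor names, per-answer scores, and thresholds are all assumptions a privacy pro would tune to their own program.

```python
# Hypothetical sketch of a PIA tool's risk-flag configuration.
# Factor names, scores, and thresholds are illustrative only.
from enum import Enum

class RiskFlag(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Illustrative per-answer scores a privacy pro might configure,
# reflecting factors like regulatory exposure, visibility, and sensitivity.
FACTOR_SCORES = {
    ("regulated_industry", True): 2,
    ("outward_facing", True): 2,
    ("sensitive_data", True): 3,
    ("children_data", True): 3,
    ("third_party_sharing", True): 2,
}

def risk_flag(answers: dict) -> RiskFlag:
    """Map questionnaire answers to a low/medium/high risk flag."""
    score = sum(FACTOR_SCORES.get((q, a), 0) for q, a in answers.items())
    if score >= 5:
        return RiskFlag.HIGH
    if score >= 2:
        return RiskFlag.MEDIUM
    return RiskFlag.LOW
```

The point of the sketch is the shape of the decision, not the numbers: the same questionnaire can yield different flags depending on how the scores and thresholds are configured, which is exactly the "knobs and levers" work described above.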

Regulatory risk: Does the processing violate privacy and data protection laws?

Determining regulatory risk involves understanding the privacy laws to which you are subject and the level of enforcement of those laws in the areas you operate. Identify where you do business, what types of personal data you process and how you process it; this information will help determine what laws apply to you. Another factor to consider is whether you operate in a regulated industry. These industries may be subject to additional privacy laws, and regulators may be more active as well.

Once you understand this, you can use it to inform your risk flags. While on its face this may seem like a cut-and-dried decision (no violation = OK; violation = not OK), the reality is that for many companies, full compliance with every aspect of every requirement is beyond their available resources. Additionally, interpretations of how to meet requirements may vary widely even within your organization, and building consensus on how much is enough is often a big part of the process.

Visibility: How easy is it for others (consumers, regulators) to see potential violations?

Plain and simple, where the proposed processing activity is more visible to consumers and regulators, it presents more risk. If, for example, you send out a marketing email and you haven't included the appropriate notifications and opt-out, you are likely to see a complaint from a consumer — or worse, a communication from a regulator about a consumer complaint. Because of this, you will want to ensure that you put the appropriate controls in place prior to the processing. Consider higher-risk flags for outward-facing processing activities like data subject communications and notices. 

Sensitivity and scope: Type and amount of personal information and data subjects impacted

Often, understanding risk related to data types can be informed by looking at the definition of sensitive personal information in privacy and data protection laws. But not always; for example, government identifiers are not included in many legal definitions of sensitive personal information, but their inclusion in a processing activity should likely raise a risk flag in a PIA due to their use in facilitating identity theft. Additionally, combining certain data types can increase the risk factor for businesses, as data elements that are individually benign can become more sensitive when combined. 
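The point about combinations can be made concrete with a short check: individually benign elements escalate the flag when they appear together. The element names and combinations below are hypothetical examples, not a legal definition of sensitive data.

```python
# Illustrative: combinations of individually benign data elements
# that together warrant a higher risk flag. Names are hypothetical.
RISKY_COMBINATIONS = [
    {"name", "date_of_birth", "postal_code"},  # quasi-identifiers
    {"government_id", "name"},                 # identity-theft enabler
]

def combination_risk(elements: set) -> str:
    """Return 'high' if any risky combination is fully present."""
    for combo in RISKY_COMBINATIONS:
        if combo <= elements:  # combo is a subset of the collected elements
            return "high"
    return "low"
```

A postal code alone would come back "low" here, but collected alongside name and date of birth the same element tips the assessment to "high", mirroring the reasoning in the paragraph above.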

Further, the categories of data subjects affected and the context of the processing may increase the risk of a processing activity. Processing the personal data of children ups the ante, as does processing based on categorical groupings that may make data subjects feel vulnerable — say, people who've purchased a cane or walker. User expectation and potential impact should be considered when determining risk.

Downstream effects: Key controls whose absence may lead to compliance violations downstream

Effective PIAs should help you understand the long-term implications of any processing activity. This means that when setting risk flags, you need a good overall understanding of the data you have as well as how that data is used. This understanding will help you identify some key controls to ensure that today's processing activity doesn't become next quarter's compliance headache. These key controls demand a higher risk rating, as, without them, you may not even be looking in the right place to understand what your true risks are.

For instance, where the processing involves sharing personal data with a third party, you should be conducting a privacy and security review on that third party. Absent this review, you're going in blind and can't properly evaluate the risks going forward. This is an example of a key control that demands a high-risk flag.
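One way to model a key control like this is as an override: whatever flag the questionnaire scoring produced, a missing key control forces the assessment to high. A minimal sketch, with hypothetical parameter names:

```python
def apply_key_controls(flag: str, third_party_sharing: bool,
                       vendor_review_done: bool) -> str:
    """A missing key control (here, a vendor privacy/security review)
    overrides whatever flag the questionnaire scoring produced."""
    if third_party_sharing and not vendor_review_done:
        return "high"
    return flag
```

This captures the argument above: without the review, you cannot properly evaluate downstream risk, so no amount of favorable answers elsewhere should let the activity come back as low risk.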

Company risk profile and values: Knowing how much, and what kinds of, risk are OK

Privacy and data protection obligations live within a broader business ecosystem of competing priorities and goals. A privacy program must balance supporting business objectives with protecting the privacy rights of data subjects. Where business priorities and data protections clash, companies need to make decisions based on their unique risk profile. This calculation depends on a company's core values, code of ethics, emphasis on privacy and risk appetite.

For instance, a utility company that is a federal contractor may categorize answers to access control and data retention questions in a PIA as high risk, while an office supply company may categorize them as low risk. Understanding where your company lies on this spectrum, and what types of risk are acceptable, will help you set appropriate flags.

The benefits of getting it right

The ultimate goal of PIAs and DPIAs is to identify privacy risk so you can manage it to acceptable levels before any processing activity goes live. The above factors, and likely others specific to your company, will help you do that. In practice, though, few companies can mitigate all risk to data subjects before using personal information to support the business, which makes prioritization essential. Getting your risk flags right can benefit your program in many ways.

Prioritizing your program's activities. Fundamentally, risk flags are a prioritization support tool. Most privacy programs have more work than they can handle. If 90% of your PIAs come back as high risk, and you only have resources to address 30%, risk flags will be essential in helping you know what issues to tackle first.
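The prioritization described here is straightforward to express in code: rank the PIA backlog by flag and work from the top until capacity runs out. The data shape is an assumption for illustration.

```python
# Sketch: using risk flags to order a PIA backlog when resources
# only cover a fraction of the assessments.
FLAG_ORDER = {"high": 0, "medium": 1, "low": 2}

def prioritize(assessments: list, capacity: int) -> list:
    """Return the `capacity` highest-risk assessments, highest first."""
    ranked = sorted(assessments, key=lambda a: FLAG_ORDER[a["flag"]])
    return ranked[:capacity]
```

With resources for only 30% of a backlog that is 90% high risk, even this simple sort will not fully resolve the queue; most programs would add a tiebreaker within the high-risk bucket, such as launch date or data volume.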

Getting leadership buy-in. PIAs provide visibility into data-related risks, and one of the most valuable ways to use them is to surface those risks to leadership so they understand the residual risks and can make the call on accepting them.

Getting resources. If your PIAs are consistently coming back with more risk than your privacy team is equipped to manage, it can be a powerful message to leadership that you need more resources.

Awareness. Understanding where your main privacy risks are coming from can help you identify teams with which privacy needs to engage more consistently and where you may want to cultivate a privacy champion.



1 Comment


  • R. Jason Cronk • May 6, 2022
    This post is broadly accurate, but I want to point out two thoughts that I think are critical when thinking about privacy risk. First, privacy risk should be inherently viewed as a risk to individuals. This is the approach of the NIST Privacy Framework and the GDPR. Let's play the old SAT analogy game: Data is to oil as ______ is to pollution. The answer, of course, is privacy. Like pollution, privacy is an externality that isn't fully internalized by organizations. They act in a way that benefits them while causing harms externally. Harms that may result in lawsuits, regulatory fines, etc., but disproportionately less than the benefits yielded.
    The second point I want to make is the call-out on visibility. Yes, the authors are correct: a more visible violation will be more likely to yield regulatory scrutiny. Risk is a product of likelihood and impact, so reduce the likelihood of a fine and risk is reduced. But what's the solution? Stop the violation, or make the violation less visible? Unfortunately, too many companies choose the latter, not ceasing the behavior but obfuscating it. Spamming illegal? Pay third-party promotion companies and turn a blind eye to their marketing efforts. Not allowed to sell data? Lease it, allowing third parties to use the data without having the data, essentially obfuscating the transaction behind the business model.
    The relationship between my two call-outs is that by putting people first in your risk consideration, you'll be focused on the likelihood of violating their privacy not on the likelihood that you'll get caught doing so.