
Benchmarking incidents involving regulated data as the GDPR looms


This article is part of an ongoing series on privacy program metrics and benchmarking for incident response management, brought to you by RADAR, Inc., a provider of purpose-built decision support software designed to guide users through a consistent, defensible process for incident management and risk assessment. Find earlier installments of this series here.

The information we collect as a matter of doing business is growing. Think of the advances in technology and the proliferation of devices that constantly capture and store information — the internet of things, smart home technology, wearable technology and medical devices. As the volume of information about individuals becomes more available, so too grows our challenge to accurately catalog and safeguard the data — and subsequently regulate the use of this information. They say that knowledge is power, and these days there is no greater power for companies than data.

In the privacy profession, we are of course laser-focused on personal information, the security and protection of that information, and the ways in which that information is regulated. In that light, for this month’s installment of the benchmarking series, we decided to look more closely at regulated data and, in particular, examine any patterns that may emerge in privacy incidents and incidents that may require notification under breach notification regulations.

What constitutes personal data?

The data segment we examined for this article represents regulated entities subject to breach notification laws and regulations in the United States. This is important to note because what constitutes regulated data in the U.S. may differ from other regional regulations (more on that to come). In fact, what is considered regulated data within the U.S. may differ widely from state to state. For example, all U.S. breach notification laws regulate electronic personal information, but only a handful of state laws, insurance regulations, and federal laws such as HIPAA also regulate non-electronic personal information. When data breach notification laws were first enacted in the states, personal information was typically minimally defined as an individual’s name in combination with a Social Security number, driver’s license or state identification card number, or a financial account number combined with an access code or password. Changes to state and federal legislation in subsequent years have shown a trend toward significantly expanding the scope of personal information to include a wider set of data, such as taxpayer identification number, health care data and biometric information, or the answers to security questions that would permit access to an online account.
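To make the "name in combination with" structure concrete, here is a minimal sketch of that early, minimal definition expressed as a simple check. It is a simplification for illustration only, written in Python with made-up element labels; it is not the text of any specific state statute and not legal advice.

    # A minimal sketch of the early "name in combination with" definition
    # described above; a simplification for illustration, not the text of
    # any specific state statute.
    from __future__ import annotations

    ID_NUMBERS = {"ssn", "drivers_license_number", "state_id_number"}

    def is_regulated_minimal(data_elements: set[str]) -> bool:
        """Name plus an ID number, or name plus a financial account number
        combined with an access code or password."""
        has_name = "name" in data_elements
        has_id_number = bool(data_elements & ID_NUMBERS)
        has_financial_combo = ("financial_account_number" in data_elements
                               and bool(data_elements & {"access_code", "password"}))
        return has_name and (has_id_number or has_financial_combo)

    print(is_regulated_minimal({"name", "ssn"}))                    # True
    print(is_regulated_minimal({"name", "email_address"}))          # False
    print(is_regulated_minimal({"name", "financial_account_number",
                                "password"}))                       # True

The trend described above amounts to adding more element types and combinations to a check like this one, which is part of why incident assessment keeps getting harder.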

These laws are rapidly changing. On Jan. 1 of this year, for instance, a revision to Maryland's Personal Information Protection Act took effect, specifically adding health information, insurance policy or certificate numbers, and biometric data to its definition of personal information. And at least four states proposed regulations further expanding the scope of personal information in January of 2018 alone.

Benchmarking personal data elements

That brings us to specific types of personal data and their prevalence in incidents and notifiable incidents. Unsurprisingly, the most common type of information reported in RADAR, far beyond any other type of personal information, was name, which appeared in 91 percent of all incidents. This makes sense, as most U.S. regulations consider a breach of personal information to occur when a name is disclosed in combination with other data elements. Closely aligning with a statistic from last month's benchmarking article, incidents that included an affected individual's name, once properly mitigated and risk assessed, were considered notifiable 19 percent of the time.

The incident metadata also revealed an emerging trend: information considered particularly sensitive was exposed less frequently, but when it was, it was considered notifiable at a greater rate. Three examples:

  • Social Security number appeared in only 12 percent of incidents, but 34 percent of those incidents were considered notifiable.
  • Clinical data or diagnosis appeared in only 13 percent of incidents, but 34 percent of those incidents were considered notifiable.
  • Mother’s maiden name appeared in less than 1 percent of incidents, but 57 percent of those incidents were considered notifiable.

That last figure, concerning mother’s maiden name, is an interesting one because we’ve recently seen more jurisdictions regulate the type of information that may permit access to an individual’s account, such as the answer to security questions for knowledge-based authentication (what we call KBA). 

The types of data weighed when assessing potential risk of harm to affected individuals also point to how some U.S. regulations frame harm. Under U.S. regulations, consideration of harm typically centers on the potential financial risks associated with identity theft. For example, account numbers, including bank, credit and debit card numbers, are exposed in 6 percent of all incidents assessed in RADAR, but incidents including that information are considered notifiable 57 percent of the time.
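As a concrete illustration of the arithmetic behind these percentages, the sketch below tallies, for each type of data element, how often it appears in incidents and how often those incidents were deemed notifiable. It assumes a small, hypothetical list of aggregated incident records; the field names ("data_elements", "notifiable") and the sample values are illustrative only, not RADAR's actual schema or data.

    # A minimal sketch, assuming hypothetical aggregated incident metadata.
    # The field names and sample records are illustrative only.
    from collections import defaultdict

    incidents = [
        {"data_elements": {"name", "ssn"}, "notifiable": True},
        {"data_elements": {"name", "clinical_data"}, "notifiable": False},
        {"data_elements": {"name"}, "notifiable": False},
    ]

    containing = defaultdict(int)   # incidents containing each data element
    notified = defaultdict(int)     # of those, how many were deemed notifiable

    for incident in incidents:
        for element in incident["data_elements"]:
            containing[element] += 1
            if incident["notifiable"]:
                notified[element] += 1

    total = len(incidents)
    for element, count in sorted(containing.items()):
        prevalence = 100 * count / total
        notify_rate = 100 * notified[element] / count
        print(f"{element}: appears in {prevalence:.0f}% of incidents; "
              f"{notify_rate:.0f}% of those were notifiable")

The two rates answer different questions: prevalence shows how often an element turns up at all, while the notification rate shows how risky its exposure tends to be once it does.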

Challenges for privacy professionals: Regulated data under GDPR and a broader scope of personal data 

Compliance with the GDPR breach notification rule is top of mind as the May 25 effective date looms. We often hear the challenges inherent in the regulation repeated: only 72 hours to provide notification to supervisory authorities, along with the potential fines that noncompliance risks. Beyond these very real risks, the GDPR poses a further hurdle for U.S. privacy professionals in its definition of personal information and in how that definition differs from U.S. regulations.

Under GDPR, the term for regulated data is actually "personal data" (rather than "personally identifiable information" or "protected health information," as is more common in U.S. regulations). This broad term is representative of the comparably broad definition under the GDPR, in which personal data is considered “any information relating to an identified or identifiable natural person.” 

The information regulated also reflects a key difference in how the GDPR considers risk of harm to an individual. While U.S. treatment of personal information tends to focus on the potential financial or health risks associated with identity theft, the GDPR extends its consideration to nonmaterial risk and to special categories of personal data: data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, as well as genetic data, biometric data, health data, and data concerning sex life or sexual orientation. Additionally, the GDPR regulates all forms of personal data, electronic and non-electronic.

Given these key differences, it's no wonder the GDPR is considered by many to be a sea change in international privacy regulation. Because it regulates more types of information and treats privacy as more than just freedom from identity theft or financial harm, privacy professionals responsible for compliance with U.S. regulations will need to adjust how they document incidents and prepare to capture even more information when investigating, assessing, and quantifying risk and high risk for incidents.

One challenge we may see is an increase in the number of incidents requiring assessment, given the much broader scope of personal data. Another may be uncertainty around whether an incident qualifies as a breach under a particular definition of personal data. Here's a place to start to meet these challenges head-on:

  • Know what data your organization collects and for what purpose, limiting it to only what is necessary for legitimate business purposes. Assign a sensitivity level to the collected data; you will need it when conducting incident risk assessments under the GDPR.
  • Adopt a process or system to streamline incident intake, so that when an unauthorized disclosure of personal data occurs you can quickly assess the risk of what was allegedly disclosed or made unavailable, and ensure that your investigation and assessment are fully documented for regulators in case of an investigation or audit (a minimal sketch of such an intake record follows this list).
  • Implement reporting and benchmarking to identify trends in unauthorized disclosures so you can put measures in place to mitigate the risks.
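To illustrate the first two recommendations, here is a minimal sketch of an incident intake record that tags the GDPR special categories named above and tracks the 72-hour clock for notifying a supervisory authority. The class, field names and category labels are illustrative assumptions in Python, not RADAR's product or any regulator's required format.

    # A minimal sketch; class and field names are illustrative assumptions,
    # not RADAR's product or any regulator's required format.
    from __future__ import annotations
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Special categories of personal data under the GDPR, as listed above.
    SPECIAL_CATEGORIES = {
        "racial_or_ethnic_origin", "political_opinions",
        "religious_or_philosophical_beliefs", "trade_union_membership",
        "genetic_data", "biometric_data", "health_data",
        "sex_life_or_sexual_orientation",
    }

    @dataclass
    class IncidentIntake:
        description: str
        data_elements: set[str]   # data types allegedly disclosed or made unavailable
        electronic: bool          # the GDPR also covers non-electronic personal data
        discovered_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

        def involves_special_category(self) -> bool:
            # Special-category data generally raises the risk rating in an assessment.
            return bool(self.data_elements & SPECIAL_CATEGORIES)

        def hours_until_authority_deadline(self) -> float:
            # The 72-hour clock to notify the supervisory authority runs from awareness.
            elapsed = datetime.now(timezone.utc) - self.discovered_at
            return 72.0 - elapsed.total_seconds() / 3600

    incident = IncidentIntake(
        description="Misdirected email containing member health records",
        data_elements={"name", "health_data"},
        electronic=True,
    )
    print(incident.involves_special_category())              # True
    print(round(incident.hours_until_authority_deadline()))  # roughly 72

Capturing the data elements and sensitivity at intake is what makes the later steps, consistent risk assessment and trend reporting, possible without re-investigating each incident.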

About the data used in this series: Information extracted from RADAR for purposes of statistical analysis is aggregated metadata that is not identifiable to any customer or data subject. RADAR ensures that the incident metadata we analyze is in compliance with the RADAR privacy statement, terms of use, and customer agreements.
