How familiar are you with the “encryption exception” in the EU General Data Protection Regulation and some U.S. state laws?

It can be used to exempt a company from breach reporting and notification obligations if data was encrypted and the key had not also been compromised. The GDPR identifies the exception in Article 34(3)(a), stating that the breach notification of Article 34(1) is not required if “the controller has implemented appropriate technical and organisational protection measures, … such as encryption….” 

The reasoning is that encryption preserves confidentiality, even for stolen data, by rendering it unreadable. Thus, there is no reasonable likelihood of harm to the data subjects; under the GDPR, notification to data subjects is not required, and under some U.S. state laws the incident may not even meet the legal definition of a reportable breach.

Nice … if it’s really true.

The issue to consider is not the binary question of whether the data was encrypted. Rather, a proper analysis should reflect the reality that encrypted data is vulnerable to compromise in a number of different ways. I dive deeper into this analysis in a new white paper for the IAPP; a short preview follows.

There are multiple vulnerability scenarios in which encryption does not preserve the confidentiality of stolen data. The white paper describes, in part, a hacking tool that was used to quickly defeat an application of AES-256 encryption, an algorithm widely considered highly secure and supposedly requiring billions of years to crack.

Unfortunately, many widespread data security practices do not manage decryption keys well enough to justify an assumption that stolen encrypted data cannot be fully exposed. If whoever handles the incident response does not thoroughly investigate the encryption system and key management practices that were in place, there is no way to properly ascertain whether an encryption exception is being correctly applied or misused to avoid reporting an actual breach.

How many DPOs know how to properly analyze the risk of compromise for stolen encrypted data? Does yours? It may be best to inquire before hiring or contracting one, to make sure you get one who can advise you properly.

There are three common encryption scenarios to consider:

  • Scenario #1: Encrypted data is accessible by a processor and at least one software application that operates on the data in cleartext form (unencrypted or decrypted state). Whenever the software accesses the data, it automatically decrypts the data so it can be read or edited; the decryption key must therefore be available to the software application. 
  • Scenario #2: Encrypted data is accessible by a processor and at least one software application that operates on the data in cleartext form, similar to scenario #1. However, rather than the key being accessed automatically by the software, the user must insert some hardware media (such as a USB device) that contains the decryption key, or else must type a password that the software then runs through a hash-based key-derivation function to produce the decryption key material. 
  • Scenario #3: Least common among these three options is an encrypted archive back-up, in which data is stored off-line and cannot be accessed by any software for reading, editing, or any other use. The data never appears on the system in cleartext form, and the key is also never placed on that system.
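The password-to-key step in scenario #2 can be sketched in a few lines. This is a minimal illustration using Python's standard library, assuming a PBKDF2-style derivation; the function name, salt handling, and iteration count are illustrative choices, not a description of any particular product.

```python
import hashlib
import os

def derive_key_from_password(password: str, salt: bytes,
                             iterations: int = 600_000) -> bytes:
    """Derive 256 bits of key material from a typed password (sketch).

    Scenario #2 describes software hashing a typed password into key
    material; in practice this is done with a deliberately slow
    key-derivation function such as PBKDF2, not a single plain hash.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                               salt, iterations, dklen=32)

salt = os.urandom(16)   # stored alongside the ciphertext; not secret
key = derive_key_from_password("correct horse battery staple", salt)
assert len(key) == 32   # 32 bytes = 256 bits of key material
```

The point for breach analysis: in this scenario the key exists on the system only transiently, while the password is being used, which is why scenario #2 mainly becomes exposed during a persistent intrusion.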

Of these scenarios, #1 is the most vulnerable to data compromise from a short-duration intrusion. If an intrusion is persistent, which unfortunately occurs quite often, scenario #2 also becomes vulnerable. Even scenario #3 is vulnerable when there are multiple intrusions, even if the encrypted data and decryption keys are stored separately. All scenarios are vulnerable when weak encryption (keys below 256 bits) or weak passwords (under 15 characters) are used. And some advanced hacking techniques introduce other compromise possibilities, which will be explained further in the white paper.
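The password-length point deserves a number. A password can be the weakest link even when the cipher itself uses a 256-bit key, because the attacker only needs to search the password space, not the key space. A quick back-of-the-envelope calculation (the character-set sizes are illustrative assumptions):

```python
import math

def effective_bits(charset_size: int, length: int) -> float:
    """Upper bound on password entropy in bits: length * log2(charset)."""
    return length * math.log2(charset_size)

# A 10-character password drawn from all 94 printable ASCII characters
# caps the attacker's search at roughly 65.5 bits -- far below the
# 256-bit strength of the key it protects:
print(effective_bits(94, 10))   # ≈ 65.5 bits
print(effective_bits(94, 15))   # ≈ 98.3 bits
```

This is why a stolen encrypted archive protected by a short password can fall to offline guessing long before anyone brute-forces AES-256 itself.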

Another question to ponder: does your incident response plan (IRP), for which you paid so dearly, include recommendations to (a) abandon and delete all copies of the decryption keys for data that has been stolen, and (b) compare the data sets stolen during different incidents, to ascertain whether encrypted data and the corresponding decryption key were each stolen at different times?

Why are these important? Consider a series of intrusion and data theft incidents. In the first incident, only encrypted data was stolen. The key remained secure, and the encryption exception was properly relied upon. However, the owner retained a copy of the decryption key for the stolen data for a “just in case” situation. People are often reluctant to delete certain things entirely; it’s human nature.

But then there was a second incident, handled by a different response team. The second team correctly ascertained that no personally identifiable information was stolen and so concluded that there was no breach.

Can you spot both of the problems? After the first incident, the data should have been re-encrypted with a different key, and all copies of the earlier decryption key should have been destroyed as soon as possible. That would have precluded the risk of that key being stolen. And if that decryption key was stolen during the second incident, then the combination of the two incidents did in fact result in a breach. But it went undetected and unreported, because the second team had not been directed to look for relationships among the different incidents.
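The cross-incident check described above is simple enough to automate. Here is a minimal sketch, assuming the response team keeps an inventory of what was exfiltrated in each incident and a mapping of which key decrypts which data set; all names (`Incident`, `KEY_FOR_DATASET`, the IDs) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Incident:
    name: str
    stolen_ciphertexts: set = field(default_factory=set)  # data-set IDs
    stolen_keys: set = field(default_factory=set)         # key IDs

# Hypothetical inventory: which key decrypts which encrypted data set.
KEY_FOR_DATASET = {"customer_db_2023": "key-A17"}

def combined_breaches(incidents):
    """Flag data sets whose ciphertext and key were stolen across ANY
    combination of incidents, even if no single incident stole both."""
    all_ciphertexts = set().union(*(i.stolen_ciphertexts for i in incidents))
    all_keys = set().union(*(i.stolen_keys for i in incidents))
    return {ds for ds in all_ciphertexts
            if KEY_FOR_DATASET.get(ds) in all_keys}

first = Incident("incident-1", stolen_ciphertexts={"customer_db_2023"})
second = Incident("incident-2", stolen_keys={"key-A17"})
print(combined_breaches([first, second]))  # {'customer_db_2023'}
```

Evaluated one incident at a time, neither theft looks like a breach; only the union of the two reveals one, which is exactly the gap in the second response team's analysis.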

Perhaps it’s time to take another look at your IRP. If it’s missing instructions to analyze key management practices for vulnerabilities that could lead to compromise of stolen encrypted data, to immediately change keys and purge all copies of the prior ones, or to correlate stolen data sets among different incidents, then it might be missing other important recommendations, as well.