WP250, out for consultation until Nov. 28, 2017, contains guidance from the Article 29 Working Party to flesh out GDPR breach notification requirements to supervisory authorities and data subjects. It builds on WP29’s 2014 ePrivacy Directive breach notification guidance, applicable essentially to telcos (WP213).
In practice, I believe it will lead to a flood of “just in case” notifications to regulators.
Issue 1 – Are availability breaches PDBs?
The GDPR’s definition of personal data breaches (PDBs) is critical. Only PDBs are notifiable under Arts.33 and 34. Breaches of the GDPR that are not PDBs are not so notifiable.
The wording is almost identical to that in Art.17, Data Protection Directive, and similar to that in Art.5(1)(f) GDPR.
The definition suggests two conditions, both of which must be satisfied for there to be a PDB: (1) a security breach, (2) that leads to accidental or unlawful destruction etc. of personal data. Accordingly, WP250 correctly notes, “whilst all personal data breaches [including availability breaches] are security incidents [Art.32 security breaches], not all security incidents are necessarily personal data breaches”.
However, arguably only integrity or confidentiality breaches satisfy the definition’s condition (2). Accidental or unlawful “loss of access” (availability breach) is not the same as accidental or unlawful “access” (confidentiality breach) – note the commas’ positioning. I believe other language versions would confirm that the definition targets confidentiality or integrity – not availability.
Also, Art.5(1)(f) uses similar language but is described as “integrity and confidentiality” – without mentioning availability.
All this suggests that data protection laws are more concerned with confidentiality and integrity than availability. Arguably, the main data protection implication of losing availability is that the controller can’t access personal data to satisfy individuals’ requests for access, rectification, portability, etc. This may only be temporary; if personal data remain unavailable for over 30 days (the usual GDPR Art.12 response deadline), the controller’s business may have broader problems! If availability loss is permanent, that’s a notifiable integrity breach.
Art.34(3)(a) confirms this view. Individuals needn’t be notified if the controller has applied appropriate measures to the affected data, particularly to render the data unintelligible to unauthorized persons. This squarely targets confidentiality – not availability.
Furthermore, surely the key concern is that breached personal data could be used against individuals’ interests: Individuals must be told of confidentiality breaches so they can take steps to protect themselves against identity theft/fraud – such as by resetting their passwords, or cancelling cards.
However, with availability breaches, generally individuals can’t take any damage-mitigating action. It’s for the affected organization to, e.g., restore network availability. Telling individuals about availability breaches they can’t do anything about can only increase their sense of helplessness and distress (or “non-material damage”, in Art.82 GDPR terms). The key policy purpose of notifying availability breaches to individuals would surely be to reassure them that steps are being taken to restore availability.
Yet WP250 recommends notifying availability breaches at hospitals: “In the context of a hospital, if critical medical data about patients are unavailable, even temporarily, this could present a risk to individuals’ rights and freedoms; for example, operations may be cancelled.” But isn’t the main risk here to health, rather than privacy? To paraphrase Fieldfisher Partner Renzo Marchini, hospitals would tell patients their operations are cancelled anyway, irrespective of breach notification laws; so why insist on worrying them further by making hospitals cite availability breaches too?
In summary, arguably the PDB definition’s wording doesn’t cover availability breaches. Whether availability breaches should be notified is an important policy issue. Clearly, WP29 thinks they should be. Lawmakers need to make a clearly expressed decision on this point, based on balancing distress to individuals against the need to reassure them regarding restoring availability, and considering whether controllers should bear the double risk of fines for not notifying availability (as opposed to confidentiality and integrity) breaches as well as for the Art.32 availability breaches themselves.
Issue 2 – Awareness of PDBs
Controllers must notify supervisory authorities (SAs), and processors their controllers, “without undue delay” after becoming “aware” of a PDB.
Under WP250, a controller is considered “aware” upon having “a reasonable degree of certainty that a security incident has occurred that has led to personal data being compromised," depending on the circumstances. This includes discovering loss of an unencrypted CD, or receiving clear evidence of breach from another.
After being told of a potential breach, a controller may undertake “a short period of investigation” to establish if there really was a breach, during which it’s not considered “aware.” However, WP29 expects the investigation to start ASAP and establish “with a reasonable degree of certainty” (1) whether there was an actual breach, and (2) possible consequences for individuals. More detailed investigation can follow.
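One point worth underlining is that the Art.33(1) clock runs from “awareness” (on WP250’s reading, after any short initial investigation), not from the incident itself, and it counts calendar hours, weekends included. A minimal sketch (the function name and example timestamp are hypothetical, for illustration only):

```python
from datetime import datetime, timedelta, timezone

# Art.33(1): notify the SA "where feasible, not later than 72 hours"
# after the controller becomes aware of the breach.
ART33_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at: datetime) -> datetime:
    """Return the indicative Art.33 deadline, measured from 'awareness'.

    The clock runs from awareness (per WP250, after any short initial
    investigation), not from the incident itself, and weekends count.
    """
    return aware_at + ART33_WINDOW

# Example: a controller becoming aware on a Friday evening must
# (where feasible) notify by Monday evening.
aware = datetime(2018, 6, 1, 18, 0, tzinfo=timezone.utc)
print(notification_deadline(aware))  # 2018-06-04 18:00:00+00:00
```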
What’s the practical problem with this approach? It assumes that controllers can, after short investigation, ascertain whether an intruder had accessed certain personal data.
Certainly, organizations should implement appropriate logging of accesses, etc. But, in real life, sometimes they just can’t tell, even to “a reasonable degree of certainty”, whether network intruders “touched” certain data or not. Maybe logs weren’t maintained, or intruders deleted them, or “first responders” wiped logs when trying to contain the breach.
Good security and incident response policies and processes should help to avoid these situations by ensuring logging and audit trails and proper post-incident preservation of evidence of accesses, which may also help to identify the hackers.
But if they weren’t in place when an incident occurred, what can organisations do? I foresee a flood of notifications “just in case”, to avoid the “double whammy” of fines for non-notification. Will regulators have enough staff to deal with these?
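The kind of tamper-evident audit trail the preceding paragraphs have in mind can be sketched simply: chaining each access-log entry to the previous one by hash means that deletion or alteration of earlier entries is at least detectable after an incident. This is a minimal illustrative sketch, not a production logging design; the function names and entry fields are hypothetical:

```python
import hashlib
import json

def append_entry(log: list, entry: dict) -> None:
    """Append an access-log entry chained to the previous one by hash.

    Tampering with or deleting any earlier entry breaks every later
    hash, so investigators can at least detect that the trail was
    altered (e.g. by an intruder, or by a hasty first responder).
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"entry": entry, "hash": entry_hash})

def verify_chain(log: list) -> bool:
    """Recompute every hash in order; return False if the chain was altered."""
    prev_hash = "0" * 64
    for item in log:
        payload = json.dumps(item["entry"], sort_keys=True)
        if hashlib.sha256((prev_hash + payload).encode()).hexdigest() != item["hash"]:
            return False
        prev_hash = item["hash"]
    return True

log = []
append_entry(log, {"user": "alice", "record": 42, "action": "read"})
append_entry(log, {"user": "bob", "record": 42, "action": "export"})
print(verify_chain(log))           # True
log[0]["entry"]["user"] = "eve"    # simulate tampering with the trail
print(verify_chain(log))           # False
```

Real deployments would add secure timestamping and off-host replication, but even this simple chaining addresses the evidential gap described above.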
Issue 3 – Awareness of PDBs at processors
Controllers are responsible for PDBs at their processors. That’s not new.
But WP250 says, “The controller uses the processor to achieve its purposes; therefore, in principle, the controller should be considered as ‘aware’ once the processor has become aware.” Accordingly, it recommends that processors make “immediate” notification to all affected controllers, to help them notify SAs within 72 hours, with phased information thereafter.
However, GDPR Art.33(2) says “without undue delay," not “immediate.” The 72-hour deadline is “where feasible,” following lawmakers’ discussions; that's not an absolute deadline. Interpreting “without undue delay” as “immediate” here doesn’t seem very practicable, or accurate. It also contradicts WP250’s interpretation of “without undue delay” as “as soon as possible,” in the context of notifying individuals.
Also, although it seems implicit, ideally WP250 should state explicitly that processors are also permitted an investigatory period before they are considered “aware.”
Issue 4 – “Shop the processor!”
Interestingly, WP250 notes that a controller “may find it useful to name its processor if it is at the root cause of a breach, particularly if this has led to an incident affecting the personal data records of many other controllers that use the same processor.”
Even without this statement, processors will be concerned about “shop the processor syndrome.” If it turns out that a controller blamed its processor wrongly or unfairly – say, the breach actually originated at the controller – the processor still has to spend time and money dealing with the consequences: regulatory enquiries and investigations, and reputational damage if the processor was “shopped” to individuals too.
So, while contracts should cover how processors must notify controllers, equally processors may want contractual terms protecting them against being unjustly blamed for a personal data breach, whether to regulators, individuals or publicly.
Issue 5 – Encrypted data
Under Art.34(3)(a), controllers need not notify individuals if appropriate measures were applied to render the affected data unintelligible to unauthorized persons.
WP250 interprets this as absolving controllers only if the measures were good enough – e.g. “securely” encrypted, including proper selection, implementation and use of encryption software, and not using outdated encryption or products.
This seems implicit and makes sense. But one practical problem remains: what will be considered “secure” enough, and how will organizations know? Here, codes and certifications approved under Arts.40-43 GDPR may assist, but those provisions are unclear, and WP29 guidance isn’t expected until early 2018 at the earliest. Furthermore, WP253 on GDPR fines (on which regulators aren’t seeking comments) apparently treats adherence to approved codes and certifications not as evidence of compliance, but merely as indicating that, in assessing fines, regulators may take account of any sanctions imposed by a code monitoring body for code breaches. This seems a very narrow approach to GDPR-approved codes and certifications.
Even with encrypted data, WP250 notes that risks may change afterwards - if a vulnerability in the encryption algorithm or software is later discovered, notification may then become necessary.
Notification upon discovery of subsequently discovered vulnerabilities will prove tough in practice. It’s also inconsistent with Art.34(3)(a) itself. WP29’s concerns are very understandable. There are good arguments that unauthorized access should be notified at the outset, even with encrypted personal data. However, Art.34(3)(a) embodies a different policy decision.
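WP250’s reading of Art.34(3)(a), including its view that the exemption lapses if the encryption is later found to be vulnerable, can be sketched as a simple decision rule. This is a hypothetical simplification for illustration, not legal advice; the function and parameter names are my own:

```python
def must_notify_individuals(high_risk: bool,
                            securely_encrypted: bool,
                            encryption_later_broken: bool = False) -> bool:
    """Sketch of the Art.34 individual-notification test, on WP250's reading.

    Simplified assumptions: notification is required when the breach is
    likely to pose a high risk to individuals (Art.34(1)), unless the
    affected data were rendered unintelligible by 'secure' encryption
    (Art.34(3)(a)) -- and, per WP250, that exemption falls away if the
    encryption algorithm or software is later found to be vulnerable.
    """
    if not high_risk:
        return False
    if securely_encrypted and not encryption_later_broken:
        return False  # Art.34(3)(a) exemption applies
    return True

print(must_notify_individuals(high_risk=True, securely_encrypted=True))   # False
print(must_notify_individuals(high_risk=True, securely_encrypted=True,
                              encryption_later_broken=True))              # True
```

The second call is exactly WP250’s point, and exactly the tension with the Article’s text: an exemption that looked settled at the time of the breach can be reopened later.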
Finally, allow me to digress on one issue.
WP250 acknowledges that if a controller does not or no longer has the key to encrypted personal data (e.g. ransomware encryption of the only copy), this represents a “loss” of the personal data. Surely, this should mean that encrypted personal data shouldn’t be considered “personal data” in the hands of someone with no access to the key? However, WP216, on anonymization, considers encrypted data to remain personal data. This inconsistency seems to be a case of “have regulatory cake and eat it.”
I hope that eventually it will be resolved sensibly.
Given regulators’ approach in WP250, it’s likely that well-advised organizations will be flooding regulators with notifications within 72 hours of hearing about any incident – on the basis of, “if in doubt, shout it out!”
That seems the most pragmatic approach for organisations, because they could be fined for not notifying when they should have, whereas if they notify when they shouldn’t have, they can give further information later in phases, and they can always “take back” a notification without penalty if an incident turns out not to have involved a security breach.
Is regulators’ strict approach going to be counter-productive? How will they cope with the impending flood of notifications? Will they have enough resources in terms of staff numbers and expertise? How will they be able to differentiate between serious breaches that they really need to focus on, and trivial breaches that will inevitably be notified “just in case”?
After May 2018, no doubt we will find out…