Nearly every day, the news is filled with stories of things going awry in technical systems: security, privacy, abuse, ethics and more. Yet one of the most important distinctions — the difference between a vulnerability and an incident — is often overlooked.
- A vulnerability is a flaw in a system through which an adversary could potentially gain unauthorized access to data or systems, or otherwise make those systems behave in a way that is not respectful of users.
- An incident occurs when someone has taken advantage of a vulnerability, whether purposefully or not.
In short, a vulnerability holds the potential for harm; an incident means that harm has occurred.
In some cases, the system operators won’t know whether there was an incident when they find a vulnerability. There might not have been enough logging, or the logging might not have been secure enough to keep an attacker from blocking or removing it.
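To make that point about log integrity concrete, here is a minimal sketch (my illustration, not something described in this piece) of tamper-evident, hash-chained logging, in which removing or altering an entry breaks every hash that follows it. In practice, the chain head would also need to be anchored somewhere the attacker cannot reach.

```python
# Illustrative sketch only: hash-chain log entries so that deletions and
# edits are detectable when the chain is later verified.
import hashlib
import json


def append_entry(log, event):
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    entry = {
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    }
    log.append(entry)
    return log


def verify_chain(log):
    """Recompute each hash in order; any gap or edit shows up as a mismatch."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True


log = []
append_entry(log, {"user": "u123", "action": "export_data"})
append_entry(log, {"user": "u456", "action": "delete_account"})
assert verify_chain(log)       # the intact chain verifies
del log[0]                     # an attacker quietly removes an entry...
assert not verify_chain(log)   # ...and verification now fails
```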
Ironically, an increasingly common issue is the deletion of logs for privacy purposes. If the older logs have been deleted, we often lose the ability to distinguish an incident from a vulnerability: even when there is no malicious activity in the newer logs, there could have been in the older ones. That scenario isn’t as common as continuing exploitation, but without full logs, the investigator can’t be entirely sure. While careful design of the logs can help, data minimization and thorough incident investigation directly trade off against each other. Some organizations concerned with data minimization have been limiting the retention of their logs for years, but the EU General Data Protection Regulation has forced many more to grapple with this tension for the first time.
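One purely illustrative version of that “careful design of the logs”: replace direct identifiers with a keyed-hash pseudonym so entries stay linkable for an investigation, keep the security-relevant fields, and drop everything else. The field names and key handling below are my assumptions, not a prescription from this piece.

```python
# Illustrative sketch only: minimize a log entry while keeping enough
# structure for incident investigation.
import hmac
import hashlib

PSEUDONYM_KEY = b"store-and-rotate-this-key-separately"   # hypothetical key handling

SECURITY_FIELDS = {"timestamp", "action", "resource", "result"}   # kept as-is
IDENTIFYING_FIELDS = {"user_id", "ip_address"}                    # pseudonymized


def minimize(event: dict) -> dict:
    """Keep what an investigator needs; replace direct identifiers with
    stable pseudonyms; drop everything else."""
    minimized = {}
    for field, value in event.items():
        if field in SECURITY_FIELDS:
            minimized[field] = value
        elif field in IDENTIFYING_FIELDS:
            digest = hmac.new(PSEUDONYM_KEY, str(value).encode(),
                              hashlib.sha256).hexdigest()
            minimized[field] = digest[:16]   # linkable across entries, not directly identifying
        # anything else (free-text notes, full request bodies, ...) is dropped
    return minimized


print(minimize({
    "timestamp": "2019-06-01T12:00:00Z",
    "action": "login_failed",
    "user_id": "alice@example.com",
    "ip_address": "203.0.113.7",
    "request_body": "password=...",
}))
```

Note that keyed-hash pseudonyms remain personal data under the GDPR as long as the key exists, so this eases the tension rather than eliminating it.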
You might ask why, given all this uncertainty, the distinction is so important. It is because I want to avoid incidents. Ideally, we would avoid vulnerabilities entirely, but sadly, we don’t live in a world where we can prevent all vulnerabilities ... yet. So we need to encourage three things:
- Learning how to build more robust and respectful systems.
- Writing systems with fewer vulnerabilities.
- Finding and fixing vulnerabilities before they become incidents.
If we treat vulnerabilities like incidents, with the same reporting requirements, organizations inclined to cut corners will be encouraged not to look for vulnerabilities.
Finding vulnerabilities would then be far more costly, in both reputation and money, than the current approach of paying for a red team or running a bug bounty program and fixing the resulting issues. High-profile targets are not the ones I worry about here; they know they will always have a small army of people looking for issues, so they would be stupid not to look hard first.
Organizations that want to differentiate themselves through excellent security and privacy protection must also understand that they cannot achieve it without the assurance provided by vigorous red-teaming. There are always some folks, sadly, who will follow the “no-news-is-good-news” mantra and simply stop looking carefully for vulnerabilities.
Finding those vulnerabilities is valuable beyond fixing the particular issues found. The patterns of vulnerabilities, and the factors that caused them, are important to share openly so we can learn how to build more robust systems and processes that avoid them in the future. Ideally, the privacy community, especially the privacy engineering community, would speak more openly about both incidents and vulnerabilities. The information security community has different norms about this, ones we would do well to emulate, if we can carve out the legal cover to let us do so.