At this year’s IAPP Global Privacy Summit, I repeatedly encountered references to and quasi-explanations of the “risk-based approach” to privacy. The risk-based approach is, apparently, the new black now that accountability is no longer quite so chic. With its focus on the privacy risks incurred by individuals, the risk-based approach is, I was informed, a bold new direction for the privacy profession.

Taken at face value, it’s rather difficult to imagine a more damning indictment of the privacy profession. It’s 2014 and we’ve only just started worrying about risks to individuals?

Seriously?

Well, presumably not. Most privacy-compliance requirements at some level reflect Fair Information Practice Principles (FIPPs). So, to the extent that FIPPs relate to individual privacy risk, the compliance-risk model that holds sway (too often exclusively) for enterprises does get at individual risk. But that extent is indeed limited and becoming more so in the face of increasingly complex socio-technical systems. Still, if the idea is that we're going to do for individual privacy risk what we've done for enterprise privacy risk, it's a pretty discouraging thought given the steady diet of enterprise privacy debacles to which we've been treated over the years. Hopefully, we'll demonstrate a little more sophistication with respect to individual privacy risk.

In addition to worrying a little more systematically and cogently about the privacy risks to which individuals are being exposed, though, there's a larger narrative here. Actually, there are two of them, the second playing off the first.

The first narrative concerns the balance between enterprise and individual (or, more prosaically, engineer and user) responsibility for addressing privacy risk. At least when it comes to informational privacy, the focus on individual control, as manifested by FIPPs among other things, has saddled individuals with a heavy load of responsibility for looking after their privacy. Individuals are expected to devote themselves to managing their privacy risk, including performing analyses based on insufficient or unusable information and making decisions appropriate for them under various other constraints. Strangely enough, this hasn't proven an overwhelming success.

Security, in contrast, has traditionally come down on the other side of the scale. To the extent that products and systems operate securely, it's been largely seen as the result of work by the enterprises responsible for those products and systems. Security has been, by and large, the result of myriad activities behind the scenes ensuring that needed security controls reside within the guts of products and services. However, in a world of ubiquitous, interconnected and complex computing in which a single mouse click can result in disaster, this balance has become increasingly problematic. We've been forced to recognize that individuals play a critical role in security, and while efforts to provide enterprise supports (e.g., safe browsing lists) are ongoing, at the end of the day all the built-in security in the world can't compensate for a bad user decision. Thus, responsibility for security has expanded beyond the enterprise and toward the individual.

The risk-based approach in privacy implies an expansion in the opposite direction.

However, the devil is always in the details, and that’s where things could get dicey. It’s easy for most of us to agree that enterprises should take greater responsibility for addressing privacy risks. What’s harder is figuring out just how that should translate into practice. And this is where the second narrative comes into play.

That narrative, which predates Big Data as such and extends to the Internet of Things (IoT), threatens to implement a shift in responsibility rather than an expansion, effectively replacing one risk model (i.e., FIPPs) with a different one, courtesy of the risk-based approach. As the narrative goes: since notice and consent don't work very well as is, will work even less well in the brave new world of Big Data and the IoT, and most of the other FIPPs are going to be problematic as well now that we think about it, we'll take over most of the responsibility for your privacy. No need to thank us.

And there’s the rub.

What I keep hearing in much of the rhetoric, both explicitly and implicitly, amounts to exchanging one privacy risk model for another. But risk management failures do not in and of themselves invalidate the underlying risk model. Especially when the privacy profession itself has facilitated that dysfunction—privacy notices, as distinct from transparency, being Exhibit A.

The problem here is not with the risk-based approach per se. Indeed, I’ve been doing work in this vein for several years now. The issue is that risk-management problems are being used as a stalking horse for taking a cleaver to an increasingly inconvenient—for enterprises—privacy risk model. Better appreciating the responsibility of users hasn’t caused us to swap security risk models; rather, we’ve augmented them. Similarly, in the case of privacy, we need to augment our existing risk models to reflect the increased responsibility of enterprises rather than using poor execution as an excuse to undermine a model that might cramp the style of Big Data masters of the universe.

This is a fundamental distinction. We need additional privacy risk models and far better privacy risk management. But let’s stop blaming the deficiencies of the latter on the former.
