In a policy area as thorny as privacy and data protection, beguilingly simple terms are sometimes coined that appear to encapsulate important principles. Privacy by design, notice and consent and the right to be forgotten are cases in point. At their best, such phrases cut through complexity whilst maintaining enough intuitive sense to be useful; at their worst, they create additional confusion and exacerbate public misunderstanding.
The latest example is “surprise minimisation.”
I first heard it used at the 35th International Conference of Data Protection and Privacy Commissioners in September 2013 (the term was apparently coined by the California attorney general). The basic idea is that data controllers should avoid using data in ways which would “surprise” their data subjects. The phrase made it into the “Warsaw declaration on the ‘appification’ of society,” which was signed at the conference, and has resurfaced in various places since.
It sounds helpful, intuitive and familiar—a little like the well-known “data minimisation” principle. But if we dig beneath the surface, I suspect this term ends up establishing a rather misguided new principle, or else it dissolves into a combination of already-established, somewhat contradictory and equally controversial principles.
On its own, surprise minimisation suggests that practices users find surprising are necessarily bad, whilst those they fully expect are perfectly good. Neither is true.
Many of my favourite applications do helpful, insightful or fun things with my data that I as a user couldn't anticipate, and I don't necessarily want these surprises to be minimised. On the other hand, many applications I use begrudgingly, fully expecting them to use my data in ways I'd prefer they didn't. For instance, I'm not at all surprised by a social network using my data for behavioural targeting or sharing it with a range of unspecified third parties; that doesn't necessarily make these practices okay. And what does or doesn't surprise one individual may not be the same for another.
Grounding a general principle like this by reference to the vaguely defined feelings of an idiosyncratic group seems like a recipe for disaster, especially given that “surprise” may only weakly correlate with what users find acceptable or not.
Can we find a more sensible formulation of surprise minimisation?
The Warsaw declaration goes on to explain the concept as “no hidden features, nor unverifiable background data collection.” The California attorney general unpacks it as “enhanced measures to alert users and give them control over data practices that are not related to an app’s basic functionality or that involve sensitive information.” Neither of these definitions actually mentions surprise, and both read more like a repackaging of existing principles such as transparency and notice and choice. If anything, surprise minimisation sounds closer to the existing notion, established in U.S. law, of the “reasonable expectation of privacy” (Katz v. United States, 389 U.S. 347, 360–61 (1967) (Harlan, J., concurring)).
I'm left struggling to see the point of introducing yet another term into an already jargon-filled debate. Taken at face value, recommending surprise minimisation seems no better than simply saying “don't use data in ways people might not like”—if anything, it's worse, because it unhelpfully equates surprise with objection, and vice versa. The available elaborations of the concept don't add much either, as they seem to boil down to an ill-defined mixture of existing principles.
If the new principle actually proves helpful to anyone in practice, I'll be surprised.