
Privacy Perspectives | Why privacy-risk analysis must not be harm-focused


Much of my work around privacy by design involves disabusing lawyers of preconceptions of privacy “harms.”

For the better part of the last century, the jurisprudential focus in the U.S. has been on cognizable harms, or damages, resulting from statutory or common law privacy invasions. Courts almost invariably require a showing of damages, mostly financial, before a victim may be due some remedy. Because lawyers are trained in the law, and the bulk of privacy practitioners are lawyers, cognizable harm remains the common, but mistaken, frame for privacy risk.

But from a privacy-by-design perspective, harm prevention is not and should not be the primary goal. Rather, the goal should be a reduction in the incidence of privacy violations. Isn't that synonymous with harm reduction, you may ask?

Certainly, the recent proposal from Intel frames privacy risk (see 3(h)) in terms of adverse consequences, with a fairly expansive definition of them. But consequences are not risks, and privacy risk needn't involve harm. You might be familiar with the formulation of risk as a combination of likelihood and impact. In this formula, most people equate impact with harm, but as I illustrate below, these are not synonyms.
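Written out, that conventional framing is usually expressed as a simple product (my notation, not a formula quoted from Intel's proposal or any particular standard):

$$\text{risk} = \text{likelihood} \times \text{impact}$$

The trouble begins when "impact" in this formula is silently read as "harm to the individual."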

Consider this example: I have decided, without your consent, to put a video camera in your home. Most people would agree that this is a violation of your privacy, an invasion into your sanctum. Under the Solove Taxonomy this would be a surveillance violation. If you were to discover or be made aware of this camera, you might alter your behavior to avoid being seen. You might be embarrassed or upset at what the camera previously revealed. A hypersensitive individual might recoil in horror and might even commit suicide.

But what if you never found out? What if I didn't use the video in any way that might objectively harm you? I'm not selling it or disclosing it to another entity. I'm not using it to blackmail you or to cause you to lose an opportunity, such as a loan or job. The surveillance is a violation regardless of any harm or damage to you. Even if you never find out, and even if I never use the video to your disadvantage, my installation of a camera in your home is not something society is willing to accept as appropriate behavior, because it violates our social norms in your personal and private space.

Now some will argue that there is a potential that you could uncover the camera, causing embarrassment or a change in behavior. There is a potential the video might get leaked, costing you a job opportunity or friendships. Yes, these possibilities exist. But now we're faced with a funny conundrum: What could I do to keep these harms from taking place?

I could make the camera more obscure. I could encrypt the video feed. I could implement all sorts of controls that reduce the potential for the “harm” materializing. In other words, I could make it nearly impossible for you or others to find the camera or the video.

But none of this reduces the underlying violation — that of me invading your private space, surveilling your activities. In fact, these controls are antithetical to a core privacy principle: transparency. I’m hiding and obscuring my activity to prevent you from being harmed, but alas, I’m not doing anything to stop my actual violation of your privacy.

You may recall the incident with Target and their response to the issue of inferring a teenager's pregnancy. The situation allegedly arose when Target used big data analytics to deduce that a customer was likely pregnant based on her purchases. They then proceeded to send the customer mailers offering various newborn-related products, which inadvertently alerted a household member to the teenage customer's pregnancy.

Here we have two distinct violations: the secondary use of purchase information (to infer her pregnancy) and the disclosure to the household member. Target’s response? To obfuscate the inference by designing their flyers with a mix of products, only slightly skewed towards those appealing to pregnant women.

While this reduced the probability that other household members would learn of a Target customer's pregnancy via its targeted advertising, it did nothing to alleviate the violation of the secondary use of purchase information. Their mitigation actively defies the principle of transparency. Rather than sending a notice to the customer saying "our analysis of your purchases suggests you are pregnant," or better yet alerting them prior to or at the time of purchase, the company actively obscured its inference in the flyers it sent (note that others have suggested a less nefarious reason).

Other privacy-risk approaches also create incentives for organizations to hide privacy violations. Most "privacy" risk models are based on organizational risks (such as fines, lawsuits or reputational harm from being hauled before a legislative body). One recent example of obfuscation involves Facebook hiding its Messenger app's collection of call data. As Damian Collins, a member of the British Parliament, said, "To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features."

For privacy-risk analysis not to create perverse anti-transparency incentives, it must incorporate a broader view of impact than harm alone.

In the risk framework I use in my book, Strategic Privacy by Design, I do just this. The impact side of the equation incorporates both the size of the affected population and secondary adverse consequences, the latter being the financial and non-financial harms we're most familiar with.
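Sketched loosely in the same notation (my gloss on the description above; the book's own formulation may differ):

$$\text{impact} \approx f\big(\text{size of the affected population},\ \text{secondary adverse consequences}\big)$$

Harm enters only through the second term; the scope of the violation itself carries weight on its own.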

Thus, while obscurity might reduce the chance that an individual suffers the embarrassment of finding a camera in their house, obscurity has a counteracting effect on the population side of the analysis (i.e., people would view a more covert camera in people's homes as more nefarious and more offensive to social norms). If someone knowingly invites my camera into their house, it's no longer a privacy violation. Transparency, control and consent all have the effect of transforming the activity (a camera in a home) from a violative act into one that is not.

There is a further factor, beyond awareness and consent, that I use to determine whether an activity (a camera in a home) constitutes a violation (surveillance): benefit. For instance, a baby or a person with a severe disability may not be aware of the monitoring or be able to consent to it, but if they are its primary beneficiary and that benefit outweighs the lack of awareness and consent, the activity does not constitute a privacy violation.
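Compressed into a single rule of thumb (my shorthand for the factors in the last two paragraphs, not a formula taken from the book):

$$\text{violation} \iff \neg\Big[\big(\text{awareness} \land \text{consent}\big) \lor \big(\text{benefit to the individual outweighs the lack of both}\big)\Big]$$

The baby-monitor case falls under the second disjunct: no awareness or consent, but a benefit to the monitored person large enough to keep the activity on the right side of the line.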

This is what makes privacy harder than security, because often there are broader social policy questions at play.

With various U.S. legislative proposals, NIST's new privacy risk framework, and the recently formed ISO project committee to create a consumer products standard all in the works, stakeholders should recognize that privacy-risk analysis must not be harm-focused but rather focused on the violation of people's privacy.

Privacy-risk analysis should induce violation reduction, not mask violations in the name of harm reduction.

