
Privacy Perspectives | Was This a Week of "Tangible" Privacy Harm?


Two events this week got me thinking about privacy harms. Now, I know the mere mention of “privacy harms” brings with it a lot of baggage and a ton of research, legal uncertainty, opinion and, well…ambiguity. I couldn’t possibly link to all the countless scholars, lawyers and activists who have tackled the question of what does, or does not, count as a harm, but I couldn’t help thinking that we may have seen some tangible harms this week that we can all agree on.

In the privacy community, perhaps the most obvious incident involved OfficeMax. It was heavily reported that a couple who lost their daughter in a car crash last year received a piece of mail from the office supply chain with “Daughter Killed In A Car Crash Or Current Business” on the second line of the address.

I cannot imagine what they must have felt.

Prof. Ryan Calo put it this way:

Let that sink in for a second. A father lost his daughter and OfficeMax or its provider used this fact to select how to market to him. This happens all of the time. But in this case, OfficeMax wrote what it was doing on the envelope, reducing the Seay family to some category peddled by data brokers. On paper, for the world to see.

Can we safely say this was a tangible harm?

Most, I think it’s safe to say, would agree this was traumatic for the family, and it certainly makes the spotlight already shining on the data broker industry glow even hotter. According to OfficeMax, it “unintentionally” purchased the information from a third-party data broker. As the IAPP’s Trevor Hughes, CIPP, pointed out, this could have been a case where the company was trying to be sympathetic. But without any transparency from much of the data broker industry, as Calo said, being placed within a highly sensitive marketing column like that is beyond Kafkaesque.

If you’ve read one of my past posts on Big Data drinking games, you’ll know I am deliberately trying to avoid using the word creepy here. A couple years back, scholar and legal expert Adam Thierer wrote an excellent piece “On ‘Creepiness’ as the Standard of Review in Privacy Debates.” In it, he writes:

I think we’d be better served by moving privacy deliberations—at least the legal ones—away from “creepiness” and toward a serious discussion about what constitutes actual harm in these contexts. While there will be plenty of subjective squabbles over the definition of “harm” as it relates to online privacy/reputation, I believe we can be more concrete about it than continuing these silly debates about “creepiness,” which could not possibly be any more open-ended and subjective. 

So is the above OfficeMax incident an example of actual harm? I’d like to hear your thoughts on this.

The second event this week that caught my attention took place in the chaotic, violent protests in Kiev, Ukraine. If you were in a specific geolocation with a smartphone, you received this message: “Dear subscriber, you are registered as a participant in a mass riot.” The nation recently passed harsh laws against public gatherings—including jail sentences of up to 15 years. Putting the tenets of democracy aside, the geolocation of an individual in Ukraine pretty obviously puts them in serious peril in this case, regardless of whether they’re breaking a law (an anti-democratic law, at that) or not.

Obviously I support the right of the protestors to assemble, but what if you were just a random person going about your day on the side of the street and received that text message? Does the dark pit of fear that developed in your stomach count as harm? What if you were falsely accused based simply on your phone’s location?

Though this is an extreme case, it gives ammunition to those who argue that data accuracy and state surveillance put everyone in potential peril. By no means am I arguing that the U.S. will eventually end up in a similar situation. But the NSA disclosures have generated a lot of mistrust of the government and U.S. businesses around the world. It’s also important, in my opinion, to look at these two situations—one in the private sector with the data broker industry, the other in the public sector with state surveillance—and really think about what kind of society we want to live in and what kind of landscape we are facing.

Do the benefits really outweigh the potential harms?

In a recent interview with PBS Newshour about the balance of privacy and Big Data, Future of Privacy Forum Director Jules Polonetsky, CIPP/US, said, “If companies want to be intimate with us, they need to be transparent with us.” I think something similar could be said about the government. If the government wants to protect us, they need to be accountable to us.

This lack of transparency and accountability only breeds mistrust—whether it’s manifest in headlines or in the streets—and could ultimately lead to real harm, not just to individual consumers and protestors, but to entire businesses and governments alike.

9 Comments


  • Name • Jan 27, 2014
    A thoughtful post, Jed.  I agree that we need a more nuanced and realistic definition of harm from data breaches.  Professor Dan Solove posted a good LinkedIn article today making the point that there is a real cost related to replacing one's credit card (automatic charges, etc.).  Personally, I have sworn off store-issued cards, so I will also be missing out on those discounts and loyalty benefits in the future.  I also used to enjoy that Target donated a percentage of my purchases to the school of my choice under its "Take Charge of Education" program.  I even considered its value in calculating my annual donation to our school because of this secondary contribution.  Now I don't. 
    http://www.linkedin.com/today/post/article/20140127051213-2259773-4-points-about-the-target-breach-and-data-security?trk=mp-reader-card
  • Annie C. Bai • Jan 27, 2014
    I forgot to include my name in the comment above.
  • Joseph Jerome • Jan 27, 2014
    I don't want to unduly bag on industry, but we're stuck with this ill-defined "creepy" metric largely because industry (and government in its spheres, as well) doesn't want to concede that anything it's doing is wrong.  Serious thought leaders have started arguing about overhauling the FIPPs and focusing “on use,” but then have little to say about what use limitations they might actually want enforced.  I have serious doubts that industry will agree that collecting “dead children” data is a bad idea, or provide any transparency into why it wants to use that data.  Industry is putting a dollar figure on individual privacy, which makes sense to a degree, but individuals don’t think about their privacy like that, as much as many in our field want to put a dollar value on the principle.  While many are benefitting from all of this, the rare individual will feel exploited, manipulated and, yes, creeped out.  Education might alleviate some of this, but it’s not going to get rid of the need for some fundamental value judgments that industry may not wish to consider.
  • Michael Hobbs • Jan 27, 2014
    Trauma causes injury, whether it is blunt trauma or psychological trauma.  In both cases, there is material harm to the person, harm that may require medical attention, and which may leave permanent scars.  PTSD is a far worse injury than the skin and muscle damage that might be caused by a small piece of shrapnel in the buttocks.  I feel that you have failed to correctly assess the damage that was caused by the OfficeMax incident.  And a lack of intent to injure does not excuse things at all, in my opinion.
  • Adam Thierer • Jan 28, 2014
    Thanks for your post, Jed. You raise some excellent points here worthy of further discussion. 
    
    Is the OfficeMax case a clear-cut example of privacy harm? I certainly think we have a clear case of infliction of emotional distress here, but a much less clear-cut case of *intentional* infliction of emotional distress. And, so, it is easy to conclude that what has happened in this case is incredibly troubling and ethically repugnant, but it is much less easy to conclude that it is legally actionable. 
    
    It’s tempting to say that companies should not collect this sort of data at all so that something like this cannot happen in the future. But, as Trevor and others have noted, companies may collect information about deceased individuals to make sure that they do not market products to them or their families and to avoid outcomes exactly like this one. Moreover, data about deaths is a matter of public record, so it’s hard to know how we’d even craft new rules in this regard. 
    
    In any event, even if we can’t definitively say whether there is an actionable legal privacy harm in this particular case, everyone would agree that more can be done by data collectors to make sure things like this do not happen going forward. The need for serious discussion about data ethics and digital citizenship is more pressing than ever. Getting more IAPP-trained privacy professionals at work throughout the corporate food chain is essential. It’s worth exploring proposals for data ethicists and internal privacy review boards, too. 
    
    The discussion about privacy by design will, no doubt, raise tensions within many companies since the push to expand innovative data services will continue but will also raise all new privacy and security tensions in the process. The need for balance will continue but, in time, I believe we’ll begin to move toward rough consensus about data best practices that can guard against particularly sensitive practices.
     
    Of course, law will continue to be necessary at the margin when egregious data security and privacy violations occur. I prefer those sanctions come after the fact and be limited to more clear-cut cases of harm that we have already established as actionable through privacy torts as well as targeted FTC enforcement. Alas, that’s all still very murky, too. 
    
    There are no easy answers when it comes to defining privacy harms because privacy, by its very nature, defies such easy conceptualization and categorization. As that policy conversation continues, however, I hope we can all at least agree that it is worth expanding the more general conversation about corporate best practices and digital design ethics.  We can make progress here by pushing harder on those fronts. 
    
     -- Adam Thierer 
    
  • Jedidiah Bracy • Jan 30, 2014
    Thanks, Annie. Prof. Solove's thoughts on harm are interesting here. The Hobson's choice for courts poses a real challenge: Recognize a harm and a torrent of class-actions threatens to pose new harm to businesses; not recognizing a harm "is bad, too, because there really is harm, and it needs to be appropriately deterred and redressed." I look forward to reading his future posts on this.
  • Jedidiah Bracy • Jan 30, 2014
    Thanks for your thoughts on this, Joseph. Where will these fundamental value judgements come from? Government? Interested to hear your thoughts. 
  • Jedidiah Bracy • Jan 30, 2014
    Thanks for your comment, Michael. So what kind of redress should the Seay family have after this incident? What metrics should we use to correctly assess such harm? Who gets to decide those metrics? I agree, the event must have been traumatizing, and I wouldn't wish it upon anyone. There is a pretty good chance that OfficeMax and the third party involved had collected that data in order to be sensitive to the family--like not giving them an ad for The Fast and the Furious. The fact that such data is collected in the first place is disturbing, but data often = money. Julia Angwin has just posted a comprehensive blog post on data brokers and how to opt out of their collection (http://juliaangwin.com/privacy-tools/). But there are so many, and the process usually involves identifying yourself further for authentication purposes. There are definitely a lot of bad actors out there, but the chances of getting any baseline legislation out of Washington are currently slim, in my opinion. Some data brokers, such as Acxiom, are offering some transparency, but as the FTC's Julie Brill has said, more needs to be done. Industry can do more, and I hope it does. Public outcry may help budge some of the industry into being more transparent, but outside of some sort of government mandate, I'm not optimistic. The FTC and Sen. Rockefeller are certainly trying, though. Perhaps there's room for some optimism.
  • Richard Beaumont • Jan 31, 2014
    I agree harm is difficult to define - but the two cases seem pretty clear cut to me.
    In the case of 'Daughter killed..' there was clearly a systems failure that resulted in the data category being printed out - which was perhaps as much the cause of harm as anything else.  Failure to test systems could be seen as negligent when dealing with sensitive data.
    I would also consider these issues:
    In more normal circumstances, if I receive advertising based on incorrect assumptions of my interests as a result of data collection practices, two things happen:
    
    1. My time is potentially wasted
    2. I may have missed out on offers I am interested in.
    
    Can a case for harm be built on that?