
United States Privacy Digest | Notes from the IAPP Publications Editor, Dec. 15, 2017


Greetings from Portsmouth, NH!

As we near the end of the year, you can be sure the Federal Trade Commission has hosted a roundtable touching upon privacy. They seem to do so just about every December. I remember going to my first FTC roundtable in 2011: a full day of discussion and debate around the promise and peril of facial recognition and detection technology. It was a burgeoning field six years ago, but with the introduction of Apple's Face ID in its new iPhone X, as well as the technology's use in airports around the world, facial recognition and detection are quickly becoming a normal part of society.

Of course, its use as a biometric authenticator has real value in securing our phones and assisting national security, but as was forewarned at the FTC event near the beginning of this decade, facial recognition can be used to enhance a surveillance state, too. We saw first-hand evidence of that this week from BBC News journalist John Sudworth. China is currently building one of the world's largest and most sophisticated CCTV camera systems. Many of these cameras feature facial recognition technology and artificial intelligence, and the government's network includes a picture of every resident. In a short video, we see Sudworth provide authorities with his headshot before going out into public to see how long it would take the system to find him.

It took seven minutes. 

Of course, the privacy implications are self-evident. For one, there is no longer any semblance of anonymity in public. Chinese authorities say they use the system only to locate criminals or to help people, and though that may be true, it doesn't take much to imagine how this system could serve nefarious, anti-democratic purposes. How soon will such a system make its way to other countries, including the U.S.?

This year's FTC roundtable, however, touched upon a hugely important concept in U.S. privacy law: the difficulty of defining "harm" or "injury" in data breach cases, particularly when no economic damage has occurred. The Privacy Advisor Editor Angelique Carson was on site for this week's "informational injury" roundtable and does a fantastic job summarizing the day's discussions. As she points out, a "hypothetical" injury is "not so hypothetical to a victim [of domestic violence] whose home address may be floating in the ether." Likewise, as panelist Lauren Smith pointed out, automated decision-making, "frequently employed by marketers, law enforcement agencies, and human resource departments, among others, can negatively impact the consumer."

Angelique also reports on how the legal concept of "harm" has evolved in U.S. courts. Defining and measuring harm in the legal system is complex, and as she points out, the FTC has a "difficult job ... ahead of it in determining at what point it may step in to regulate business practices that may cause injury under the unfairness standard in Section 5 of the FTC Act." 

Since 2011, I've been fascinated by these roundtables. They serve as a robust information-gathering crash course on significant privacy issues, but they also point toward the future and what will come next. Where will the court system be in determining informational injury six years from now? With the inevitable normalization of advanced technologies like facial recognition, artificial intelligence, and automated decision-making, measuring harm in the legal system will continue to be a significant issue for disadvantaged and vulnerable communities and, by extension, our society as a whole.

You can rest assured that we here at the IAPP will continue to cover these important topics as they evolve.  
