
United States Privacy Digest | Notes from the IAPP Publications Editor, May 18, 2018


Greetings from Portsmouth, NH!

It’s amazing how quickly digital technology has enmeshed itself in our lives. When the Obama administration started studying how big data analytics and artificial intelligence would affect the country and its workforce just a few years ago, the exercise seemed, in part, theoretical. True, big data analytics has been around for a while, but it, along with what we call AI, takes on a greater role each year as we use more internet-of-things devices. Smartphones are practically ubiquitous, modern cars are internet-connected, and homes and even cities are becoming “smarter.” Heck, even e-cigarettes can allegedly track smoking habits! Social media and the news cycle now contend with adversarial manipulation — I’m talking Russian bots here — making the filter bubbles we warned about years ago all the more dangerous, and now weaponized.

This all goes to demonstrate the important role ethics must play in the development of and decisions around the use of digital technology and online platforms.

One story that stuck out to me this week was a report from The Washington Post on a decision in recent months by Immigration and Customs Enforcement to end — at least for now — its pursuit of machine-learning technology that could assist in President Donald Trump’s call for more “extreme vetting” of immigrants and visitors to the country. The goal, according to the report, was to onboard technology that could determine whether a person might commit a criminal or terrorist act, or decide whether the individual is a “positively contributing member of society.” The potential technology would mine data on the internet, including social media posts, to make these predictions, in order to flag people for deportation and visa denials.

After gathering “information from industry professionals and other government agencies on current technological capabilities,” however, the agency decided to shift “from a technology-based contract to a labor contract.” Why? The “significant cost” of such a technology, plus “a cumbersome internal-review process to ensure it would not trigger privacy or other legal violations” or be redundant with existing technology.

This is a small, and perhaps short-lived, victory for privacy in the U.S. Placing important life decisions in the hands of opaque algorithms with little public oversight is bad for our democracy.

Just look at what happened in the U.K., where facial recognition technology was used as part of the surveillance of last year’s UEFA Champions League final in Cardiff. According to South Wales Police, more than 2,000 people were wrongly identified as potential criminals at the game. That works out to a 92 percent false-positive rate.
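For readers curious how a figure like that is derived: the false-positive rate here is simply the share of all alerts that turned out to be wrong matches. Here is a minimal sketch; the counts used are hypothetical placeholders chosen only to be in the same ballpark as the figures cited above, not the official South Wales Police numbers.

# Sketch of the false-positive-rate arithmetic behind a claim like the one above.
# The alert counts below are illustrative placeholders, not official figures.
def false_positive_rate(false_alerts: int, true_alerts: int) -> float:
    """Share of all alerts that were wrong matches."""
    total_alerts = false_alerts + true_alerts
    return false_alerts / total_alerts

# Roughly 2,200 wrong matches out of about 2,400 total alerts
# would come to a false-positive rate of about 92 percent.
print(f"{false_positive_rate(2200, 200):.0%}")  # -> 92%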

Whether we’re talking about government or private industry, the use of big data and AI technology is happening in the real world every day, and it’s affecting people’s lives on an ever-growing scale. This isn’t just about individual privacy; it’s about the health of our society as a whole. As evidenced by the WaPo report on ICE, privacy rules may have played a role in preventing the potential use of a powerful predictive analytics tool. Let’s hope the sophistication of privacy rules, and of the privacy pros who shepherd important decisions within organizations, continues to grow as fast as the technologies emerging throughout the world.
