
Privacy Perspectives | About Those Facial Recognition Experiments at Your Nearest Music Festival


From the Monterey Pop Festival and Woodstock in the 1960s to Coachella and Lollapalooza today, summer is a great time to enjoy a collection of your favorite bands with friends and fellow music fans. Also, there’s the partying. But that’s your own business, right?

Well, perhaps not. (Hey, look, Malia Obama was at Lollapalooza last weekend.)

Dig Boston posted an investigative report on “a sophisticated new event-monitoring platform” that was used during two Boston Calling music events, one each in May and September of 2013. According to the report, the city of Boston, MA, used software and other equipment that “gave authorities a live and detailed bird’s eye view of concertgoers, pedestrians and vehicles” near City Hall. Using at least 10 smart cameras, authorities were able to spot suspicious activity, screen individuals for identification and run real-time video analytics.

Documents unearthed by the reporters at Dig include plans to use “Face Capture” on “every person” at the Boston Calling concert while another sensitive document defined a person of interest “as anyone who walks through the door.”

This Intelligent Operations Center, designed and licensed by IBM, combined the company’s Smart Surveillance System with its Intelligent Video Analytics software.

(The original post embedded a video example of the video analytics.)

Was a reasonable expectation of privacy being violated by the City of Boston here?

I would say a resounding yes. And I say that a year and a half after this great city endured the senseless and utterly disgusting Boston Marathon bombing. In fact, according to the report, a beta version of the system was in use on that terrible day.

Video images of the bombers certainly helped flush the terrorist brothers out of hiding, and that’s a good thing, but there really should be a level of transparency with this kind of surveillance.

As the Dig report notes, it was not a secret that the city had teamed up with IBM as part of its Smarter Cities initiative to help the city “engage its citizens and more efficiently deliver municipal services” as well as “traffic management” and to cultivate a “healthier environment.” But it does not appear that folks attending the Boston Calling music festival knew it included cataloging every one of the concertgoers.

Don’t get me wrong, I certainly applaud initiatives to improve efficiencies—both for ease of movement and improved lifestyle—but living in a panopticon is no way for a democracy to thrive. And though the National Telecommunications and Information Administration (NTIA) is currently hammering out a code of conduct for the commercial use of facial recognition (check out my colleague Angelique Carson’s coverage here), such a code would not apply in this case because it’s government use. Plus, in one of the latest NTIA hearings, one representative body in the biometrics industry asserted that individuals lose their right to anonymity in public. The backlash against the group’s comments clearly demonstrated that many disagree with such thinking.

When Dig reached out to the Boston Police Department (BPD), a spokesman denied the agency was part of such an initiative. The report claims otherwise. If true, the BPD missed an opportunity to come clean and help citizens understand how this technology works and why it’s being used.

Eventually, the mayor’s office admitted the city used the pilot program with IBM. The mayor’s press secretary explained, “The purpose of the pilot was to evaluate software that could make it easier for the city to host large, public events, looking at challenges such as permitting, basic services, crowd and traffic management, public safety and citizen engagement through social media and other channels.” The official also said the city does not plan any long-term use of the software because “we have not seen a clear use case for this software that held practical value for the city’s public safety needs.”

It’s good to know the mayor’s office was forthcoming, but why experiment on the public before receiving their input? Yes, at the cost of many concertgoers’ privacy, the city may have learned that the technology wasn’t worth it. But we’ve seen the reaction to Facebook’s emotional-contagion experiment and to OKCupid’s user-matching experiments. It’s clear people don’t like having their privacy toyed with without their knowledge.

So-called “smart cities” are definitely here—and if they help me get from one side of the city to another more efficiently and save on fuel for municipal services, well, I think that’s great—but society needs to have a role here. City and law enforcement officials need to be more transparent about how they’re developing and implementing security at the expense of privacy. In an open society, people have a right to know what is being done and to act on that knowledge.

As we’ve seen with drone technology, the panic that comes from burgeoning technology—if not dealt with transparently—can cause a backlash that ultimately could take away many of the benefits and efficiencies provided by that very technology. I mean, look, even Kanye West is scared drones will electrocute his daughter.

I want to prevent another senseless bombing as much as the next person—I really do—but I also want to be able to enjoy, say, some live music without feeling like I’m in some Orwellian landscape. Society has a role to play in judging where this line between security and privacy resides.

Being transparent is a good and necessary first step.
