
Privacy Perspectives | What privacy pros can take away from Uber's Greyball


Just because it's legal doesn't mean it's not stupid.

That's an adage that is hopefully familiar to all privacy professionals. It's certainly something we talk a lot about in the office. I think we're learning that applying ethical considerations to your products and services, when adopted as a corporate philosophy, can go a long way toward earning user trust. An article from my colleague, Angelique Carson, CIPP/US, for example, on whether the profession needs an ethical code has generated a lot of interesting discussion about a privacy pro's obligation not simply to advise a company on what is legal, but to advocate on behalf of the consumer.

In particular, advanced technology is putting new pressure on those ethical considerations. Algorithms, machine learning, neural networks, deep learning, and all the other techniques gathered under the broader term artificial intelligence are being employed by more and more companies around the world.

These techniques have the power to carve out efficiencies, personalize services, and increase profits. Using and improving upon them makes sense, but can they go too far?

Most certainly. 

The New York Times reported Friday on a program developed by Uber called Greyball. It combined data collected from the Uber app with "other techniques" to locate, identify, and circumvent legal authorities. Through several means, the company surveilled government officials in order to evade regulatory scrutiny and other law enforcement activity.

To be fair, Times reporter Mike Isaac points out, "the practices and tools were partly born out of safety measures for drivers in certain countries." The company has said "it was also at risk from tactics by taxi and limousine companies in certain markets" and "Greyballing started as a way to scramble the locations" of drivers "to prevent competitors from finding them." 

But Greyball appears to have grown to include measures that are legally and ethically dubious. Here are some examples:

One method involved drawing a digital perimeter, or 'geofence,' around authorities' offices on a digital map of the city that Uber monitored. The company watched which people frequently opened and closed the app — a process internally called 'eyeballing' — around that location, which signified that the user might be associated with city agencies.

Other techniques included looking at the user's credit card information and whether that card was tied directly to an institution like a police credit union.

Enforcement officials involved in large-scale sting operations to catch Uber drivers also sometimes bought dozens of cellphones to create different accounts. To circumvent that tactic, Uber employees went to that city's local electronics store to look up device numbers of the cheapest mobile phones on sale, which were often the ones bought by city officials, whose budgets were not sizable.

If those clues were not enough to confirm a user's identity, Uber employees would search social media profiles and other available information online. Once a user was identified as law enforcement, Uber Greyballed him or her, tagging the user with a small piece of code that read Greyball followed by a string of numbers.

The company could then take that identified user and "scramble a set of ghost cars inside a fake version of the app for that person or show no cars were available at all." They even went as far as instructing drivers to end a ride if an official got picked up!
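To make the mechanics concrete, here is a minimal sketch in Python of the kind of geofence-and-eyeballing heuristic the Times describes. Everything in it is an assumption for illustration: the coordinates, thresholds, tag format, and function names are invented, not drawn from Uber's actual code.

```python
# A hypothetical sketch of the detection heuristic described in the Times
# report: geofence a government office, count rapid app opens ("eyeballing")
# inside the fence, and tag suspect accounts. All names, values, and the tag
# format are illustrative assumptions.
import math
from collections import defaultdict

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

# Hypothetical geofence: a circle around a city agency's office.
GEOFENCE_CENTER = (45.5155, -122.6789)   # placeholder coordinates
GEOFENCE_RADIUS_KM = 0.5
EYEBALL_THRESHOLD = 10                   # app opens before an account is flagged

open_counts = defaultdict(int)

def record_app_open(user_id, lat, lon):
    """Count app opens inside the geofence; return a tag once the threshold is hit."""
    if haversine_km(lat, lon, *GEOFENCE_CENTER) <= GEOFENCE_RADIUS_KM:
        open_counts[user_id] += 1
        if open_counts[user_id] >= EYEBALL_THRESHOLD:
            # The report says flagged accounts were tagged with "Greyball"
            # followed by a string of numbers; this exact format is invented.
            return f"Greyball{user_id:08d}"
    return None

if __name__ == "__main__":
    tag = None
    for _ in range(12):  # one user repeatedly opening the app near the office
        tag = record_app_open(42, 45.5157, -122.6790) or tag
    print(tag)  # -> Greyball00000042
```

Once an account carried such a tag, serving it a "fake version of the app" would be a simple conditional on the server side, which is what makes this kind of system so easy to build and so easy to abuse.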

Scouring social media sites, attaching identifying code, buying up burner phones, geofencing: these are all privacy-related practices that, though legal in some cases, go a long way toward damaging consumer trust, not to mention the trust of law enforcement. Regulatory officials and law enforcement officers are people with privacy rights, too.

I'm not here to pick on Uber, however. There are plenty of other people out there to do that. My point here is that companies, perhaps the very company you're working for, may engage in activities that you could reasonably determine hurt the privacy of others, even though they're not technically illegal. As a privacy professional, what's your obligation? Clearly, some employees at Uber felt they had to tell someone.

The sheer power of technology, and its potential effect on large swaths of people — especially minorities and people of color — can be very dangerous. 

We saw it this past week in two separate stories. The Center for Democracy & Technology, together with 16 other organizations, has written to 50 or so data brokers asking them to pledge not to help build a Muslim registry for the Trump administration. Data brokers have incredibly powerful tools to categorize and identify individuals, the effects of which could be life-altering for huge portions of our population and deleterious to human rights.

The Intercept also reported that technology from Palantir, a company that produces powerful surveillance tools, is being used by the Trump administration to identify people for deportation. Of course we want to identify dangerous people and terrorists, but at what cost? If technology can be used to identify one type of person, it can be used to identify any one of us.

Companies will understandably continue to develop the best technology they can, but some of that technology may well have the ability to hurt a lot of people. Privacy pros can and should be a voice of reason, an ethical compass, in how those technologies are deployed and used. 

photo credit: Hunky Punk In orbit: Seaweed "beach balls" on the coast of Sicily via photopin (license)
