
Privacy Perspectives | Customers in the Maze: Ethics Heat Up the Flatirons


Remember the good old days, when managing privacy meant just wrapping your mind around defining “personal information”; distinguishing de-identification from pseudonymization; dealing with an alphabet soup of laws and regulations, including HIPAA, FERPA, COPPA, GLBA, VPPA, DPPA, FCRA and FTCA, not to mention 47 state data breach laws; and controlling for the EU DPD with its SCCs and BCRs? And that was just to start.

Those were the good old days.

Over the past couple of years it has become clear that legal compliance and reasonable IT measures are simply not enough. Companies are finding that just because their actions may be legal doesn't mean they're not stupid, immoral, unethical or just plain wrong.

Corporate officers struggle to decipher subtle social norms to make ethical choices that are more befitting of philosophers than business managers or even lawyers.

The ethical dilemmas arising from data practices transcend privacy and trigger concerns about stigmatization, discrimination, human subject research, algorithmic decision-making and filter bubbles. And the trajectory is toward more and deeper ethical dilemmas, as big data analysis and the deployment of the Internet of Things pervade every aspect of our lives—not only online but also in physical spaces like our workplace, public transportation, shopping malls and homes.

Last week, Paul Ohm of the University of Colorado and Ryan Calo of the University of Washington convened an interdisciplinary group of ethicists, sociologists, psychologists, engineers, computer scientists, lawyers and business managers to discuss this brave new world of data ethics at the Silicon Flatirons event When Companies Study Their Customers: The Changing Face of Science, Research and Ethics, at which I spoke.

Ohm has a knack for identifying the burning issue of the day and gathering the top minds in the field to debate it. Over the past few years, he’s organized symposia dedicated to identifying privacy harm, the technology of privacy and the economics of privacy. He explained that he chose the latest topic following the public outrage over news of Facebook’s “emotional contagion study,” a large-scale experiment in which researchers tweaked the dosage of positive or negative comments in users’ news feeds to study the effects. Some commentators scorned Facebook’s actions as an unruly experiment on human subjects without their knowledge or informed consent. Kashmir Hill and James Grimmelmann complained about human beings being treated like guinea pigs. Others suggested that A/B testing and sentiment analysis have always been a staple of the online environment and that Facebook did little wrong, save perhaps assigning a misunderstood title to its research project.

The first session of the conference pitted two of the leading voices of the respective camps against each other: Zeynep Tufekci and Tal Yarkoni. Tufekci, a professor of sociology at the University of North Carolina iSchool, was indignant, arguing that data science gives today’s businesses tremendous power over and insight into individuals’ lives. Companies know about their users’ one-night stands, with whom they associate, what they eat, think or buy. They have the power to sway elections, foment revolution and craft public sentiment all over the world. The infrastructure created today, Tufekci warns, will determine the interactions between individuals, machines, businesses and governments for generations to come. It is reckless to leave the architecture of this terrain, which is rich with moral and ethical values, unregulated and subject to the best intentions of 20-something-year-old entrepreneurs.

Yarkoni, a professor in the Department of Psychology at the University of Texas, was more sanguine. While admitting that technological data-crunching capabilities are fundamentally different today than they were a few years ago, he argued that the ethical implications are not so profound. Companies have been experimenting on consumers for years, for example, by tweaking the design of grocery stores to place certain items at eye level and impulse-buy products near the checkout counter. The question, said Yarkoni, is not whether businesses manipulate consumers but rather what the relative benefits and costs of such actions are.

Interestingly, as Ohm observed, both scholars base their views on a similar premise—that the online world is a contrived environment staged and steered by algorithms and machines. For Yarkoni, this is a reason to shrug off apprehension over manipulation. To use Captain Renault’s protest as he walked into the casino in “Casablanca,” Yarkoni isn’t shocked to find that gambling is going on in this casino. At the same time, for Tufekci, the artificial nature of the platform is cause for even greater concern. As she explains, in the past a retailer could only affect the activity of a shopper at a specific point in time when she entered a store, whereas today, online platforms create an entire virtual reality for users, governing their political and media content and the products and services they can access and see.

Rob Sherman, the deputy chief privacy officer of Facebook, bravely showed up to debate the challenges and dilemmas facing the company as it tries to determine the right thing to do in what is largely uncharted terrain. Sherman introduced the structure and some of the principles underlying the activity of the company’s newly assembled data research review board. As Sherman said, while there has been a lot of reaction to the Facebook study, there is little agreement about what the right thing to do actually is.

The day’s final session addressed possible institutional responses to new data ethics challenges. As Jules Polonetsky, CIPP/US, and I wrote, to distinguish between right and wrong data innovation, we often rely on visceral gut feelings. This explains how the word “creepy” became something of a term of art in the privacy world. Writing about moral decision making, The New York Times’ David Brooks suggested, “Think of what happens when you put a new food into your mouth. You don’t have to decide if it’s disgusting. You just know. You don’t have to decide if a landscape is beautiful. You just know. Moral judgments are like that. They are rapid intuitive decisions and involve the emotion-processing parts of the brain.”

Alas, this approach doesn’t scale. Companies would be ill-advised to appoint a rabbi, priest or philosopher to gauge whether they are crossing moral and ethical boundaries. Rather, as Calo foresaw a couple of years ago, data ethics review committees, or consumer subject review boards, could be charged with vetting data projects that involve innovative or surprising data uses. Such boards would address questions such as whether data research can be conducted without subject consent and, if so, whether subjects must be informed after the fact; how to account for the possibility of harm and measure benefits to individuals and society at large; whether alternative mechanisms could lessen the impact while achieving similar results; whether research can focus on vulnerable populations; and what the guidelines are for collaborating with academic researchers. To allay concerns about transparency, such boards would comprise not only corporate insiders but also outside experts, whose expertise and compensation scheme would assure professionalism and independence.

In this context, Federal Trade Commissioner Julie Brill emphasized that a risk-based approach to regulation will not suffice. Companies cannot be left to their own devices to determine the scope of legitimate data uses. Rather, traditional principles, such as data minimization and de-identification, must be followed in conjunction with internal corporate processes to help calibrate ethical constraints to individuals’ preferences and expectations. In a similar vein, Tufekci and Northwestern’s Brayden King argue in The New York Times that we need to create "information fiduciaries," independent, external bodies that oversee how data is used, backed by laws that ensure that individuals can see, correct and opt out of data collection.

These sorts of review boards and panels are already prevalent in the government-funded academic and medical spheres. The challenge we now face is to adapt them to the freewheeling start-up culture of data innovation.

photo credit: Zach Dischner via photopin cc
