
Privacy Perspectives | The Truth Is Out There: Compliance and Security Are Not Enough




Facebook’s announcement yesterday of newly adopted guidelines and a review board for data research projects is the latest evidence of a new privacy truth: Legal compliance and sound security practices are not sufficient to meet societal expectations. With companies becoming laboratories for big data research, data ethics have become a critical component of corporate governance frameworks. Companies cannot view privacy as a compliance matter to be addressed by legal departments or a technical issue handled by IT. Rather, to avert public embarrassment and consumer backlash, they must consider ethical review processes and instill issue-spotting skills in employees throughout the organization.

Facebook’s move was driven by the criticism leveled against its experiment on user feelings by tweaking their News Feeds. Facebook did not break any laws. In fact, it has shown that it complied with its data use policy. Data security was not breached. And yet users, privacy advocates and the media were up in arms. As a market innovator – and a company whose very business is individual data – Facebook blazes a trail that companies in every industry will soon follow: creating internal standards for ethical data management.

The ethical implications of big data analysis can be elusive. Consider a story recently published in Vox about a researcher with a PhD in biology who underwent genetic testing using a 23andme kit. One of the features of the service, enabled through customers’ opt-in consent, is a social media-type function that connects people based on the results of their genomic tests, including people who are directly related to them. Despite his expertise in biology, the researcher did not expect that one of the discoveries from the test would be a half-brother, sired by his father in secret. The discovery led to family turmoil, culminating in his parents’ divorce. An internal review of the social (and familial) implications of the data matching service might have averted this tangible privacy harm. The result of such a review need not have been disabling the feature; perhaps users could simply have been alerted, in vivid terms, to its potential ramifications.

The line between wholesome and untoward practices is fickle. Even as Facebook was being castigated for its research, OK Cupid’s co-founder Christian Rudder proudly announced in a blog post, “We experiment on human beings.” He reported that the online matchmaking service had sent users on algorithmically determined “bad dates” in order to gauge whether the “conversion rate” of such dates would be worse than that of “good” ones. Not only was there no customer backlash, but Mr. Rudder has even published a New York Times best-selling book describing his big data research. Indeed, Facebook itself has experimented with user data in the past, boosting the proportion of organ donations among users by adjusting their profile settings to allow them to publicly display their status as organ donors. It is difficult to conjecture why one experiment is received more favorably than another. Perhaps users of online dating sites are more experimental in nature or more cavalier about privacy; or maybe the information they post is not as accurate as that posted on Facebook.

The lesson is clear: To navigate the treacherous waters of big data practices, companies must develop internal competency for ethical review. To do so, they must train personnel to spot privacy issues and flag concerns. Managing privacy and ethical data practices within an organization entails becoming steeped in a growing interdisciplinary body of knowledge, and maintaining a firm grasp of new developments in technology, business and law. Sound data management practices are not common knowledge. They require extensive training, continuous education and a verifiable method of certifying skills.

Indeed, just as qualified civil engineers build bridges and certified dentists perform root canals, so too should ethical data management be entrusted to duly qualified, adequately trained professionals who have the skillset needed to make decisions that impact the lives of individual consumers.



  • comment Craig • Oct 3, 2014
+1. OTA has been advocating this for the past 5 years. Industry must move from a check-box compliance mentality to the long-term view of stewardship. This is good for their brand and, most importantly, good for consumers. The challenge is that this will be a sea change for trade orgs and others who defend the wild west.
  • comment James • Oct 3, 2014
    I completely agree with the main claim, that compliance is not enough. However, I was disappointed at how the article ended with what looked like a plug for the IAPP's certifications. Is the IAPP now claiming that it teaches applied ethics? Applied ethicists in health and other domains tend to have formal university education in relevant areas (along with peer reviewed publications), and not simple certificates that require only multiple choice exams to obtain.
  • comment Malcolm • Oct 3, 2014
This has been coming for a very long time: as we made clear when I was Privacy Commissioner of Australia more than a decade ago, "Good Privacy is Good Business". What has surprised me is how long it has taken for this to be recognised by business. As Cam Kerry noted in August, it's "not really about what the company can do but what it should do". If businesses that want to explore the frontiers don't establish credible ethics processes, they run the serious risk of having such processes imposed on them, and when that happens they will have nobody to blame but themselves.
    Malcolm Crompton
    Managing Director
    Information Integrity Solutions
  • comment Stephen • Oct 3, 2014
Sorry, Facebook's announcement isn't much evidence of anything at all. The NYT article that you link to above says in its third paragraph:
"But no outsiders will be invited to review Facebook’s research projects, and the company declined to disclose what guidelines it would use to decide whether research was appropriate. Nor did it indicate whether it would seek consent from users for projects like the emotion study ..."
Facebook claimed of its A/B emotion study that informed consent was implied in the Data Usage Agreement. Buried within thousands of words is a sentence *fragment* that gives Facebook license to do research. Contrast this with the consent forms of serious human experimentation, where researchers are at pains to make sure subjects are in no doubt whatsoever about what's being done to them.
    So what is Facebook really proposing to do differently? We just don't know yet.
  • comment Ruby • Oct 6, 2014
I totally agree. The legal dept. can't go it alone. What's necessary is a tone from the top, agreement on the goal, and a decision on where you want to draw the line on the ethics behind your brand.