The Changing Nature of Privacy Practice

Nothing has shown how fast privacy practice is evolving quite like Facebook’s recent controversy over its research experiment, which altered users' newsfeeds to see whether different feeds made them happier or sadder.

Crafting privacy policies has long been an exercise in describing uses of information with enough depth and detail to provide full disclosure, yet with enough breadth and generality to leave room for other conceivable uses and, as data use evolves rapidly, some not yet conceived. Facebook’s terms of service allow data use “for internal operations,” including “data analysis, testing, research and service improvement,” so users who consent to these terms arguably consent to being research subjects.

This apparently was the basis on which researchers concluded no further consent was required. At the time of initial publication, they reported the use to be “consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook” and, thus, “informed consent” for purposes of research ethics.

The widespread criticism that followed prompted second thoughts.

The lead researcher posted an explanation on Facebook saying “we … are trying to improve our internal review practices.” The final research was published in the Proceedings of the National Academy of Sciences, accompanied by an editorial noting the controversy and expressing “concern that the collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining informed consent and allowing participants to opt out.” And the adequacy of consent for purposes of the EU Data Protection Directive is being probed by Facebook's European regulator, the Irish Data Protection Commissioner.

But much of the criticism has less to do with parsing the terms of service or Facebook's privacy practices than with research ethics. As one of the authors of the study reported taking away from the hundreds of emails he received after the work was published: “You can’t mess with my emotions. It’s like messing with me. It’s mind control.”

Numerous commenters have observed that Facebook, among many marketers (including political campaigns like U.S. President Barack Obama's), regularly conducts A-B tests and other research to measure how consumers respond to different products, messages and messengers. So what makes the Facebook-Cornell study different from what goes on all the time in an increasingly data-driven world? After all, the ability to conduct such testing continuously on a large scale is considered one of the special features of big data.
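
To make the mechanics concrete, here is a minimal, purely illustrative sketch in Python of the kind of A-B comparison described above: two groups of users see different variants, and their response rates are compared for a statistically meaningful difference. The function name, sample sizes and response counts are all hypothetical, and the simple two-proportion z-test is just one common way to score such a test, not a description of Facebook's actual methods.

```python
# Illustrative A-B test: did variant B produce a different response rate
# than variant A? All numbers below are hypothetical.
from math import erf, sqrt

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two response rates."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled rate under the null hypothesis that A and B respond alike.
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 10,000 users each see variant A or variant B.
z, p = two_proportion_z_test(successes_a=540, n_a=10_000,
                             successes_b=610, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests a real difference
```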

The answer calls for broader judgments than parsing the language of privacy policies or managing compliance with privacy laws and regulations. Existing legal tools such as notice-and-choice and use limitations are simply too narrow to address the array of issues presented and inform the judgment needed. Deciding whether Facebook ought to participate in research like its newsfeed study is not really about what the company can do but what it should do.

As Omer Tene and Jules Polonetsky, CIPP/US, point out in an article on Facebook's research study, “Increasingly, corporate officers find themselves struggling to decipher subtle social norms and make ethical choices that are more befitting of philosophers than business managers or lawyers.” They add, “Going forward, companies will need to create new processes, deploying a toolbox of innovative solutions to engender trust and mitigate normative friction.” Tene and Polonetsky themselves have proposed a number of such tools. In recent comments on Consumer Privacy Bill of Rights legislation filed with the Commerce Department, the Future of Privacy Forum (FPF) endorsed the use of internal review boards along the lines of those used in academia for human-subject research. The FPF also submitted an initial framework for benefit-risk analysis in the big data context “to understand whether assuming the risk is ethical, fair, legitimate and cost-effective.” Increasingly, companies and other institutions are bringing to bear more holistic review of privacy issues. Conferences and panels on big data research ethics are proliferating.

The expanding variety and complexity of data uses also call for a broader public policy approach. The Obama administration’s Consumer Privacy Bill of Rights (of which I was an architect) adapted existing Fair Information Practice Principles to a principles-based approach that is intended not as a formalistic checklist but as a set of principles that work holistically in ways that are “flexible” and “dynamic.” In turn, much of the commentary submitted to the Commerce Department on the Consumer Privacy Bill of Rights addressed the question of the relationship between these principles and a “responsible use framework” as discussed in the White House Big Data Report.

The Consumer Privacy Bill of Rights and a responsible use framework have in common that they move the focus of privacy practices toward outcomes, seeking to shift responsibility for protecting privacy from individuals to institutions. A criticism of the Consumer Privacy Bill of Rights has been that its broad principles leave much to interpretation and application. But there are trade-offs between certainty and creativity, between precision and flexibility; the issues are too diverse for one size to fit all, and increasing responsibility for data controllers and processors also increases complexity. Greater volume, velocity and variety of data mean a greater volume, velocity and variety of challenges in managing data responsibly.

At a recent conference I attended at the MIT Media Lab, one speaker made a suggestion that resonated with the mixed audience of technologists, businesspeople and policymakers: that data scientists should receive training in ethics, power and responsibility. This parallels the recommendation of the President’s Council of Advisors on Science and Technology report on big data that “privacy is also an important component of ethics education for technology professionals.”

Until then, privacy professionals, lawyers and compliance officers are the first line of defense in bringing ethical and prudential judgment to bear. It is up to them to help define the boundaries.

Deciding whether Facebook ought to participate in the newsfeed research calls for standing in the shoes of Facebook users to consider what they reasonably would expect and what would be the impact on the relationship between the user and Facebook and trust in Facebook's service. At bottom, this is a question of context.

The Respect for Context principle in the Consumer Privacy Bill of Rights articulated in the White House privacy blueprint calls for "a right to expect that companies will collect, use and disclose personal data in ways that are consistent with ... both the relationship that they have with consumers and the context in which consumers originally disclosed the data ..." In summarizing this principle, I have often said it boils down to "no surprises." And as much as Facebook users might reasonably be expected to understand that various kinds of market research have become common online and choose to join and share information nevertheless, they also might be surprised to discover that their behavior is not only being observed for purposes beyond Facebook's advertising and marketing services but manipulated as well.

In a sense, today’s uncharted territory brings privacy protection full circle. When Samuel Warren and Louis Brandeis wrote their seminal article on the right to privacy, they inferred the right from the common law. One of the main lines of authority they looked to was the law of implied trust—the principle that one entrusted with confidential information owes a duty to the person whose information it is. Trust law separates legal ownership, custody and control of an asset from its benefit and imposes on the trustee duties to protect the interests of the beneficial owner and avoid self-dealing.

These principles resonate anew today, when trust is an essential feature of a digital world. Trust in this broad sense can find a touchstone in the intuitive principles of the common law. Those who collect information need to act as stewards of data; they owe duties to those from whom the information comes to put the interests of these beneficiaries first, using data in ways that benefit their interests and not in ways that can cause them harm.

Trust law developed the benchmark of “the prudent man” (in those days, they were all men), a person imbued with good judgment. Managers of privacy increasingly are called on to exercise this sort of judgment over a broad range of issues.

1 Comment


  • Martin • Aug 25, 2014
    Cam, this is an excellent article. Good privacy is the absence of inappropriate processing. The law gives some guidance on what processing is inappropriate. And surely individual consent doesn't free us from our responsibility to individuals. I believe we are all coming to grips with the role of balancing of interests in becoming a trustworthy organization. That said, we must develop methodology to make that work.