I recently took part in a meeting at the Atlanta IAPP KnowledgeNet on the ethics of privacy. Led by Contrast Security Data Privacy Director Suzette Corley and OneTrust CEO Kabir Barday, CIPP/E, CIPP/US, CIPM, CIPT, FIP, the meeting wasn’t so much a lecture as an exercise. Small groups of privacy professionals discussed the ethics of three scenarios. For me, the actual substance of the exercise wasn’t as important as the insight I gained into the privacy professionals’ thought processes.

My biggest takeaway was how much participants relied on gut reaction or intuition rather than appealing to a concrete ethical framework.

The table at which I sat disagreed on whether there was an ethical issue presented in the first scenario (a data-sharing agreement between two companies for joint marketing opportunities). Those suggesting it presented no ethical issue responded with something akin to “it doesn’t feel like there is an ethical or moral issue here.” When pressed on why they thought the scenario was unproblematic, they pointed only to dishonesty and, maybe, fiduciary duty as guiding moral tenets. The scenario suggested no deceit or heightened duty, such as one might find in the legal profession; therefore, in their view, there was no ethical issue to untangle.

In his introduction to the exercise, Barday mentioned a client who was taking a harms-based approach to privacy impact assessments in their company and suggested this might be a way to tackle the scenarios. Consequentialist ethical theories base the morality of one’s actions on the consequences of those actions. This approach remains popular with attorneys in the U.S. because it aligns with the law’s demand for "harm" or damages in litigation around privacy. I’ve previously voiced my objection to the consequentialist approach in privacy risk assessment because it incentivizes violations of social norms if one can mitigate the tangible harms.

In contrast to consequentialism, deontological ethics focuses on determining right or wrong based on whether an action adheres to a set of rules. In privacy, there are many potential rule books, almost all based on social norms, including:

  • Woody Hartzog’s privacy pillars: obscurity, trust and autonomy.
  • Alan Westin’s states of privacy: reserve, anonymity, solitude and intimacy.
  • Ryan Calo’s objective and subjective harms.
  • William Prosser’s privacy torts: false light, intrusion upon seclusion, public disclosure and appropriation.
  • Dan Solove’s taxonomy of privacy: 16 harms grouped under information collection, information processing, information dissemination and invasion.
  • Helen Nissenbaum’s contextual integrity.

In my current work assisting a client with aligning their privacy program to the new NIST Privacy Framework, I consolidated the major privacy value systems identified above into five privacy values for the organization. These privacy values were also informed by the organization’s two broad business objectives, which helped frame and prioritize the importance of the underlying rules. These values will then be used to determine a "target profile" of activities under the NIST Privacy Framework.

The target profile supports the underlying value system. By way of example, Prosser’s false light and Solove’s distortion and increased accessibility (both forms of information dissemination) were combined with the business objectives to create a privacy value of fair portrayal. The privacy framework subcategory of CT.DP-P6 (data processing is limited to what is relevant and necessary for a system/product/service to meet mission/business objectives) was then chosen as one of the activities to support the value fair portrayal.
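
To make the mapping concrete, here is a minimal sketch in Python of how such a target profile might be represented: a simple lookup from each privacy value to the framework subcategories chosen to support it. Only the fair portrayal/CT.DP-P6 pairing comes from the example above; the other value names and subcategory identifiers are hypothetical placeholders, not the client's actual values.

    # A minimal sketch of a "target profile": privacy values mapped to the
    # NIST Privacy Framework subcategories chosen to support them.
    # Only "fair portrayal" -> CT.DP-P6 comes from the example above; the
    # other entries are hypothetical placeholders.
    target_profile = {
        "fair portrayal": ["CT.DP-P6"],      # limit processing to what is relevant and necessary
        "obscurity": ["<subcategory TBD>"],  # placeholder value and mapping
        "autonomy": ["<subcategory TBD>"],   # placeholder value and mapping
    }

    def activities_for(value):
        """Return the framework subcategories selected to support a given privacy value."""
        return target_profile.get(value, [])

    print(activities_for("fair portrayal"))  # ['CT.DP-P6']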

Broadly, fair portrayal extols the importance of accuracy, relevance and timeliness in representations. Privacy professionals may recognize this as the Organisation for Economic Co-operation and Development principle of "data quality." But notice the distinction: "Data quality" focuses on the data and is value-neutral. A business could desire data quality out of pure self-interest. "Fair portrayal" focuses on the person and how we feel people should be treated when representing them to others.

Word choice matters.

The point of this post is to suggest that, rather than trying to argue the ethics of activities in a vacuum, using gut reactions and a response of “I think it is (right or wrong),” you should pick an ethical framework before performing an analysis.

By picking a framework with your organization upfront, you can articulate your analysis of any particular scenario based on that framework. The discussion then becomes less about what is ethical or unethical and more about whether the scenario meets the threshold under your agreed-upon values. Articulation and alignment to a predetermined framework are key when working with others, lest you descend into irreconcilable disagreement based on conflicting gut reactions.

I also want to caution readers against using fair information practices as an ethical framework. This is a frequent mischaracterization and misuse. The FIPs are a set of practices or common actions meant to achieve fairness — fairness here being the ethical goalpost.

While the FIPs may represent rules of behavior, they are value-neutral. Transparency can be used to coerce, manipulate and demolish autonomy by instilling fear and driving obsequiousness. Choice can be overwhelming, giving one an illusory sense of control. Access may only heighten one’s despair at the seeming omnipotence of the organization, leaving one with a feeling of helplessness and a desire for reclusion.

If the primary threat is the organization itself, security against outside threat actors serves the interest of the organization more than the interest of the individual. Finally, accountability doesn’t promote privacy unless the underlying actions for which one is held to account are privacy positive.

While the FIPs or OECD Principles can support robust privacy, rote application will not. They must be underpinned by a value system.

Photo by Dayne Topkin on Unsplash 

R. Jason Cronk is the author of “Strategic Privacy by Design,” a guide to implementing privacy by design.



6 Comments


  • Jim Miles • Jan 8, 2020
    Very valuable and enlightening perspective (although I did have to look up the definition of deontological :-)).  Would you be willing to describe the other 4 privacy values you derived?
  • Robert Doherty • Jan 8, 2020
    That was a very thought-provoking opinion piece, Jason, and the historical summaries of the views of the privacy gurus of years past, while complicated, did effectively emphasize the point you were making. In my practice, I don't frame the issue in ethical terms, but in effect I work with clients to develop a set of principles that govern where they want to be at the end of the PIA and any mitigation strategy. This, as you imply, is not the same as relying on the OECD Fair Information Practices. A good example would be the concept of community privacy among indigenous people, where a community provides the basis for medical research. Despite the fact that individual data can be anonymized in such studies, there might be the potential for bad experiences or racial profiling where a particular community might be identified with respect to the substance of a health study. I am not saying this happens, merely that appropriate principles developed at the beginning would steer the ownership, language, and distribution of any results.
    
    Bob Doherty
  • R. Jason Cronk • Jan 8, 2020
    Jim, thanks for the comment. The others are much more proprietary, as they take in the business objectives of the organization as well. They may be published at some future date, but I wouldn't hold my breath.
    
    Robert, you bring up an excellent point that societal risks are something to factor in as well. In the privacy risk framework I use (based on Factor Analysis of Information Risk, aka FAIR), I do consider the secondary consequences, including individual, organizational and societal consequences. In that respect, I’m not ignoring the consequences; I’m just framing them in terms of consequences stemming from some violation of a social norm.
  • Emma Butler • Jan 13, 2020
    I agree completely that a framework is needed. Not only to avoid gut reactions but because gut reactions are based on the worldview and experiences of the person expressing them. There are real risks of overlooking risk, harms and unfairness when you rely on individuals' personal views to inform whether something is ethical or not. Developing a framework needs a diverse range of views and experiences but, once done, should enable a more consistent approach and decision-making.
  • Derek Eng • Jan 13, 2020
    Thanks for the great piece.   In my line of work we often run into situations or scenarios that are technically legal,  but don't "feel" right.  In Canada, health care is publicly funded, so how patient information is treated ought to go above the bare minimum legal requirements.  Having an ethical framework helps when trying to articulate/justify why a course of action should be considered appropriate or not.
  • Tonia Schneider, MSI CRM, CIPM, IGP • Jan 14, 2020
    The ethics discussion is useful and extremely necessary. I do believe that companies should have frameworks, but they may need to be different, based on whose data is being used. Because the root ideas of "privacy" may differ between nations, so might the expectations of the cultural norm. For example, "data privacy" is a human right in Europe but treated more as a property right in the United States. Both are valid concepts, due to the history of both places (i.e., the United States has had some level of privacy rights embedded in its Constitution since the 18th century, while Europe has had issues with governments overextending their reach, as in 1940s Germany). Therefore, what is and isn't ethical may be different in Topeka, Kansas, versus Girona, Spain. The idea of an individual selling their personal information in Europe is mostly irregular, and a company offering money to give up a human right, to (for example) withdraw consent, would most definitely be considered unethical. One cannot sell a human right. However, selling the use of your information to a company, and perhaps losing the ability to retract your consent, in the United States may be viewed as a conscious decision to sell one's personal property. Is either one more ethical than the other? That depends on the cultural norm. If you are a company working in more than one nation, you may need more than one framework, which, of course, complicates matters exponentially.