I recently took part in a meeting at the Atlanta IAPP KnowledgeNet on the ethics of privacy. Led by Contrast Security Data Privacy Director Suzette Corley and OneTrust CEO Kabir Barday, CIPP/E, CIPP/US, CIPM, CIPT, FIP, the meeting wasn’t so much a lecture as an exercise. Small groups of privacy professionals discussed the ethics of three scenarios. For me, the substance of the exercise wasn’t as important as the insight I gained into the privacy professionals’ thought processes.

My biggest takeaway was the participants’ overreliance on gut reaction or intuition rather than appeal to a concrete ethical framework.

The table at which I sat disagreed on whether there was an ethical issue presented in the first scenario (a data-sharing agreement between two companies for joint marketing opportunities). Those suggesting it presented no ethical issue responded with something akin to “it doesn’t feel like there is an ethical or moral issue here.” When pressed for why they thought the scenario was unproblematic, they pointed only to dishonesty and, maybe, fiduciary duty as guiding moral tenets. The scenario suggested no deceit or heightened duty, such as one might find in the legal profession; therefore, in their view, there was no ethical issue to untangle.

In his introduction to the exercise, Barday mentioned a client who was taking a harms-based approach to privacy impact assessments in their company and suggested this might be a way to tackle the scenarios. A harms-based approach is consequentialist: consequentialist ethical theories base the morality of one’s actions on the consequences of those actions. This approach remains popular with attorneys in the U.S. because it aligns with the law’s demand for "harm" or damages in privacy litigation. I’ve previously voiced my objection to the consequentialist approach in privacy risk assessment because it incentivizes violations of social norms if one can mitigate the tangible harms.

In contrast to consequentialism, deontological ethics focuses on determining right or wrong based on whether an action adheres to a set of rules. In privacy, there are many potential rule books, almost all based on social norms, including:

  • Woody Hartzog’s privacy pillars: obscurity, trust and autonomy.
  • Alan Westin’s states of privacy: reserve, anonymity, solitude and intimacy.
  • Ryan Calo’s objective and subjective harms.
  • William Prosser’s privacy torts: false light, intrusion upon seclusion, public disclosure and appropriation.
  • Dan Solove’s taxonomy of privacy: 16 harms grouped into information collection, information processing, information dissemination and invasion.
  • Helen Nissenbaum’s contextual integrity.

In my current work assisting a client with aligning their privacy program to the new NIST Privacy Framework, I consolidated the major privacy value systems identified above into five privacy values for the organization. These privacy values were also informed by the organization’s two broad business objectives, which helped frame and prioritize the importance of the underlying rules. The values will then be used to determine a "target profile" of activities under the framework.

The target profile supports the underlying value system. By way of example, Prosser’s false light and Solove’s distortion and increased accessibility (both forms of information dissemination) were combined with the business objectives to create a privacy value of fair portrayal. The privacy framework subcategory of CT.DP-P6 (data processing is limited to what is relevant and necessary for a system/product/service to meet mission/business objectives) was then chosen as one of the activities to support the value of fair portrayal.
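To make the mapping concrete, here is a minimal sketch, in Python, of how a value-to-activity mapping like this might be recorded. The PrivacyValue structure and target_profile helper are hypothetical names of my own; only the fair portrayal value, its sources and the CT.DP-P6 subcategory come from the example above.

```python
from dataclasses import dataclass

@dataclass
class PrivacyValue:
    """An organizational privacy value distilled from one or more rule books."""
    name: str
    sources: list[str]        # underlying value systems (e.g., Prosser, Solove)
    subcategories: list[str]  # NIST Privacy Framework subcategories chosen to support it

# The one mapping described above; the other four values would follow the same shape.
fair_portrayal = PrivacyValue(
    name="fair portrayal",
    sources=["Prosser: false light", "Solove: distortion", "Solove: increased accessibility"],
    subcategories=["CT.DP-P6"],  # processing limited to what is relevant and necessary
)

def target_profile(values: list[PrivacyValue]) -> set[str]:
    """The target profile is the union of subcategories supporting the chosen values."""
    return {sub for value in values for sub in value.subcategories}

print(target_profile([fair_portrayal]))  # {'CT.DP-P6'}
```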

Broadly, fair portrayal extols the importance of accuracy, relevance and timeliness in representations. Privacy professionals may recognize this as the Organisation for Economic Co-operation and Development principle of "data quality." But notice the distinction: "Data quality" focuses on the data and is value-neutral. A business could desire data quality out of pure self-interest. "Fair portrayal" focuses on the person and how we feel people should be treated when representing them to others.

Word choice matters.

The point of this post is to suggest that, rather than arguing the ethics of activities in a vacuum, using gut reactions and a response of “I think it is (right or wrong),” you should pick an ethical framework prior to performing an analysis.

By picking a framework with your organization upfront, you can articulate your analysis of any particular scenario based on that framework. The discussion then becomes less about what is ethical or unethical and more about whether the scenario meets the threshold under your agreed-upon values. Articulating and aligning to a predetermined framework is key when working with others, lest you descend into irreconcilable disagreement based on conflicting gut reactions.
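For a sense of what that threshold conversation could look like, here is a small, purely illustrative sketch: the framework questions, the value names and the answers are all invented. The point is only that disagreement gets channeled into concrete yes/no judgments against pre-agreed values rather than competing gut reactions.

```python
# Hypothetical agreed-upon framework: each value becomes a concrete question
# a team answers about a scenario, instead of debating ethics in a vacuum.
FRAMEWORK = {
    "fair portrayal": "Would individuals be portrayed inaccurately or out of context?",
    "autonomy": "Would the processing undermine individuals' freedom to choose?",
}

def assess(answers: dict[str, bool]) -> list[str]:
    """Return the agreed-upon values a scenario is judged to violate."""
    unknown = answers.keys() - FRAMEWORK.keys()
    if unknown:
        raise ValueError(f"answers outside the agreed framework: {unknown}")
    return [value for value, violated in answers.items() if violated]

# The joint-marketing data-sharing scenario, with made-up answers for illustration.
print(assess({"fair portrayal": False, "autonomy": True}))  # ['autonomy']
```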

I also want to caution readers against using the fair information practices as an ethical framework. This is a frequent mischaracterization and misuse. The FIPs are a set of practices or common actions meant to achieve fairness, with fairness being the ethical goalpost.

While the FIPs may represent rules of behavior, they are value-neutral. Transparency can be used to coerce, manipulate and demolish autonomy by instilling fear and driving obsequiousness. Choice can be overwhelming, giving one an illusory sense of control. Access may only heighten one’s despair at the seeming omnipotence of the organization, leaving one with a feeling of helplessness and a desire for reclusion.

If the primary threat is the organization itself, security against outside threat actors serves the interest of the organization more than the interest of the individual. Finally, accountability doesn’t promote privacy unless the underlying actions for which one is held to account are privacy-positive.

While the FIPs or OECD Principles can support robust privacy, rote application will not. One must have a value system underlying them.
