
Privacy Perspectives | Utilizing PIAs to limit institutional discrimination and bias


I view privacy as sitting at the convergence of what is legal, what is possible and what is ethical in handling the information that makes a person unique. While there are various forms of privacy, I’m going to focus on information privacy because it is perhaps the easiest to conceptualize in this situation.

As a privacy community, we seem to have a firm grasp on what is legally permissible when using a person’s data. Likewise, if you have ever worked with an IT department, they are rightly quick to tell you when something is not possible or probable given the current state of technology. 

The question of ethics, on the other hand, often slips by the wayside. In other words, when something is legal, possible and profitable, privacy in the U.S. is known to take a backseat. If the past weeks have shown us anything about our jobs as privacy professionals and the rights and liberties we help protect every day, it is that we need only look at the news to see a multitude of privacy issues stemming from profiling, targeting and, at times, brutality, inequality and injustice carried out by way of institutionalized discrimination.

One of the main tenets of ethics in privacy is preventing harm to the individual. This concept is well ingrained in the field of privacy. As a profession, we know how to embed it into our privacy impact assessments. We know how to look at whether the use of data causes physical, financial or reputational harm, or embarrassment, to the individual and to our company.

However, should we look only at harm to the individual, or should we also look at harm to a class of individuals based on the personal data that is used?  

I would argue our ethical duty is both. While we are required to ensure that an individual person is not harmed, we should also be ensuring that classes of individuals are not harmed, because when a class is harmed, an individual is ultimately harmed, in a systemic way, through their identification as a member of that class. If we are looking for equality and fairness, it is imperative that we include institutionalized discrimination and bias in our understanding of privacy risk.

What is institutionalized discrimination?

Institutionalized, or systemic, discrimination “involves patterns, practices, or policies where the alleged discrimination has a broad impact on an industry, profession, company, or geographic area.”

Two of the most prevalent forms of institutionalized discrimination are racism and sexism. Moreover, according to the Oxford Dictionary of Sociology, studies about societal behavior have shown that “discrimination against some groups in society can result from the majority simply adhering unthinkingly to the existing organizational and institutional rules or social norms. Prejudice, stereotyping, and covert or overt hostility need not be factors in the exploitation of one group by another, or in the unfair distribution of rewards.”  

For example, unconscious bias can lead a person to act in a specific way without intent. This is the current debate around facial recognition programs that identify different races and genders with varying degrees of accuracy, with accuracy highest for white males, the group that has traditionally built such products.
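
To make the accuracy-disparity concern concrete, the sketch below shows one way a reviewer might measure identification accuracy per demographic group during an assessment. The column names and data are hypothetical, and the snippet is an illustrative sketch, not any vendor's actual evaluation method.

```python
# Illustrative sketch only: surfacing the kind of accuracy disparity
# described above. Column names ("group", "label", "prediction") are
# hypothetical placeholders for an evaluation dataset.
import pandas as pd

def accuracy_by_group(df: pd.DataFrame) -> pd.Series:
    """Return classification accuracy broken out by demographic group."""
    correct = df["prediction"] == df["label"]
    return correct.groupby(df["group"]).mean().sort_values()

# Made-up evaluation results for two groups
results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "label":      [1, 0, 1, 1, 0, 1],
    "prediction": [1, 0, 1, 0, 0, 0],
})

# A large gap between groups is the kind of class-level finding
# a PIA could flag before a product ships.
print(accuracy_by_group(results))
```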

In recent weeks, this exact problem has caused significant reputational damage for the companies offering such products and has fed public distrust. Medical artificial intelligence technologies that favor a specific gender, race or culture, producing better medical advancements and outcomes for some people based on how their personal data classifies them, are yet another example.

What is a PIA?

People put their trust in organizations, and ultimately in privacy professionals, to handle their data both legally and ethically. One industry-standard tool privacy professionals use to limit both harm and risk is the PIA.

A PIA is a documented analysis tool for identifying and reducing the privacy risks of processing data related to a person. It is effective both for identifying privacy issues and for prescribing remedial or mitigating actions so that a project, product or service does not violate a person’s rights, company policies and procedures, or societal expectations, and so that data is used ethically.
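
As a rough illustration of what a PIA captures in practice, here is a minimal sketch of a single risk finding expressed as a data structure. The field names and harm categories echo the harms discussed above but are assumptions for discussion, not a prescribed PIA template.

```python
# A minimal, hypothetical sketch of how a PIA finding might be recorded.
from dataclasses import dataclass
from enum import Enum

class HarmType(Enum):
    PHYSICAL = "physical"
    FINANCIAL = "financial"
    REPUTATIONAL = "reputational"
    EMBARRASSMENT = "embarrassment"

@dataclass
class PIAFinding:
    data_elements: list[str]      # personal data involved (e.g., face images)
    processing_purpose: str       # why the data is processed
    harms: list[HarmType]         # harms to the individual identified
    likelihood: str               # e.g., "low" / "medium" / "high"
    mitigation: str               # remedial or mitigating action
    residual_risk_accepted: bool = False

# Example finding for a hypothetical identity-verification feature
finding = PIAFinding(
    data_elements=["face image"],
    processing_purpose="identity verification",
    harms=[HarmType.REPUTATIONAL],
    likelihood="medium",
    mitigation="restrict use to opted-in users; retain images for 30 days only",
)
```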

PIAs influence how data is processed and used and, ultimately, how services or products are developed. They reduce costs and reputational damage. Furthermore, the PIA process provides documented evidence of the risks identified and why decisions were made, and it can be used to change past practices and learn from past mistakes should the need arise.

Why identification of institutional discrimination should be part of the PIA process

If the ultimate goal is to reduce both harm and risk to an individual by identifying societal harm to a class, harm that ultimately falls on each individual who is part of that class, then privacy professionals are well placed to do it: we have perhaps the best grasp of how data use relates to risk and harm because we live and breathe these concepts in our daily work.

PIAs already identify many dimensions of harm and risk, and it is not a stretch to also evaluate which classes of individuals might be negatively impacted on a broad scale by a specific project, product or service.

For example, as privacy professionals, we already think about how data is collected, used, stored and shared, as well as what harms and risks may follow from such actions. Our job is to identify ethical, compliance, and reputational issues and advise on remedial and mitigating mechanisms. We are well-positioned within our organizations to work with various stakeholders to create the best outcomes for projects, products and services, as well as advise on risk and harms resulting from specific data processing.  
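
Building on the illustrative sketch above, one hypothetical way to extend a PIA finding is a companion record that documents class-level harm alongside individual harm. Again, the fields are assumptions for discussion, not an established format; the point is that systemic impact gets written down and reviewed rather than overlooked.

```python
# Hypothetical extension of the PIA sketch: assessing harm to a class
# of individuals, not only to one person.
from dataclasses import dataclass

@dataclass
class ClassHarmAssessment:
    affected_classes: list[str]   # e.g., ["race", "gender", "disability status"]
    disparity_observed: str       # description of unequal treatment or accuracy
    systemic_impact: str          # broad impact on the class as a whole
    mitigation: str               # e.g., rebalance evaluation data, add human review

assessment = ClassHarmAssessment(
    affected_classes=["race", "gender"],
    disparity_observed="identification accuracy varies significantly across demographic groups",
    systemic_impact="members of lower-accuracy groups face higher risk of misidentification",
    mitigation="evaluate accuracy per group before release; block deployment until gaps close",
)
```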

For example, utilizing a person’s face (their personal data) in a facial recognition program that identifies faces at varying rates of accuracy based on gender and race is not a hard concept to identify as discriminatory. It could ultimately result in the unfair loss of life or liberty.

It is relatively easy to imagine the physical and financial implications of targeting a person based on the area in which they live, their race or their gender rather than on their qualifications for credit, housing or a job. Yet exactly this prompted investigations by the U.S. Department of Housing and Urban Development and the Equal Employment Opportunity Commission into targeted advertising by large data companies in March 2019.

It is not difficult to understand the reputational and possible emotional harm of labeling people as members of a marginalized class, such as disabled or LGBTQ, and then penalizing them for membership in that class (i.e., limiting their access to the social media platform) based on the platform's classification of them. Yet this happened in January 2019.

It is also relatively effortless to understand that allowing targeted advertising based on key terms of hate speech is likely to bolster ideals of hate toward already-marginalized populations, but this happened earlier this year. 

While these are a few examples that quickly come to mind because they made headlines for the companies involved, others are less obvious or more subtle while still creating serious ethical and privacy issues rooted in institutionalized discrimination based on a person’s data.

These situations could have been identified and prevented through the use of a PIA. Undoubtedly, identification would not only have saved the companies money and spared them reputational damage, but it would also have limited institutionalized discrimination, harm to the affected groups as a whole and, ultimately, harm to the individuals.

While we are in the golden age of data, we must take care in how we use the data entrusted to us. Not all institutionalized discrimination will be caught through a PIA, but it is a significant start on a longstanding problem. If organizations are going to aggregate, analyze and ultimately target individuals, then incorporating analysis of institutionalized discrimination and class harm into the PIA is perhaps the easiest place to start identifying and mitigating that harm.

We as privacy professionals are uniquely equipped to handle such issues.  

This article was prepared by Amanda Ruff in her personal capacity. The opinions and views expressed in this article are the author's own and do not necessarily reflect the views of her employer.


