
Privacy Perspectives | One word can make all the difference in the world


""

In 2014, Brisha Borden was arrested and charged with burglary and petty theft. She and a friend rode down the street on a bike and scooter that belonged to someone else, before a woman appeared and yelled at them. Though the two immediately dropped the bike and scooter, a neighbor had already called the police, and Borden and her friend were arrested. A year earlier, Vernon Prater was arrested and charged with shoplifting from a Home Depot. Prater had already served five years in jail for armed robbery. Though their cases were vastly different, and it was clear who the more “seasoned” criminal was, when the two arrived in jail, a computer algorithm, used to determine the likelihood of recidivism, assigned Borden (who is Black) a recidivism score of 8 out of 10 and Prater (who is white) a 3.

One of the most widely used — and most controversial — recidivism algorithms in the U.S. judicial system is called the Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS. An analysis carried out by ProPublica in 2016 raised serious concerns about the algorithm's accuracy and fairness when its outcomes were compared across factors such as race, age and socioeconomic background.

The concerns about artificial intelligence algorithms are not limited to the criminal justice system. They are being voiced by ethicists and practitioners across all verticals as AI becomes more and more pervasive in almost every area of our everyday lives. Though the technology is not new, its capabilities have grown dramatically in the last few years, and its reach is now ubiquitous.

Currently, there is no comprehensive framework regulating AI. One body of law that has attempted to regulate the use of AI algorithms is privacy legislation. The EU General Data Protection Regulation, for example, has a specific article that deals with what it calls "automated decision making." Article 22 of the GDPR says, "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."

The key word in the article is "solely." If a decision that has a significant impact on the data subject is based solely on the processing of data by an algorithm, the provision applies, and certain protections and safeguards are granted to the data subject. But if the algorithm merely aids the decision making of a human, Article 22, along with its legal protections, does not apply. As the Article 29 Working Party (now the European Data Protection Board) explained in its guidance on automated decision making, "An automated process produces what is in effect a recommendation concerning a data subject. If a human being reviews and takes account of other factors in making the final decision, that decision would not be 'based solely' on automated processing."

The underlying assumption made by the GDPR is that the "cure" for imperfections arising from flawed algorithmic decision making is more human involvement. Once a human being is introduced into the equation, it is assumed, the concerns about harms caused by automated decisions fall away.

This assumption is flawed for two reasons. First, biases and skewed results exist in computer algorithms precisely because humans write them; coders impart their own prejudices and assumptions, translated into the bits and bytes of the code they write. Second, these systems have become so sophisticated that even the coders who built them sometimes do not understand exactly how they work. How can a judge be expected to mitigate biases in a system that its own inventors do not necessarily understand? If you were a judge looking at a 3 versus an 8 recidivism score spat out by an algorithm you do not understand, whom would you be more likely to send to jail for a longer sentence? How would you even go about figuring out whether the algorithm had gotten it wrong?
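To make the point concrete, consider a purely hypothetical, toy scoring function. It is not COMPAS or any real system; every feature, weight and defendant below is invented for illustration. It simply shows how a developer's hand-picked weights and proxy features end up inside the single number a reviewer sees, with no trace of how it got there.

# Hypothetical sketch only: hand-picked weights chosen by a developer,
# not based on any real risk-assessment tool.
def toy_risk_score(prior_convictions: int, age: int, arrests_in_neighborhood: int) -> int:
    """Return a 1-10 'recidivism' score from invented weights."""
    score = 1.0
    score += 0.8 * prior_convictions      # weight chosen by the developer
    score += 2.0 if age < 25 else 0.0     # developer assumes youth means risk
    # Proxy feature: arrest counts mirror where police patrol most heavily,
    # so this weight quietly imports historical policing patterns into the score.
    score += 0.05 * arrests_in_neighborhood
    return max(1, min(10, round(score)))

# Two invented defendants: the one with no prior convictions gets the higher
# score because of the proxy feature and the age assumption.
print(toy_risk_score(prior_convictions=0, age=18, arrests_in_neighborhood=100))  # prints 8
print(toy_risk_score(prior_convictions=2, age=41, arrests_in_neighborhood=10))   # prints 3

A reviewer who is shown only the two final numbers has no way to tell whether the gap reflects genuine risk or the weights a developer attached to a proxy feature.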

So what can be done to rectify the problem? We must discard the notion that more human involvement "cures" algorithmic biases, and we must broaden the scope of regulation to apply not only to decision-making algorithms but also to decision-aiding ones. The fact that a recommendation was made by an algorithm should trigger the law's protections, regardless of whether a human makes the ultimate decision.

We already have examples of upcoming laws that take this exact approach. The California Privacy Rights Act, approved by California voters Nov. 3, 2020, defines "profiling" as "any form of automated processing of personal information … " The CPRA does not distinguish between decisions made by the algorithm and decisions merely informed by it. The mere fact that personal information was processed by automated means to produce a recommendation on how to act warrants additional protections and safeguards, whether a human or the machine makes the ultimate decision.

Perhaps the bill that best captures this crucial distinction is Canada's newly introduced bill C-11. The bill defines an "automated decision system" as "any technology that assists or replaces the judgment of human decision-makers using techniques such as rules-based systems, regression analysis, predictive analytics, machine learning, deep learning and neural nets."

In summary, the assumption that the appropriate mitigation for harms created by algorithmic bias is more human involvement does not hold. Not only does human involvement not help; in some cases it may even make the situation worse. As AI algorithms become more sophisticated, and humans become less and less able to understand how they work, we should expand regulation to cover decision-aiding systems rather than limiting it to decision-making ones.

Photo by Michael Dziedzic on Unsplash

