In early October, the EU’s Article 29 Data Protection Working Party released guidelines on automated individual decision-making and profiling under the EU General Data Protection Regulation. The guidelines are open to public comment until the end of this month, as detailed by IAPP Westin Fellow Lee Matheson, CIPP/US. We’ve submitted our concerns, which we outline below.
The bottom line? We believe the WP29’s reading of the GDPR’s automated decision-making provisions is both significantly broader in scope than the text of the Regulation itself and potentially harmful to many of the organizations that use these tools at scale.
Here’s why:
First, what seemed to be a simple “right to opt out” of automated decision-making in the GDPR now appears, in the WP29’s view, to be a full-blown prohibition of automated decision-making (with some limited exceptions). As with anything GDPR-related, there are many layers of nuance - and ambiguity - here, but let’s start with the actual text.
Article 22 of the GDPR states: "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."
The text appears to suggest that data subjects, once the GDPR is in full effect, will have a fundamental right to opt out of such decisions. But the Working Party takes things a step further. Turning to the text of the draft guidelines themselves, we read that: "[A]s a rule, there is a prohibition on fully automated individual decision-making, including profiling that has a legal or similarly significant effect . . . [but] there are exceptions to the rule."
The guidelines go on to explain that these exceptions will hinge upon the data subject's explicit consent to such automated processing. But they also note that consent given as a pre-condition of accessing a service is not a lawful basis for either “profiling” or “automated decision-making.” On its face, this seems to rule out any business model that trades free platform use for targeted advertising - Facebook, Google’s predictive search, or any social media platform that offers its service for free in exchange for monetizing the data gleaned from its use.
While many of these platforms are premised on what Shoshana Zuboff calls “surveillance capitalism” - a practice many find extremely troubling - the guidelines move toward prohibiting it as a fundamental business practice. The WP29 relies on the Article 22 language covering automated processing that produces legal effects or “similarly significantly affects” an individual. On the WP29’s reading, “similarly significantly” can encompass routine online advertising, provided it is sufficiently tailored, and especially where it targets vulnerable populations. Protecting gambling addicts, as the WP29 notes, is a worthy cause. But such cases seem like exceptions that need explicit protections rather than the rule; frequent Amazon shoppers, we believe, should not fall into the same category.
The guidelines do not stop at social media or internet services whose revenue depends on targeted ads.
The WP29 is quite concerned with two other areas: financial firms and consumer services. For example, the guidelines appear to prohibit automated enforcement of consumer contracts, such as a cell phone company disconnecting a number for nonpayment. The guidelines also mention credit scoring seven times, wary that a financial corporation might capriciously deny individuals credit. We recognize that not every consumer services corporation is an upstanding citizen, but the blanket prohibition in the guidelines may prevent firms and consumers alike from reaping the benefits of knowing someone’s credit risk or of being able to shut off losing accounts.
Indeed, the entire promise of machine learning models - or what is colloquially referred to as “AI,” and what we believe these guidelines to be addressing in practice - is their ability to automate highly repetitive decisions. In a world where a good deal of payment activity has been digitized, deciding whether to give a consumer goods in exchange for money is perhaps the most repetitive decision a business faces. Barring this use seems both intrusive and severe - and potentially devastating to the exciting future of legal technology, which includes automated and self-executing contracts. The UK government is excited enough about these possibilities that it is expending real resources on them (full disclosure: We’re involved in this project). Under the WP29 guidance, many of these efforts may be forbidden.
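To make concrete just how rule-like and repetitive such a decision is, here is a minimal sketch in Python. The account fields, threshold, and function name are hypothetical, chosen for illustration rather than drawn from any real billing system:

```python
from dataclasses import dataclass
from datetime import date, timedelta


# Hypothetical account record; field names are illustrative only.
@dataclass
class Account:
    account_id: str
    balance_due: float
    last_payment: date


def should_suspend(account: Account, grace_days: int = 30) -> bool:
    """Decide, with no human in the loop, whether to suspend service for nonpayment."""
    days_overdue = (date.today() - account.last_payment).days
    return account.balance_due > 0 and days_overdue > grace_days


# An account 45 days past its last payment, with a balance outstanding, is flagged automatically.
acct = Account("A-1001", balance_due=59.99, last_payment=date.today() - timedelta(days=45))
print(should_suspend(acct))  # True
```

On the WP29’s reading, a routine check like this - run with no human involvement and with a significant effect on the customer - could already require an Article 22 exception.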
On a broader scale, we believe the guidelines take an unfortunately impoverished view of the ways data can and should be used by ML models. The WP29 describes the full list of such benefits as “increased efficiencies” and “resource savings,” ignoring a good deal of socially desirable benefits of ML models. Take, for instance, the issue of bias. Algorithms only see the data you give them, which opens the door to actually reducing bias by carefully tailoring the factors used to build a model. The guidelines don’t seem to contemplate the possibility that human bias may exceed model bias in many of the use cases they examine.
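To illustrate that point about tailoring the inputs, here is a toy sketch with hypothetical column names. It deliberately simplifies fairness to "choose the features carefully"; in practice, proxy variables make this harder than simply dropping columns, but the basic lever - the model only sees what it is given - is real:

```python
import pandas as pd

# Hypothetical applicant data; column names and values are illustrative only.
applicants = pd.DataFrame({
    "income": [42000, 88000, 30500],
    "debt_ratio": [0.35, 0.12, 0.51],
    "on_time_payments": [24, 60, 10],
    "gender": ["F", "M", "F"],       # protected attribute
    "postcode": ["X1", "Y2", "Z3"],  # potential proxy for a protected attribute
})

# The model only "sees" the columns it is handed, so bias can be constrained
# (though not eliminated - proxies such as postcode can still leak information)
# by deliberately tailoring the feature set before training.
protected_or_proxy = ["gender", "postcode"]
features = applicants.drop(columns=protected_or_proxy)
print(list(features.columns))  # ['income', 'debt_ratio', 'on_time_payments']
```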
This is not to say that we don’t have many areas of agreement with the WP29, or that the problem of bias in ML isn’t a huge one.
We agree, for example, that consumers should be able to speak to a real, living human in order to understand any algorithmic data processing and to appeal any decisions they think are misinformed or unfair. Algorithms are not perfect: as with humans, data can be collected or entered incorrectly or misinterpreted, and an individual can fall into an unexpected wrinkle in a prediction model. There should be an easy, clear, and fast way of remedying such errors. Incredibly important values - such as transparency and fairness - may be at stake.
We also agree that individuals subject to such screening should be informed - though, given the state of consumer awareness of online privacy and the overall privacy paradox, this may in many instances be hard to achieve. It is, however, particularly important when an automated decision or recommendation will have financial consequences. Banks should be clear about how they score credit, and consumer services should be clear about how they handle breach of contract.
Luckily, the guidelines are still in draft form, and we hope for much improvement in the weeks and months ahead.