
Privacy Perspectives | Why we're concerned about the WP29's guidelines on machine learning

In early October, the EU’s Article 29 Data Protection Working Party released guidelines on automated individual decision-making and profiling under the EU General Data Protection Regulation. The guidelines are open to public comment, as detailed by IAPP Westin Fellow Lee Matheson, CIPP/US, until the end of this month. We’ve submitted our concerns, which we outline below.

The bottom line? We believe the WP29’s views on the use of automated decision-making to be both significantly broader in scope than the GDPR’s own provisions and potentially harmful to many of the organizations that make use of these tools at scale.

Here’s why: 

First, what seemed to be a simple “right to opt out” of automated decision-making in the GDPR now appears to the WP29 to be a full-blown prohibition of automated decision-making in its entirety (with some limited exceptions). As with anything GDPR-related, there are many layers of nuance - and ambiguity - here, but let’s start with the actual text.

Article 22 of the GDPR states: "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."

The text appears to be suggesting that data subjects, once the GDPR is in full effect, have a fundamental right to opt out of such decisions. But the Working Party takes things a step further. Turning to the text of the draft guidelines themselves, we read: "[A]s a rule, there is a prohibition on fully automated individual decision-making, including profiling that has a legal or similarly significant effect . . . [but] there are exceptions to the rule."

The guidelines go on to explain that these exceptions will hinge upon the data subject’s explicit consent to such automated processing. But they also note that consent as a precondition to accessing services is not a lawful basis for either “profiling” or “automated decision-making.” On its face, this seems to rule out business models that trade platform use for online advertising, such as Facebook, Google’s predictive searches, or any social media platform that provides free use of the platform in exchange for monetizing the data gleaned from its use.

While many of these platforms are premised on what Shoshana Zuboff calls “surveillance capitalism,” a model many find extremely troubling, the guidelines move toward prohibiting it as a fundamental business practice. The WP29 relies on the Article 22 language covering automated processing that produces legal effects or “similarly significantly affects” an individual. The WP29’s reading of “similarly significantly” is that it can encompass routine online advertising, provided the advertising is sufficiently tailored and especially if it targets vulnerable populations. Protecting gambling addicts, as the WP29 notes, is a worthy cause. But such cases seem like exceptions that need explicit protections rather than the rule; frequent Amazon shoppers, we believe, should not fall into the same category.

The guidelines do not stop at social media or internet services whose revenue requires targeted ads.

The WP29 is quite concerned with two other areas: financial firms and consumer services. For example, the guidelines appear to prohibit automated consumer contract enforcement, such as a cell phone company disconnecting a number because of nonpayment. The guidelines also mention credit scoring seven times, appearing wary of the motives of a financial corporation capriciously denying individuals credit. We recognize that not every consumer services corporation is an upstanding citizen, but the blanket prohibition in the guidelines may prevent both firms and consumers from reaping the benefits of knowing someone’s credit risk or of being able to shut off money-losing accounts.

Indeed, the entire promise of machine learning models, or what is colloquially referred to as “AI” - and what we believe these guidelines to be addressing in practice - is their ability to automate highly repetitive actions. In a world where a good deal of payment has been digitized, deciding whether to give a consumer goods in exchange for money is perhaps the most repetitive action a business faces. Barring this use seems both intrusive and severe - and potentially devastating to the exciting future of legal technology, which includes automated and self-executing contracts. The UK’s own government is so excited by these possibilities it’s expending real resources on them (full disclosure: We’re involved in this project). Under the WP29 guidance, many of these efforts may be forbidden.

On a broader scale, we believe the guidelines take an unfortunately impoverished view of the ways data can and should be used by ML models. The full list of such benefits is described by the WP29 as “increased efficiencies” and “resource savings,” and the guidelines ignore a good deal of the socially desirable benefits of ML models. Take, for instance, the issue of bias. Algorithms only see the data you give them, which opens the door to actually reducing bias by carefully tailoring the factors used to build a model. The guidelines don’t seem to contemplate the possibility that human bias may exceed model bias in many of the use cases they examine.
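
To make the point about tailoring concrete, here is a minimal, purely illustrative sketch (our own, not drawn from the guidelines or any real firm) of a modeler deciding exactly which factors a credit-risk model is allowed to see; the column names, data, and model choice are hypothetical.

```python
# Illustrative only: the modeler, not the algorithm, decides which factors
# the model can "see." Data and column names are invented for this example.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant records.
df = pd.DataFrame({
    "income":     [42000, 58000, 31000, 77000],
    "debt_ratio": [0.35, 0.20, 0.55, 0.15],
    "postcode":   ["A1", "B2", "A1", "C3"],  # possible proxy for protected traits
    "gender":     ["f", "m", "f", "m"],      # protected attribute
    "defaulted":  [1, 0, 1, 0],
})

# Only deliberately chosen factors are passed to the model; the protected
# attribute and its likely proxy are never part of the training data.
features = ["income", "debt_ratio"]
X = df[features]
y = df["defaulted"]

model = LogisticRegression().fit(X, y)
print(model.predict(X))  # predictions depend only on the included factors
```

Whether excluding attributes is by itself enough to remove bias is its own debate, but the point stands: the factors a model considers are a design choice, not an accident.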

This is not to say that we don’t have many areas of agreement with the WP29, or that the problem of bias in ML isn’t a huge one.

We agree, for example, that consumers should be able to speak to a real, living human in order to understand any algorithmic data processing and to appeal any decisions they think are misinformed or unfair. Algorithms are not perfect; as with humans, data can be collected or entered incorrectly or misinterpreted, and an individual can fall into an unexpected wrinkle in a prediction algorithm. There should be an easy, clear, and fast way of remedying such errors. Incredibly important values - such as transparency, fairness, and more - may be at stake.

We also agree that individuals subject to such screening should be informed - though, given the state of consumer awareness of online privacy and the overall privacy paradox, this may in many instances be hard to achieve. This goal is, however, particularly important when an automated decision or recommendation will have financial consequences. Banks should be clear how they score credit, and consumer services should be clear about how they deal with breach of contract.

Luckily, the guidelines are still in draft form, and we hope for much improvement in the weeks and months ahead.

photo credit: tudedude Nuts and Bolts via photopin (license)

4 Comments

  • Andor Demarteau • Dec 4, 2017
    Why I am concerned with American legal people trying to interpret European law: first of all, this hinges on the fundamental human rights basis of privacy, with data protection as a subset of it, as set out by Article 8 of the ECHR, as compared to the mere consumer protection common in the US.
    The way this article has been written clearly illustrates this.
    The authors are also clearly unaware that the wording as listed in Article 22 has been present in European data protection law for almost 23 years (Data Protection Directive 95/46/EC) and has been a prohibition in several member states' implementations of the directive for almost as long.
    
    As for hurting business models based on direct advertising and extremely invasive practices, like the companies and model mentioned in this article: yes, it will probably damage them, with a high degree of certainty.
    With the privacy paradox, or at least the authors' assumption that internet users are not aware of privacy and their rights, it is the government that is required to protect citizens against these kinds of practices. That is, if you view privacy as a fundamental human right, which we in Europe do.
    So I can't but agree with the WP29's vision in this respect.
    Will this have a far-reaching impact on the pervasive and destructive model of paying with your privacy for seemingly "free" services? Hell yes it will, and that's about time too.
    The rewriting of the e-Privacy Directive into a regulation will probably put the final nails in the coffin if the GDPR hasn't done so.
  • Thomas Bentsen • Dec 5, 2017
    I am in the fortunate situation that I agree with both the article and Mr. Demarteau.
    Privacy is considered a basic human right in Europe. Everything GDPR should be seen and interpreted in that light. The lawmakers do exactly that.
    
    The biggest problem with the WP29 guidelines is IMHO that the view is a bit too limited. Another problem is that art. 22 itself probably needs a bit of work as well.
    
    Automated processing is already used in many more settings than credit scoring, loan approval and recommendation of other books to buy.
    The guidelines build on a misconception of how data science works. The definition of scope is unfortunately slightly wrong in that there is no clear distinction between 'model creation' and 'model use'. For a data subject the most invasive part by far would be to have personal data used for automated decision making about the data subject itself. It is not to have personal data used to build the model for the decision making. 
    - A model might be built on millions of records of anonymous data (if the company is smart about it, it probably would be) and then it should not be in scope at all. A clear distinction on that would have been nice.
    - The problem is that legals (both implementers and enforcers) will read this - and do what it says.
    
    Another problem is the definition and the limitation imposed by art. 22: As it is (and not in any way softened up by WP29) it will also cover an 'intelligent pen' that can administer a mix of 3 vitally important medications calculated from real-time measurements of the patient's blood and a pre-loaded model based on data gathered from 2000 anonymous volunteer test subjects somewhere in Asia - or a monitoring device that collects data from many sensors on the patient and comes up with an overall 'profile' based on a similar model, a profile that is used directly to control a pacemaker, for example, to avoid long-term damage to [something]. These devices will all be reporting back to HQ every day, so forget about keeping things out of scope.
    I assume it would be relatively easy to argue that 'health' would be covered by '...similarly significantly affects him or her'. In that case the hospital or doctor will have to either have the expert who created the device available to explain to the patient how the combination of medications or the 'profile' has been calculated - or refrain from using the state of the art without 'explicit consent' from the patient. Can 'explicit consent' even be given by a person who does not understand the sometimes very complex models - to a person who does not understand them either?
    - It might be impossible even for the person who built the device to explain how it works - except that it works. It was a huge problem for us when we tried to do something intelligent (!) for the banks in relation to Basel II (AFAIR), because they needed something they could explain to the authorities, and many models used for 'AI' are not explainable - but they do work.
    Health data is 'special category' and that opens its own can of worms - especially if the patient is unconscious.
    If you hammer on something long enough it will fit in any size hole, of course, so this is probably not a huge problem. But it would have been nice to see that this large group of use cases had been considered by WP29 instead of just 'credit scoring' and the other well-known examples.
  • Thomas Bentsen • Dec 5, 2017
    Great... No linebreaks  :-D
  • Sholem Prasow • Dec 5, 2017
    A few comments:
    
    First of all, I do not agree with the comment that Americans should not comment on European laws. The extraterritorial nature of this law demands a worldwide response. As a Canadian, I am happy to see that WP 29 welcomes comments from the rest of the world. I had previously commented extensively on a previous draft guideline and was happy to see that WP 29 acted on those comments in their final release.
    
    I think WP 29 has, as others have said, attempted to disrupt the way people behave in this technologically linked world. For example, it suggests that everyone has a right to be informed about, and consent to, any decision an automated process uses to categorize data subjects. Does that mean that after every search based on certain selection criteria, all those rejected because they did not meet the criteria must be notified that they were rejected and told why? That is simply silly; it won't work, can't work, and will just challenge organizations to go to the courts.
    
    •	Please remember that only the Regulation is part of national law. WP 29 opinions are not.
    
    Another example is the apparent requirement that everyone rejected by an automated process has a right to be in contact with a human who HAS THE AUTHORITY TO REVERSE THAT DECISION. It is one thing to require contact with a human to explain that decision – quite another to expect that human to be able to reverse that decision. 
    
    Finally, the purpose of a Guidance is to answer questions. This particular guidance, it seems, was written deliberately to raise them. Many questions raised in the Guidance by the authors remain unanswered in the document.
    
    I would suggest that the WP 29 issue a second Guidance with more answers than questions before finalizing.