
Privacy Perspectives | Why controllers are accountable for automatic decision making under the GDPR


Editor's Note:

This post is a summary of a longer article first published in the Oxford Business Law Blog in its “Law and Autonomous Systems Series,” April 27, 2018.

In the academic community, the EU General Data Protection Regulation has triggered a lively debate about whether data subjects have a “right to explanation” of automated decisions made about them. At one end of the spectrum, we see arguments that no such right exists under the GDPR but rather a “limited right to information” only. Conversely, others have argued that this position is based on a very narrow reading of the relevant provisions of the GDPR and that a contextual interpretation shows the GDPR does indeed provide for a right to explanation with respect to automated decisions.

We wholeheartedly agree with the latter interpretation and set out why below. That being said, we think that all sides are missing the broader context.

Accountability requirement

Providing individuals with upfront information on automated decision making and the underlying logic, or with an explanation of automated decisions after they are made, is one thing; the GDPR’s accountability provisions (Articles 5(2) and 24 GDPR) further require controllers to demonstrate compliance with their material obligations under the GDPR, in particular the requirements of lawfulness, fairness and transparency.

This requires controllers to demonstrate that the correlations applied in the algorithm as “decision-rules” are meaningful (e.g., no overreliance on correlations without proven causality) and unbiased (i.e., not discriminatory), and are therefore a legitimate justification for the automated decisions about individuals. We recall that transparency toward individuals and the right of individuals to, for example, access their data primarily enable individuals to exercise their other rights, such as objecting to profiling (Article 21), requesting rectification or erasure of their profile (Articles 16 and 17), or contesting automated decisions relating to them (Article 22(3)). The accountability principle requires controllers to subsequently demonstrate compliance with their material GDPR obligations.
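To illustrate what such a demonstration could involve in practice, below is a minimal sketch (in Python, using pandas and SciPy) of one preparatory check a controller might run: flagging input features that are so strongly associated with a protected attribute that they effectively act as a proxy for it. The column names, threshold and data are hypothetical; this is an illustration of the kind of evidence a controller could document, not a prescribed method.

```python
# Hypothetical sketch: flag features that act as proxies for a protected
# attribute before they are relied on as "decision-rules". Column names,
# the threshold and the data are illustrative only.
import pandas as pd
from scipy.stats import chi2_contingency

def flag_proxy_features(df: pd.DataFrame, protected: str, threshold: float = 0.4) -> dict:
    """Return {feature: Cramer's V} for categorical features whose association
    with the protected attribute exceeds the review threshold."""
    flagged = {}
    n = len(df)
    for col in df.columns:
        if col == protected:
            continue
        table = pd.crosstab(df[col], df[protected])
        if min(table.shape) < 2:
            continue  # constant column: no association to test
        chi2, _, _, _ = chi2_contingency(table)
        cramers_v = (chi2 / (n * (min(table.shape) - 1))) ** 0.5
        if cramers_v >= threshold:
            flagged[col] = round(cramers_v, 2)
    return flagged

# Example: a feature such as 'postcode' may turn out to be a near-proxy for
# 'ethnicity' and would then need a documented justification (or removal).
```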

The debate about whether the GDPR does or does not provide individuals with a right to an explanation is, therefore, missing the point that, in the end, controllers must be able to show that the correlations applied in the algorithm can legitimately be used as a justification for the automated decisions.

To give a very simplistic example, an explanation of the underlying logic of a decision may be that the relevant individual belongs to a specific ethnic minority. The individual may then contest this decision as being discriminatory. To continue such processing, the controller will subsequently have to demonstrate that using this “rule” for the relevant decision does not constitute unlawful discrimination. If the individual is not satisfied, they can file a complaint, and the EU supervisory authorities will investigate.

Algorithmic accountability

To meet their obligations with regard to automated decision making, controllers will need to design, develop and apply their algorithms in a transparent, predictable and verifiable manner. In this sense, “the algorithm did it” is not an acceptable excuse. As Nicholas Diakopoulos and Sorelle Friedler write, “Algorithmic accountability implies an obligation to report and justify algorithmic decision-making and to mitigate any negative social impacts or potential harms.”

These concerns are not limited to EU laws. The U.S. Federal Trade Commission has issued recommendations that promote similar principles of lawfulness and fairness when applying algorithms to decision making, and U.S. scholars have addressed the issue that automated decision making in the employment context may result in a disparate impact for protected classes, which may violate U.S. anti-discrimination laws. For companies to fend off a disparate‑impact claim, they must show that the disparate impact is justifiable and not unlawful.
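As a concrete illustration, the screening heuristic often cited in that US context is the “four-fifths rule”: the selection rate for any protected group should be at least 80% of the rate for the most favored group. The sketch below (hypothetical data and column names) shows how a controller might compute this disparate-impact ratio for a batch of automated decisions. Falling below the threshold does not itself prove unlawful discrimination, but it marks the point at which a documented justification is needed.

```python
# Hypothetical sketch of a disparate-impact screen based on the four-fifths
# rule. The data, column names and the 0.8 threshold are illustrative only.
import pandas as pd

def disparate_impact_ratios(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate per group, divided by the most favored group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

decisions = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],   # 1 = positive automated decision
})

ratios = disparate_impact_ratios(decisions, "group", "hired")
print(ratios)                 # A: 1.00, B: 0.33
print((ratios < 0.8).any())   # True -> potential disparate impact, justification required
```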

In the words of the Norwegian data protection authority in its report on artificial intelligence and privacy:

format_quote"An organization must be able to explain and document, and in some cases, demonstrate, that they process personal data in accordance with the rules (…) If the DPA suspects that the account given by an organisation is wrong or contains erroneous information, it can ask the organisation to verify the details of its routines and assessments (…) This may be necessary when, for example, there is a suspicion that an algorithm is using data that the organisation has no basis for processing, or if there is a suspicion that the algorithm is correlating data that will lead to a discriminatory result."

What is the issue: Information or explanation?

With regard to the right to information, the GDPR, in Articles 13(2)(f) and 14(2)(g), explicitly requires controllers using personal data to make automated decisions to (a) inform the individuals upfront about the automated decision-making activities and (b) provide the individuals with meaningful information about the logic involved, the significance of the decision making, and the envisaged consequences for those individuals.

In its Opinion on Automated Decision-Making and Profiling, the Article 29 Working Party acknowledged that the “growth and complexity of machine-learning can make it challenging to understand how an automated decision-making process or profiling works,” but that, despite this, “the company should find simple ways to tell the individual about the rationale behind, or the criteria relied on in reaching the decision without necessarily always attempting a complex explanation of the algorithms used or disclosure of the full algorithm.”

We note that for the controller to be able to explain these criteria, it will have to know what these criteria are in the first place; in other words, the algorithm cannot be a “black box.”
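One way to read this in practice: if the model itself is interpretable, the decision rules it applies can be exported directly and used both for the upfront information and for the controller’s documentation. The sketch below (toy data and feature names, scikit-learn assumed available) shows how human-readable rules can be pulled out of a shallow decision tree; for more complex models, the same obligation would have to be met with other explainability techniques.

```python
# Hypothetical sketch: with an interpretable model (here a shallow decision
# tree), the controller can export the decision rules it actually applies.
# Feature names, data and labels are toy examples.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[25, 1200], [40, 5200], [35, 900], [52, 4100], [23, 700], [47, 3900]]
y = [0, 1, 0, 1, 0, 1]                       # 1 = application granted, 0 = refused
feature_names = ["age", "monthly_income"]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Human-readable rules that can feed the Article 13/14 information duties and
# the documentation the controller keeps to demonstrate compliance.
print(export_text(model, feature_names=feature_names))
```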

For the right to an explanation, Article 22(3) of the GDPR requires a controller to implement suitable safeguards when designing automated decisions, which should include at least the right to obtain human intervention, to express his or her point of view, and to contest the decision. Recital 71 mentions an extra safeguard: the right to an explanation of a specific automated decision.

The authors who claim that Article 22 does not provide the right to an explanation point out that this right is only included in the GDPR’s preamble and that the preamble has no binding force (as confirmed by the Court of Justice of the European Union). However, the CJEU also explains that this does not deprive the preamble of all meaning; it merely prohibits the use of the preamble to interpret a provision in a manner clearly contrary to its wording.

Article 22(3) specifies that these safeguards must “at least” be included in the design of automated decisions. This wording quite clearly leaves room for requiring additional safeguards, such as the right to an explanation of a specific automated decision mentioned in Recital 71 (see also the WP29 Opinion at p. 27 and the Norwegian DPA report at pp. 21-22).

Again, in order for the controller to explain the decision in such a way that the individual understands the result, the controller needs to know what the “decision-rules” are in the first place.

Algorithmic accountability requires “white-box” development

Although it is far from set in stone what “white-box” development would require, there are some guidelines to take into account when developing algorithms for automated decision making. By documenting these steps and assessments, the controller will also comply with the requirement to perform a data protection impact assessment:

“Controllers should carry out frequent assessments on the data sets they process to check for any bias, and develop ways to address any prejudicial elements, including any over-reliance on correlations.

Systems that audit algorithms and regular reviews of the accuracy and relevance of automated decision-making including profiling are other useful measures.

Controllers should introduce appropriate procedures and measures to prevent errors, inaccuracies or discrimination on the basis of special category data. These measures should be used on a cyclical basis; not only at the design stage, but also continuously, as the profiling is applied to individuals. The outcome of such testing should feed back into the system design.”
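A sketch of what such a cyclical review might look like in code is given below. It re-checks accuracy and disparate impact on each new batch of decisions for which the real-world outcomes are known, and escalates failures so that the result feeds back into the system design. All thresholds, column names and the escalation step are hypothetical.

```python
# Hypothetical sketch of a recurring audit over batches of automated decisions.
# Thresholds, column names and the escalation mechanism are illustrative only.
import pandas as pd

ACCURACY_FLOOR = 0.90          # hypothetical minimum acceptable accuracy
DISPARATE_IMPACT_FLOOR = 0.80  # four-fifths rule used as a screening threshold

def periodic_audit(batch: pd.DataFrame) -> dict:
    """Audit one batch of decisions for which the real outcome is now known."""
    accuracy = (batch["decision"] == batch["actual_outcome"]).mean()
    rates = batch.groupby("protected_group")["decision"].mean()
    impact_ratio = (rates / rates.max()).min()
    findings = {
        "accuracy": round(float(accuracy), 2),
        "worst_impact_ratio": round(float(impact_ratio), 2),
        "accuracy_ok": accuracy >= ACCURACY_FLOOR,
        "no_disparate_impact": impact_ratio >= DISPARATE_IMPACT_FLOOR,
    }
    if not (findings["accuracy_ok"] and findings["no_disparate_impact"]):
        # In practice: open a review ticket so the outcome feeds back into the
        # system design, as the guidance requires.
        findings["action"] = "escalate to model owners for review"
    return findings
```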

Guidelines for white-box development
For more details on white-box development from Lokke Moerel and Marijn Storm, check out their guidelines here.

Conclusion: Information, explanation or justification?

Our answer is: all three.

The main underlying rationales of EU data protection laws are preventing information inequality and information injustice. These rationales can only be served if controllers cannot hide behind their algorithms when making automated decisions about individuals. Controllers will be accountable for the outcome. The current academic debate, which focuses on the rights of individuals alone, misses this bigger picture, with the risk that companies will do the same.


Editor's Note:

Interested in what algorithmic accountability means in practice? Join Lokke Moerel and Paul Nemitz at the IAPP Data Protection Congress in Brussels this November. 
