Automated decision-making (ADM) conducted by artificial intelligence algorithms that significantly impacts people's lives is not a futuristic concept. It's been around for years and is, apparently, here to stay.

The recent case concerning an algorithm utilized by tax authorities in the Netherlands to identify and penalize suspected benefits fraudsters is one example of how serious the consequences of these decisions can be. The notion of ADM in the context of privacy and data protection regulation is getting attention in Canada and the U.S., as well as under the EU General Data Protection Regulation.

ADM under the GDPR

Article 22 of the GDPR — "automated individual decision-making, including profiling" — stipulates that the "data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."

Recital 71 further stresses the importance of implementing safeguards in instances when this type of personal data processing is concerned.

It is crucial to emphasize that not all types of automated processing fall within the scope of Article 22, but only processing that qualifies as solely automated and produces certain effects concerning the data subject.

In comparison, Article 20 — the right to data portability — refers to processing that is "carried out by automated means" but does not touch upon the outcomes of processing operations. The Article 29 Data Protection Working Party's "Guidelines on automated individual decision-making and profiling" further clarify that such decisions are made "without human involvement." The guidelines also provide some examples of what can amount to "legal effects" in the context of ADM, such as the cancellation of a contract.

Automated decision-execution?

A major question is whether any automated decision falls within the scope of the GDPR's Article 22, or only those involving a more heuristic, nonlinear approach by the system conducting the processing.

The Future of Privacy Forum's report "Automated Decision-Making Under the GDPR — A Comprehensive Case-Law Analysis" shows both courts and regulators use a variety of criteria to qualify a decision as "solely" automated. The criteria typically include: the organizational structure of the entity conducting the processing, reporting lines and chain of approval; effective training of staff; internal policies and procedures; the quality of human involvement; and at what point in the decision-making process it occurs.

It is interesting to consider, however, as rightly noted in FPF's ADM case overview, that courts and supervisory authorities mostly focus on the final stage of the processing operations and decision-making process, respectively, when deciding on Article 22's applicability, especially regarding the "solely automated processing" criterion.

Such an approach, if simply mirrored and applied to every case outside the court arena, may lead to misinterpretations of Article 22 in practice and turn certain cases into automated decision-making processes when there is not a decision to be made, but one to be executed.

It may be crucial to add one question to the list of criteria for qualifying a processing activity as ADM: does the automated processing tool make the decision, or does it simply execute a decision already made by humans?

Consider a scenario where a website only accepts credit cards issued in the U.S. and declines transaction attempts using European cards. Technically, that refusal could be seen as automated decision-making, as it is carried out by a machine — that is, in a fully automated manner.

But that "decision" is still based on a very simple binary check of a single objective tangible criterion — whether the credit card has been issued in the U.S. The decision to accept only U.S. cards has been already made — what is automated here is the execution.

If we were to reduce that argument further to absurdity, based on a very rigid interpretation of the law, it could even be argued that a webmail app refusing access to an account when a wrong password is provided is still an automated decision "which produces legal effects." Surely, this can't be what lawmakers intended when drafting Article 22 of the GDPR.

Another common use case where automated decision-making could potentially be relevant is robotic process automation. The essence of the technology is the emulation of human actions, such as mimicking the screen-based steps a user routinely takes. As the technology's name suggests, the intention is to automate certain processes, including ones that utilize personal data.

The robots, however, only perform a strictly predefined set of consecutive, flowchart-like steps, and any flaw in the instructions would break the process from the point of error onward. Considering that, can it really be argued the results of the (undoubtedly) automated steps taken by the robot amount to automated decision-making within the meaning of the law?
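
To illustrate, a rough sketch of such a rigid flow, with invented step names standing in for real screen actions:

```python
# Sketch of an RPA-style flow: a strictly predefined sequence of steps.
# The robot never deviates or "decides"; a flaw in the instructions
# halts the process at the point of error. (Step names are hypothetical.)

def copy_name(record: dict) -> dict:
    record["name_field"] = record["source"]["name"]
    return record

def copy_email(record: dict) -> dict:
    record["email_field"] = record["source"]["email"]
    return record

def submit_form(record: dict) -> dict:
    record["submitted"] = True
    return record

STEPS = [copy_name, copy_email, submit_form]  # fixed, flowchart-like order

def run_robot(record: dict) -> dict:
    for step in STEPS:
        record = step(record)  # a KeyError here stops everything downstream
    return record

result = run_robot({"source": {"name": "Jane Doe", "email": "jane@example.com"}})
print(result["submitted"])  # True
```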

Certain supervisory authorities, such as the Saxon Data Protection Commissioner in Germany in its activity report for 2019, have determined that automatically filtering job applicants to rank and select them for interviews, even against predetermined criteria, qualifies as ADM.

When looking at the details of the particular case analyzed by the Saxon DPA — and more in light of data protection impact assessments than ADM — it becomes clear such a position should not be taken as applicable by default. It should be analyzed with caution, taking into account how precise the criteria are, what type of data is analyzed, the risk of algorithmic bias or lack of accuracy, and whether such predetermined criteria leave any space for actual decision-making by the tools.

ADM and AI

Nowadays, there seems to be a prevailing sentiment that any processing relying on tools that involve AI technology — apart from being considered high-risk processing — inevitably falls under the scope of Article 22. One example that could qualify as AI-supported automated decision-making within the scope of the GDPR's Article 22 is automated liveness verification for facial biometric checks.

This technology can replace human agents in Know Your Customer verification activities, which is its added value. But if the liveness verification fails and the customer's registration is automatically declined as a result, this will inevitably "produce legal effects" within the meaning of the law.

But in this example, it is not the mere fact that the processing involves AI technology that triggers the applicability of Article 22 by default. One could only hope for such a simple, black-and-white assessment in our field.

Talent acquisition is a particularly interesting field, offering examples of both what is recognized here as automated decision-execution and what is undoubtedly true ADM, while the increasing use of AI-based tools, in their different variations, adds another layer of concern.

An IBM survey of more than 8,500 global IT professionals from late 2023 showed 42% of companies were using AI screening "to improve recruiting and human resources." Another 40% of respondents were considering integrating the technology.

Let's consider an example where such an AI tool is used for supposed ADM, for instance, to scan applicants' work history and achievements and filter out candidates with less than three years of experience or with no college degree. In this case, the hiring manager has already decided not to consider individuals who do not meet these criteria, and the job opening is clearly advertised as such.

In that example, the tool would execute the manager's decision against precise, straightforward criteria — ones that do not include any high-risk data and can hardly be subject to any bias — by filtering candidates, in place of human resources team members, based on objective conditions set by those individuals.
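
A minimal sketch of that kind of execution, with hypothetical field names and the thresholds taken from the advertised criteria:

```python
# Sketch of the hiring example: the manager's pre-made decision
# ("at least three years of experience and a college degree") is
# executed mechanically. (Field names are hypothetical.)

MIN_YEARS = 3  # threshold set by the hiring manager, not by the tool

def meets_advertised_criteria(candidate: dict) -> bool:
    return candidate["years_experience"] >= MIN_YEARS and candidate["has_degree"]

applicants = [
    {"name": "A", "years_experience": 5, "has_degree": True},
    {"name": "B", "years_experience": 2, "has_degree": True},
    {"name": "C", "years_experience": 4, "has_degree": False},
]

shortlist = [a for a in applicants if meets_advertised_criteria(a)]
print([a["name"] for a in shortlist])  # ['A']
```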

It would seem inaccurate to compare this rather simple experience filtering with a more complex tool trained to make an actual assessment and provide a "rating" for the candidate.

Another recent example is AI-based recruitment tools that involve much more sophisticated processing, analyzing candidates' body language, voice tone or general attitude to assess applicants and decide, for instance, whether they would fit not only the role but also the employer's organizational culture.

This leaves plenty of space for issues of accuracy and algorithmic bias, and here both the "solely automated" criterion and the significant legal or other effects concerning data subjects under Article 22 undoubtedly apply.

What about algorithmic accountability and input errors?

It is important to note that qualifying a processing activity as automated decision-execution does not diminish the algorithmic accountability of the companies relying on such processing. A company decides to process personal data to support either its decision-making or its decision-execution process, and it remains responsible for that choice.

In fact, implementing high standards in assessing the risk, regardless of whether the technology used is in-house or provided by a third party, remains crucial.

Another question worth considering involves who is responsible for problematic outcomes based on input data errors. If a human being has provided inaccurate input data, could the machine then be blamed for the problematic output?

The answer is likely "no," but then again, this scenario becomes a bit more complicated if the input error is so obvious that a reasonable person would have undoubtedly noticed and rectified it. In any case, there is no clear-cut answer. The devil is always in the details.
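
A trivial sketch of the point, using an invented birth-year example: the execution is flawless, yet an obvious typo in the input propagates straight into the output.

```python
# Garbage in, garbage out: the logic is correct; only the input
# can make the output wrong. (The example is invented.)
from datetime import date

def compute_age(birth_year: int) -> int:
    return date.today().year - birth_year

print(compute_age(1978))  # a plausible age
print(compute_age(1878))  # an obvious input typo, an obviously wrong output
```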

Conclusion

Even where ADM is indeed relevant, a specific goal is defined and set by an individual — such as assessing an applicant's creditworthiness. In other words, the decision-making carried out by automated means will still operate within the boundaries, and for the purposes, predefined by a human being.

Where such a broad goal has been defined, an outcome can still constitute ADM where all relevant data points have been provided to the algorithm and it is up to the algorithm to determine which of them should be used and how, that is, to replicate, or at least attempt to replicate, abstract thinking.
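
For contrast with the execution examples above, here is a toy learned scorer for creditworthiness. All weights and feature names are invented for illustration; the point is that the weighting of the data points comes from training, not from a human-authored rule.

```python
# Toy learned scorer: the tool, not a human, determines how each
# data point counts -- closer to genuine ADM. (Weights and features
# are invented; a real model would be trained on historical data.)

LEARNED_WEIGHTS = {"income": 0.6, "debt_ratio": -1.4, "missed_payments": -0.9}
BIAS = 0.1

def creditworthiness_score(features: dict) -> float:
    # Combines every provided data point according to learned weights;
    # the logic is not a human-authored flowchart.
    return BIAS + sum(LEARNED_WEIGHTS[k] * v for k, v in features.items())

approved = creditworthiness_score(
    {"income": 1.2, "debt_ratio": 0.35, "missed_payments": 0.0}
) > 0
print(approved)  # True here, but no person ever wrote the rule itself
```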

But where the actual logic of the so-called decision-making is fully traceable and follows a flowchart structure, the robot/tool wouldn't really be making any decisions on its own; it would merely execute them. The decision will have been made in advance. That's automated decision-execution, and as such, it should not be subject to Article 22 of the GDPR.

Kiril Kalev, AIGP, CIPP/C, CIPP/E, CIPP/US, CIPM, CIPT, FIP, is director, privacy at Paysafe. Danica Vranjanin, CIPP/E, CIPM, is a privacy and data protection consultant.