With EU Artificial Intelligence Act compliance on the horizon, AI developers and deployers have about six months before the first wave of compliance requirements takes effect. One risk management measure, however, deserves special attention as AI governance programs ramp up.

Human oversight of AI systems may look like an easy task on its face. It is required as part of managing high-risk systems that could affect a person's "fundamental rights," including AI used in medical devices, vehicles, law enforcement activities and emotion recognition decisions, among others.

The starting point

Human oversight was recognized as a critical element of AI governance by the Organisation for Economic Co-operation and Development in its landmark 2019 AI Principles document, which was updated this year. The document noted AI's capability to infringe upon a person's human rights, in addition to potentially spreading misinformation. Human oversight, in turn, could prevent problems "arising from uses outside of intended purpose, intentional misuse, or unintentional misuse in a manner appropriate to the context and consistent with the state of the art."

Oversight was important enough to warrant its own section in the AI Act. Article 14 requires high-risk systems to be designed so "natural persons can oversee their functioning, ensure that they are used as intended and that their impacts are addressed over the system’s lifecycle."

Article 14 also stresses developers of high-risk systems should include instructions on how a given AI deployment works, along with mechanisms to help the person assigned to oversight decide when to intervene if a system generates a negative consequence or does not act as intended. How closely a person is determined to have followed those instructions will matter if the technology causes harm and compliance is questioned.

But human oversight can be tricky, and it is not meant to be a catchall when it comes to risk management, AI governance stakeholders said. It relies on the person doing the oversight to have enough “AI literacy, training and authority” to understand when something might be off with the system – and the wisdom to know when an algorithmic decision needs questioning.

The act has specific requirements for certain AI use cases and how much human oversight they should have. For instance, decisions made with biometric identification systems should be verified by two "natural persons" before the deployer takes action.

In addition to ensuring those charged with human oversight have enough training and authority, the AI Act notes deployers need to give them the proper support to do their jobs correctly. It stresses the work of human oversight should not interfere with other tasks required to run AI systems correctly. The act also requires documentation of human oversight, as well as an assessment of how it could mitigate potential risks.

Interpretation looms large

Approaches to oversight will vary based on the type of AI system, and differing views on what oversight means will play a role as well.

ZEW – Leibniz Centre for European Economic Research researcher Johannes Walter said human oversight, at its most basic, means a person has the ability to intervene in or overrule an algorithm's decision, including adjusting the risk settings depending on the situation.

"Too many people perceive human oversight as a panacea. They go, 'If there's a human who looks over it, then I don't have to worry about AI anymore,'" Walter said. "When in reality, that of course is absolutely not true, and it just opens a whole new box of problems."

There are pitfalls to the general practice of oversight. Walter said humans do not always have the best judgment in the evaluation of AI decisions, in part because how models come to a result is not always clear.

The act does touch on automation bias, noting deployers need to use AI in a way that keeps those engaged in oversight aware of its possibility, especially regarding high-risk systems, and able to stop or override the system's decisions if needed.

Cornelia Kutterer, an AI senior research fellow at the Université Grenoble Alpes and IAPP AI Governance Center board member, said those thinking about how human oversight fits into their governance program can take cues from the EU General Data Protection Regulation. That law already requires human intervention for automated decision-making technology, along with a "meaningful" level of involvement.

The parallel suggests those used to overseeing data protection should already have some skill in this area.

But Kutterer also said all the training and second-guessing a governance professional is expected to do will be challenging if a technology's decision-making process is not understandable. That is a problem facing many high-end models, but the AI Act's requirements may push developers to research the issue further in order to comply.

"Accessible interpretation methods could empower individuals and communities to investigate and understand the behavior of these models. This, in turn, could enable more meaningful consent, facilitate improvements, and provide a basis for contesting the use of foundation models when necessary," Kutterer said.

"On a more practical level, transparency obligations in the AI Act for GPAI models should help to enhance the hygiene of the ecosystem," she added.

Oversight troubleshooting

Oversight by a human comes with the potential for human error. Walter said a chief issue is a human's over-reliance on a recommendation over their own judgment. On the other hand, humans might not trust a system even when its decision is sound.

Other issues that may appear, according to Walter, relate to whether personnel are properly trained and the impact that has on their ability to spot and fix errors within a system or its decisions.

How to fix those problems is not always straightforward. Training a decision-maker is an obvious remedy, but not a foolproof one, as their prior experience may not translate to evaluating AI results, he said.

"For instance, when it comes to judges, they of course have a lot of training in their specific judicial experience," Walter said. "But what are the right treatments to improve their (judgement) in regards to AI advice? There’s not much literature on that yet."

Jesslyn Dymond, CIPP/C, director of data ethics at TELUS, said keeping "both the human in the loop and the humanity in the loop" has been a crucial part of her company's AI strategy to build trust. TELUS has put an emphasis on AI literacy so that its human monitoring efforts are not just an item to check off a list.

"Accountability stems from people understanding both the strengths and capabilities of an AI system, but also its limitations, the constraints and potential risks that should be considered when it is being used," she said.

Dymond said there is still work to be done ahead of the AI Act's first enforcement deadline, six months after it enters into force, to be sure TELUS has the resources, training and information it needs to comply. The company's AI program is staffed both with employees who focus on how data is used and with those who pay attention to innovation and mitigating risk.

But Dymond said the work also holds space for workers whose roles are adjacent to AI, such as privacy professionals.

"The model of accountability and understanding data flows, how systems work and being that translator between the technology and the users is an absolutely critical skill set long held in the privacy profession," she said.

Caitlin Andrews is a staff writer for the IAPP.