
Well-intentioned debiasing methods that hamper AI governance tools


The expansion of artificial intelligence into sectors ranging from legal defense to banking to government services and beyond has put the onus on organizations to develop standards for how such technology is used.

Some governments already have frameworks in place or are working toward developing them. But a recent report from the nongovernmental organization World Privacy Forum, which examined AI governance tools used around the globe, found organizations are not always taking the time to adopt guidelines suited to their country's regulatory landscape or their organization's specific use of the technology.

Researchers closely examined 18 AI governance tools used in Australia, Canada, Dubai, Ghana, Singapore, the U.K. and the U.S. The resources were defined as "socio-technical tools for mapping, measuring, or managing AI systems and their risks in a manner that operationalizes or implements trustworthy AI."

More than one-third of those tools had critical flaws, researchers found. Some organizations adopted governance tools ill-suited to their particular use of AI, leading to a mismatch in application. Other tools were not thoroughly vetted before being introduced, and many lacked documentation explaining what they were and how they worked.

That's a problem, according to World Privacy Forum's Pam Dixon and Kate Kaye, because the majority of those tools are meant to help avoid bias and discrimination in the algorithms that help AI make decisions, as well as to make those decisions easy to understand. The gap between intent and execution underscores the dangers of rushing to govern AI without tailoring the framework to an organization's specific needs.

"If you have faulty measures that you're using to measure AI systems, then you're going to have very poor corrections of those systems," said Dixon, the WPF's founder and executive director. "The thing is we're so early in the advanced AI era, this can be fixed with even a moderate amount of time and attention."

Uses out of context

Dixon and Kaye noted some of those problems may have been introduced with good intentions. In some instances, "attempting to de-bias AI systems by abstracting, simplifying and de-contextualizing complex concepts such as disparate impact," could lead an AI system to make decisions out of context or unfairly prioritize one subgroup in an attempt to avoid discrimination, according to the report.

For instance, researchers found three AI tools explicitly referred to the U.S. Four-Fifths Employment Rule without proper context. The 1978 rule is used to detect adverse impact against a specific group in a hiring process. The WPF noted U.S. employment officials have warned against using the rule as the sole metric for evaluating a selection process's fairness, and legal scholars have questioned its ability to adequately measure hiring disparities.
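To make the metric itself concrete: the four-fifths rule flags potential adverse impact when one group's selection rate falls below 80% of the highest group's selection rate. The sketch below is purely illustrative, with made-up counts that are not drawn from the WPF report.

```python
# Illustrative sketch of the four-fifths (80%) rule; the counts used below
# are hypothetical and not taken from the WPF report.

def four_fifths_check(selected: dict, applicants: dict, threshold: float = 0.8) -> dict:
    """Flag groups whose selection rate is below `threshold` times the
    highest group's selection rate (a common reading of the rule)."""
    rates = {group: selected[group] / applicants[group] for group in applicants}
    best_rate = max(rates.values())
    return {group: (rate / best_rate) < threshold for group, rate in rates.items()}

# Hypothetical hiring data: group B's selection rate (20%) is half of group A's
# (40%), so the 0.5 ratio falls below 0.8 and the rule flags possible adverse impact.
print(four_fifths_check(selected={"A": 40, "B": 10}, applicants={"A": 100, "B": 50}))
# {'A': False, 'B': True}
```

As the officials and scholars cited in the report caution, a ratio like this is a screening heuristic, not a sufficient measure of fairness on its own.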

Still, Dixon and Kaye found the metric is used or referred to outside of employment environments and the U.S. in some tools. Those included the Aequitas tool developed by the University of Chicago, the AI Fairness 360 tool developed by IBM and donated to the Linux Foundation AI Foundation, and a tool developed by BlackBoxAuditing.

The first of those tools is mentioned by the Monetary Authority of Singapore in its FEAT Fairness Principles Assessment methodology and refers to the four-fifths rule specifically. Kaye noted researchers did not know enough about the Monetary Authority's AI tool to say how much its reference to the four-fifths rule affects how the algorithm works. But the fact that it is referred to at all is concerning, she said.

"It's kind of spreading around the world as an easy automated way to measure for disparate impact, even though it's definitely been scrutinized legally and technically," she said, referring to the Four-Fifths rule.

Other concepts that often crop up in AI tools are the Shapley Additive exPlanations (SHAP) method and Local Interpretable Model-agnostic Explanations (LIME). Both are used to explain how AI systems reach certain conclusions and have gained popularity because they can be applied to several different types of models, Kaye and Dixon said. Researchers found six of the 18 AI tools used by national governments referred to or mentioned one or both of the applications.

The problem, Dixon and Kaye found, is that SHAP and LIME are typically used to explain the decision-making behind a single model output, not the entire AI model itself. When used to try to explain complex AI models, such as nonlinear machine learning or deep-learning models, the methods can produce misleading results.
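To make the local-versus-global distinction concrete, here is a minimal sketch using the open-source shap library with a scikit-learn model (both assumed to be installed; the data is synthetic). The attributions it produces describe one prediction at a time, not the behavior of the model as a whole.

```python
# Minimal, illustrative sketch: SHAP attributes a single prediction to its
# input features -- a local explanation -- rather than explaining the model globally.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data and a small nonlinear model, purely for demonstration.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # attributions for one instance only

# Per-feature contributions to this single prediction, not the whole model.
print(shap_values)
```

Aggregating many such local attributions can hint at global behavior, but, as the report cautions, treating them as a faithful explanation of a complex nonlinear model is where misleading results tend to creep in.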

In one example, India's NITI Aayog, a public policy think tank run by the government, referred briefly to both SHAP and LIME in a 2021 paper detailing responsible development of AI in the country, according to the WPF's findings. 

Dixon and Kaye also looked at the Organisation for Economic Co-operation and Development's Catalogue of Tools and Metrics for Trustworthy AI, which Dixon helped create. The catalogue is meant to be a guide for OECD members looking for trustworthy AI tools to use. But researchers found 15 entries that either used or referred to the four-fifths rule, SHAP or LIME.

Karine Perset, who leads the OECD's working party on AI governance, said her organization is aware of and agrees with the concerns in the report. She said the OECD does not have the ability to vet every tool included and does not endorse or recommend tools in the catalogue either.

Perset said the OECD is working on adding a way for parties to notify it about potential issues with tools, and that it takes feedback on specific tools as well as on options for red-teaming specific applications such as LIME and SHAP.

The fix: Education

Kaye and Dixon said these problematic instances are spreading faster than researchers can keep up with them. There are a few reasons why: AI governance tools are still new, and a body of work evaluating them has yet to be fully established, researchers said.

The WPF also found that general scrutiny and research around the tools are often not made available to end users, developers and regulators. And in some cases, the problematic elements of tools are so integrated into their makeup that even careful researchers might miss them.

Ultimately, both Dixon and Kaye said it is a matter of educating entities about the appropriateness of certain AI tools.

"By the time we finished this report, it had become so clear to us that context matters so much for an AI system and AI governance," Dixon said. "You've got to take into account the local, regional context, and even the cultural context."

The report also suggested the AI governance community advocate for a standardized method of tool evaluation, such as the International Organization for Standardization's ISO 9001 quality management system. That standard's continuous plan-do-check-act improvement cycle and flexibility would work well across various AI governance applications, although the report noted "there will likely be a period of adjustment and experimentation as tool developers, publishers, and others test and fine-tune the PDCA and/or the PDSA cycle specifically for AI governance tools."

The report also recommends that developers and users put robust documentation standards in place for AI tools, so that their use and methods can be easily understood. Such standards would help researchers better test and track which tools work well and when they are appropriately used, researchers said.
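As one purely hypothetical illustration of what such documentation might capture (the record structure and field names below are invented for this sketch, not a standard proposed by the report), a tool's documentation could state its intended context, the metrics it relies on and how it was vetted.

```python
# Hypothetical documentation record for an AI governance tool; fields and
# example values are illustrative only, not drawn from the WPF report.
from dataclasses import dataclass

@dataclass
class GovernanceToolRecord:
    name: str
    intended_domain: str            # e.g. employment screening, credit scoring
    jurisdictions: list[str]        # regulatory contexts the tool was built for
    metrics_referenced: list[str]   # e.g. "four-fifths rule", "SHAP", "LIME"
    known_limitations: list[str]    # caveats users need before deploying it
    vetting_notes: str              # how and by whom the tool was evaluated

record = GovernanceToolRecord(
    name="example-fairness-toolkit",
    intended_domain="employment screening",
    jurisdictions=["US"],
    metrics_referenced=["four-fifths rule"],
    known_limitations=["metric contested outside U.S. hiring contexts"],
    vetting_notes="reviewed against local regulatory guidance before adoption",
)
print(record)
```

A record along these lines would give end users, developers and regulators the context the report found is so often missing.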

Kaye said she hopes stakeholders who see the report take away that while regulatory efforts and AI tools might be new within the last few years, research into what works and what does not has existed for years, and can provide a pathway to ethical, accurate AI usage.

"We wanted to show policymakers that there's this really rich body of scholarly, technical and sociotechnical literature that has already done a lot of work; that can educate and inform how they operationalize AI principles for responsible AI and how they actually implement AI governance," she said.

