As artificial intelligence has gained traction in global society, increasing attention is being paid to developing those systems responsibly.

Enter the role of an algorithmic auditor, a position that entails inspecting a product for any potential deficiencies or biases. It is a task that companies will have to take seriously: the EU's AI Act requires models with greater impacts on society to meet higher risk management standards. And U.S. President Joe Biden's executive order on AI included a provision requiring developers to share their red-team results with the government to identify any potential risks.

That is the context within which the International Association of Algorithmic Auditors was established late this year. Founded by a group of leaders in the technology ethics field, the organization aims to be a central hub for auditors by providing resources, training and eventually certification standards. But it also hopes to be a player in creating auditing policies companies will have to consider when developing AI governance programs.

Gemma Galdon-Clavell, the founder and CEO of Eticas Consulting and one of the lead members of the IAAA, said the need for an association became apparent last year, when she noticed many policymakers were discussing similar themes around AI but not communicating with one another about those overlaps. She and some other members gathered in Barcelona and Brussels to meet with those stakeholders, who encouraged them to create a formal body that could be consulted.

"One of the things we are seeing is that a lot of people providing these AI governance platforms for companies to integrate, they often work in lots of different ways," she said. "None of them have been bound by any kind of external control, and even some of our clients were like, 'We invested in this platform that's supposed to do bias identification and mitigation,' and we would have to look at it and say 'But that's not what it's doing.'"

"So how to organize and create incentive for better accounting and bookkeeping, and better documentation of AI processes, all of those things don't exist yet," Galdon-Clavell continued. "And so the auditing community, we can create those incentives in the whole industry for things to be done better, so that by the time we come in and inspect, we can verify those things have been done well."

Audits are frequently conducted within companies to ensure regulatory compliance. But Shea Brown, the founder and CEO of Babl AI and another central member of the IAAA, said those evaluations typically have an internally focused goal meant to protect companies. Algorithmic audits, in contrast, have an outward focus, he said.

"Regulators, the public, everybody is demanding that these systems not harm people external to the company," he said. "We care about whether this algorithm is going to harm either people or society."

That demand for accountability has grown alongside public concern about how AI, particularly generative systems, can be used to spread disinformation or amplify bias against marginalized populations. An international survey conducted by consulting firm KPMG and the University of Queensland in Australia found three-fifths of respondents were wary of trusting AI systems, although two-thirds were also optimistic it could improve their lives. The Pew Research Center found there is growing concern about the use of AI in the U.S., with the majority of people perceiving a negative effect on areas such as personal privacy.

The IAAA is particularly focused on creating ethical standards for AI. It promotes resources from the Ada Lovelace Institute and the research organization Data & Society, both of which are concerned with how data and AI can be used equitably and to improve society. Eticas and Babl AI both provide AI bias auditing services.

Those third-party auditing services will be the primary focus of the IAAA, but Galdon-Clavell said the organization will also be a resource for freelancers and internal auditors, with the hope of professionalizing the industry. That means providing an enforcement mechanism down the line as well, such as investigating members who do not adhere to the standards it develops.

"Just like how we don't want vaccines that don't go through clinical trials, we don't want AI systems that haven't been audited," she said. "We need to make sure that the audit process contributes to those systems being better for society."