In February, eight global technology companies committed to applying the U.N. Educational, Scientific and Cultural Organization's Recommendation on the Ethics of Artificial Intelligence, not only publicly declaring their will to use AI ethically, but demonstrating that far-sighted businesses are preparing the AI governance processes necessary to comply with AI regulations that will soon come into force. Because the recommendation was adopted by UNESCO's 193 member states, this public-private alliance has the potential to strengthen AI governance at the international level.

Regulatory frameworks for AI technologies are often informed by principles established through international consultations. These consultations are usually formalized as one of three instruments: internationally acknowledged recommendations, voluntary frameworks or standards. Well-known examples include recommendations such as UNESCO's and the Organisation for Economic Co-operation and Development's Recommendation of the Council on Artificial Intelligence, voluntary frameworks like the U.S. National Institute of Standards and Technology's AI Risk Management Framework, and standards such as the International Organization for Standardization and International Electrotechnical Commission's standard 22989 on AI concepts and terminology and standard 42001 on AI management systems.

Through their fundamental principles and guidelines, these instruments may offer organizations early visibility into forthcoming regulations. Fundamental principles often drive emerging legal requirements, while guidelines may offer strategies for operationalizing AI governance frameworks capable of meeting those requirements, helping organizations appreciate how straightforward or complex achieving compliance will be for a given system or context.

Fundamental principles

Across the three instruments, the principles of fairness, transparency and accountability recur and are, therefore, the most likely to be integrated into future legislation. While some instruments, like UNESCO's recommendation and the NIST framework, include explicit privacy protection principles, others elaborate on key privacy principles in the context of AI or incorporate privacy into other principles. This is the case, for example, in the OECD's AI recommendation, which mentions privacy and data protection within its principles of "human-centred values and fairness" and "robustness, security and safety."

Analyzing how these principles are defined by different instruments will provide a first glimpse of the knowledge and processes organizations will need to establish. It will also help companies identify priorities and focus points while interpreting fundamental principles. 

Take the fairness principle as an example. The OECD's "human-centred values and fairness" principle requires that AI actors "respect the rule of law, human rights and democratic values" and "implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of art." UNESCO's "fairness and non-discrimination" principle, by contrast, focuses on the need to ensure the benefits of AI technologies are available and accessible to all. It highlights the necessity for equitable access to, and participation in, the AI system life cycle irrespective of race, gender, age, religion, disability, etc. It indicates that all reasonable efforts should be made to minimize discriminatory or biased applications, and that effective remedies should be provided in case of violations.

The OECD's and UNESCO's fairness principles cover a somewhat wider spectrum of concepts than ISO 22989 and the NIST framework, which define fairness solely in relation to bias and discrimination. ISO 22989 defines unfairness, rather than fairness, as the type of bias that results in "unjustified differential treatment that preferentially benefits certain groups more than others." NIST's framework outlines characteristics of trustworthy AI, including "fair with harmful bias managed," specifying that "fairness in AI includes concerns for equality and equity by addressing issues such as harmful bias and discrimination." It also highlights tradeoffs that can be read as potentially increasing the complexity of compliance. For example, complying with privacy requirements, like the principle of data minimization, may make it more difficult to assess fairness, whether fairness is understood as equitable access or as the management of harmful bias.
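To make that tradeoff concrete, consider a minimal sketch in Python, using hypothetical data and a hypothetical demographic_parity_gap helper rather than anything drawn from the instruments themselves. It shows that assessing fairness as the management of harmful bias depends on retaining the very protected attribute that strict data minimization would remove.

def demographic_parity_gap(outcomes, groups):
    # Difference in positive-outcome rates across groups: 0 means parity.
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions (1 = approved), with the protected attribute retained.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(outcomes, groups))  # 0.5, a substantial disparity

# Under strict data minimization, the protected attribute in `groups` would
# never be collected, and this assessment could not be run at all.

The point is not the specific metric; any bias measure defined over groups faces the same dependency, which is why the two principles can pull in opposite directions.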

Guidelines

Guidelines generally provide more actionable input for establishing organizational AI governance.

For example, NIST's AI framework places AI governance at the center of four functions: govern, map, measure and manage. The associated NIST AI RMF Playbook, which details actions for achieving the outcomes laid out in the NIST AI framework, is probably the most complete and detailed set of guidelines for AI governance so far. For each function, it provides detailed definitions, suggested actions, documentation guidelines and references.

UNESCO's Ethical Impact Assessment, a companion document to its Recommendation on the Ethics of AI, guides AI system procurers and developers in assessing whether suitable choices are made with respect to data and algorithms, and whether appropriate governance is established. The document can help in identifying roles and responsibilities, as well as risks and the associated prevention, mitigation, redress and monitoring measures. As with most impact assessment instruments, this should be a living document.

As a final example, ISO/IEC 42001 helps organizations establish, implement, maintain and continuously improve an AI management system that is integrated with the organization's processes and overall management structure. Covering roles and responsibilities, planning, resources, communications, operations and performance evaluation, the standard provides a good overview of what it takes to establish an AI governance structure, including a detailed description of the controls that may be implemented to meet organizational objectives and address risks. The joint technical committee on AI, which published this standard, is also developing a set of other standards that will certainly help in operationalizing AI governance.

The number of instruments available, whether recommendations, voluntary frameworks or standards, is already substantial and is increasing rapidly. Large tech companies are also developing their own voluntary frameworks and often provide public guidelines and reports on their experiences.

Instruments are also being explored at the intersection of privacy protection, for which many organizations already have a clear governance structure, and trustworthy AI. The OECD recently launched the OECD.AI Expert Group on AI, Data, and Privacy, and privacy regulators are playing an active role in supporting organizations. The U.K. Information Commissioner's Office, for example, produced guidance on AI and data protection, while France's data protection authority, the Commission nationale de l'informatique et des libertés, published an action plan for the deployment of AI systems that respect individuals' privacy, focusing principally on specific applications, including augmented cameras and generative AI, particularly chatbots.

While navigating such a large number of instruments may be overwhelming, familiarity with at least some of them will strengthen an organization's understanding of how to plan AI governance and prioritize efforts, of the major hurdles that may be involved, and of the most appropriate solutions to implement.