Anyone who hasn't been hiding under a rock for the last 18 months knows artificial intelligence is the buzz of the privacy profession.

With rapid developments in capability and availability, many privacy professionals are naturally starting to ask how the safe and responsible use of this technology can be ensured.

Last year, the International Organization for Standardization published a new international standard, ISO/IEC 42001:2023, providing a template AI management system, or AIMS, organizations can adopt.

AI management system clauses

The standard outlines clauses setting out the high-level requirements for establishing an AIMS, which must be implemented by any organization seeking certification.

Context of the organization: Like many other standards, ISO/IEC 42001 requires organizations to fully understand their context by determining the external and internal issues relevant to the AIMS, understanding the needs and expectations of interested parties, and determining the scope of the AIMS. This must include understanding the organization's role relative to the AI systems — for example, provider, producer, user and partner.

Leadership: The standard requires organizations to set a clear tone at the top in relation to the AIMS. This includes establishing an AI policy and defining roles, responsibilities and authorities.

Planning: The standard sets out requirements to design processes to carry out AI risk assessments and processes to treat the identified risks. There is also a specific requirement to define a process for AI system impact assessments, which involves assessing the potential consequences for individuals and society that can result from the development or use of AI systems. The assessment must include the deployment, intended use and any "foreseeable misuse" of the AI system.
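
The standard does not prescribe a format for these assessments. Purely as an illustration, a minimal impact assessment record covering the elements above could be captured as a structured object. The sketch below is in Python, and every field name is an assumption for the example rather than a term taken from the standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIImpactAssessment:
    """Illustrative AI system impact assessment record.

    Field names are assumptions for this sketch; ISO/IEC 42001
    does not prescribe a format.
    """
    system_name: str
    assessment_date: date
    deployment_context: str           # where and how the system is deployed
    intended_use: str                 # the use the system was designed for
    foreseeable_misuse: list[str]     # reasonably foreseeable misuse scenarios
    affected_individuals: list[str]   # individuals or groups potentially affected
    societal_consequences: list[str]  # potential consequences for society
    mitigations: list[str] = field(default_factory=list)

# Example usage with an invented system
assessment = AIImpactAssessment(
    system_name="resume-screening-model",
    assessment_date=date(2024, 5, 1),
    deployment_context="HR recruitment pipeline",
    intended_use="Shortlisting applicants against role requirements",
    foreseeable_misuse=["Automated rejection without human review"],
    affected_individuals=["Job applicants"],
    societal_consequences=["Potential discriminatory hiring patterns"],
    mitigations=["Human-in-the-loop review", "Periodic bias testing"],
)
```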

Support: Organizations are required to determine and provide the resources needed to establish, maintain and improve the AIMS. There is a specific requirement for the organization to ensure individuals have appropriate competencies and to retain evidence of this. Awareness is also crucial, with requirements to ensure employees are made aware of the AI policy and other key items. The standard also requires organizations to set out a communication schedule for both internal and external communications.

Operation: Requirements are outlined for performing the risk assessment, risk treatment and impact assessment processes established as part of the planning clause and for documenting the results.

Performance evaluation: The standard requires organizations to monitor and measure the performance of the AIMS, but provides freedom to determine what exactly should be monitored and measured. As with all Type A standards, organizations must embed an internal audit program that assesses the AIMS. There must also be a management review process in which the organization's management team periodically meets to ensure the continuing effectiveness of the AIMS.

Improvement: The standard requires a commitment to continual improvement of the AIMS. This includes addressing identified nonconformities and implementing corrective actions.

Controls

The standard then sets out the controls organizations can apply in response to the overall organizational AI risk assessment. Organizations must justify the inclusion or exclusion of each control.

Policies related to AI: A policy for the development and use of AI systems must be in place, and the impact it will have on other organizational policies must be considered.

Internal organization: Organizations must define and allocate relevant roles and responsibilities and implement measures to report concerns about AI within the organization.

Resources for AI systems: Organizations must identify resources required during the AI life cycle, including data utilized by the AI system, tooling resources, system and computing resources, and, of course, human resources, including their competencies.

Assessing impacts of AI systems: Processes must be established for assessing the impact of the AI system on individuals, groups and society. These assessments must be carried out and documented accordingly.

AI system life cycle: Organizations must establish objectives and associated processes for the responsible development of AI systems. They must also design and deploy verification mechanisms, deployment plans, and operational and monitoring processes. There must be associated technical documentation and system records or event logs.
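
The standard leaves the form of such documentation and logging to the organization. As a minimal, illustrative sketch only, AI system events could be captured as structured log lines; the function and field names below are assumptions, not requirements of the standard.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal sketch of structured event logging for an AI system.
# The event fields are illustrative assumptions; ISO/IEC 42001 does
# not prescribe a log format.
logger = logging.getLogger("ai_system_events")
logging.basicConfig(level=logging.INFO)

def log_ai_event(system: str, event_type: str, details: dict) -> None:
    """Record an AI system event as a structured JSON log line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "event_type": event_type,  # e.g. "prediction", "deployment"
        "details": details,
    }
    logger.info(json.dumps(entry))

# Example: recording a hypothetical model deployment event
log_ai_event(
    system="credit-scoring-model",
    event_type="deployment",
    details={"version": "2.1.0", "approved_by": "ai-governance-board"},
)
```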

Data for AI systems: Organizations must embed appropriate data management processes related to the data's use in the AI system, which cover acquisition, quality, provenance and preparation of the data.
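
How these processes are implemented is likewise left open. As one illustrative sketch, a simple pre-training quality gate over acquired data might look like the following; the specific checks and the threshold are assumptions for the example.

```python
# Sketch of a pre-training data quality gate. The threshold and checks
# are illustrative assumptions, not requirements of the standard.

def check_data_quality(records: list[dict], required_fields: list[str],
                       max_missing_rate: float = 0.01) -> list[str]:
    """Return a list of quality issues found in the dataset."""
    issues = []
    total = len(records)
    if total == 0:
        return ["dataset is empty"]
    for name in required_fields:
        missing = sum(1 for r in records if r.get(name) in (None, ""))
        if missing / total > max_missing_rate:
            issues.append(f"{name}: {missing}/{total} values missing")
    return issues

# Example usage against a toy dataset
dataset = [
    {"applicant_id": "a1", "income": 42000},
    {"applicant_id": "a2", "income": None},
]
print(check_data_quality(dataset, ["applicant_id", "income"]))
# -> ['income: 1/2 values missing']
```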

Information for interested parties: Relevant internal and external interested parties must be appropriately updated. This includes documenting information for users of the AI system, external reporting on adverse impacts and communicating incidents. Organizations must document their obligations to report information about the AI system.

Use of AI systems: Objectives and processes for the responsible use of AI systems must be defined, and the systems must be used as intended.

Third-party and customer relationships: Organizations must allocate responsibilities to third parties within their AI system supply chain. This includes ensuring the use of AI systems provided by suppliers aligns with the organization's approach to AI and that the use of AI aligns with customers' needs and expectations.

Key takeaways

  • The ISO/IEC 42001 standard is light on detail and does not impose prescriptive requirements or set out guiding principles related to the use of AI. This will suit some organizations that are already developing their own frameworks but will not benefit organizations looking for an all-in-one self-regulation template.
  • By extension, the standard is jurisdiction neutral. Therefore, it may work well for multinationals looking to implement a single group framework or organizations based in jurisdictions without comprehensive AI regulation. However, it may not be a golden ticket for organizations that need to meet specific legal requirements.
  • When it comes to designing and carrying out AI system impact assessments, it is natural that these would take a similar form and structure to a data protection impact assessment or privacy impact assessment. Given the substantially similar objective — assessing impact on individuals — it is likely organizations will benefit from integrating the two processes.
  • Throughout the standard, there is a focus on "competence" of people working with AI. Organizations need to ensure the right people are in the right places. This will include those working on the actual operation of AI systems, as well as those working on the governance of AI — including safety, security and privacy.
  • The standard is "Type A," meaning organizations can be certified by an external body as conforming to the standard. However, those seeking certification should ensure a certifying body registered with their national accreditation body is used.
  • The standard is part of ISO's "harmonized structure," meaning the clauses and terminology match many of ISO's other standards and will be comfortably familiar to any privacy pros who have been involved with ISO/IEC 27001:2022 or ISO/IEC 27701:2019. It can also be integrated with organizations' other management systems.

The ISO/IEC 42001 standard offers a flexible framework for AI management systems, focusing on high-level requirements and adaptable processes. But, like all ISO standards, it cannot be treated as a substitute for compliance with local laws and regulations.

Organizations may use the standard as a starting point, but will still be required to put in a lot of legwork to implement an effective AI governance program.

Henry Davies, LL.M, CIPP/E, CIPM, FIP, is the data protection lead for Europe, the Middle East and Africa at Likewize and is a member of the IAPP's Certification Advisory Board.