As companies race to adopt emerging artificial intelligence technology, the need for oversight and ethical use cannot be overstated. Meanwhile, with their experience balancing organizational aspirations and legal realities, privacy professionals are increasingly finding themselves tasked with managing AI governance.

While privacy pros have a wealth of knowledge and experience with sensitive categories of information, such as personally identifiable information, and with the legal requirements for processing and storing that information, this expertise alone is not sufficient to address the challenges AI technology presents.

AI governance should be a shared responsibility, leveraging multiple assessments carried out by multiple teams.

AI assessments and the need for collaboration

Companies introducing AI systems should establish AI governance programs that include policies, standards and procedures from business case justification to decommissioning.

Privacy pros can fold AI governance into several existing processes, including privacy impact assessments. Even so, new forms of documentation and analysis are necessary to capture the nuances of AI systems from procurement to post-deployment.

AI assessments have been proposed as a way to ensure systems are reliable, ethical and compliant, ultimately fostering trust and accountability in AI initiatives. To that end, privacy experts have begun considering a multistage AI assessment approach, with one assessment before production and numerous assessments after, as opposed to one broad assessment.

Assessing an AI system requires an in-depth understanding of how the system is built, what data it draws upon, where that data flows to and from, how the data is kept secure, and how the AI model is built to be secure. This nuanced understanding of architecture and engineering may not be in the toolbox of privacy-turned-AI-governance professionals.

Instead, these concerns may be more familiar to their counterparts in security and risk, who regularly probe new technology prior to its operation — asking technical questions around the flow of data, implementation of controls and ways to continuously monitor new solutions for vulnerabilities.

This is why a multistage, interdisciplinary approach should be used to review AI systems in greater detail at various stages in the AI life cycle. Having smaller, more specialized AI assessments spread throughout the life cycle allows subject matter experts from other business departments to thoroughly review AI systems, spreading out the workload.

A multistage interdisciplinary approach

Breaking the assessment of an AI system into multiple smaller, more specialized assessments allows for more thorough reviews of the system, carried out by the people in the company with the most specialized knowledge.

Prior to procurement, for example, vendors and software-as-a-service solutions are put through a series of assessments including requests for proposals, security reviews and legal reviews. Questions related to the AI system can and should be incorporated into the RFP when evaluating multiple AI systems for a business purpose.

The selected AI system should then be reviewed again by the security and third-party risk teams prior to procurement to identify any major risks the business will need to accept, implementation and monitoring challenges, and red flags legal will need to address in the final agreement with the vendor.

After procurement, the internal team's actual use of the AI system will need to be assessed on multiple fronts, including whether the team has processes and procedures that speak to the ethical and responsible use of the AI system. Another example is reviewing the data used to support the AI and determining whether it is properly secured and used in line with the data-minimization and purpose-limitation principles established in the company's privacy notice.
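To make this concrete, the minimal sketch below shows one way a governance team could record which specialized reviews belong to which life-cycle stage and which team owns each. The stage names, owning teams and checks are hypothetical examples drawn from the scenarios above, not a prescribed framework.

```python
# Illustrative sketch only: the stages, owning teams and checks are
# hypothetical examples, not a prescribed assessment framework.
from dataclasses import dataclass

@dataclass
class Assessment:
    stage: str          # where in the AI life cycle the review happens
    owner: str          # team with the most specialized knowledge
    checks: list[str]   # what that team is responsible for verifying

ASSESSMENT_PLAN = [
    Assessment("pre-procurement", "procurement",
               ["AI-specific RFP questions answered"]),
    Assessment("pre-procurement", "security / third-party risk",
               ["major risks identified", "monitoring feasibility",
                "red flags for the legal agreement"]),
    Assessment("post-procurement", "privacy",
               ["data minimization", "purpose limitation vs. privacy notice"]),
    Assessment("post-procurement", "business team",
               ["ethical and responsible-use procedures documented"]),
]

def assessments_for(stage: str) -> list[Assessment]:
    """Return the specialized reviews due at a given life-cycle stage."""
    return [a for a in ASSESSMENT_PLAN if a.stage == stage]

for a in assessments_for("post-procurement"):
    print(f"{a.owner}: {', '.join(a.checks)}")
```

Even a simple shared record like this makes it clear to every team, and to the business sponsor, which reviews remain at each milestone.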

A multistage approach allows for interdisciplinary collaboration at key milestones, with multiple teams taking a deep dive into the AI system from different angles.

Decentralization challenges and solutions

This decentralized approach, however, is not without its drawbacks. It can be difficult for the internal business sponsor of the AI system to keep track of feedback from multiple assessments and departments. Spreading out the assessment in this way can also slow down the approval and review of the AI system.

It is especially challenging when multiple reviews are conducted simultaneously. For example, during the development stage, the privacy team may review the purpose limitation of the data supporting the AI system, while the security team may review the transfer of data from the database to the AI system and from the AI system to a business intelligence solution.

The privacy team may advocate for the data to be segregated into a new table, separate from other information, while the security team may deem the transfer of data from the original table into the AI system secure so long as the correct end-to-end encryption is used. Reconciling feedback like this and conveying it back to their team can be difficult for an internal business sponsor.

This is why a company should establish an AI governance council to manage the AI life cycle. The council should establish a workflow to streamline the assessments performed and provide feedback for the internal business sponsor to remediate. The council minimizes confusion by serving as a central governing body to dictate actions and convey information.

The council not only establishes the workflow but also sets the service-level agreements for completing AI assessments. Codified in policies and procedures, these SLAs keep the workflow moving as teams perform their assessments and promote transparency between the council and business sponsors.
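As a rough sketch of how such SLA tracking might work, a council could log when each review begins and flag those that exceed the agreed response time. The teams, dates and SLA durations below are hypothetical examples, not recommendations.

```python
# Minimal sketch of SLA tracking for AI assessments; the teams,
# dates and SLA durations are hypothetical examples.
from datetime import date, timedelta

# Agreed response time, in days, per reviewing team.
SLA_DAYS = {"security": 10, "privacy": 10, "legal": 15}

# Open assessments the council is tracking: (team, date review began).
open_assessments = [
    ("security", date(2024, 5, 1)),
    ("privacy", date(2024, 5, 6)),
    ("legal", date(2024, 4, 20)),
]

def overdue(assessments, today=None):
    """Return the assessments that have exceeded their team's SLA."""
    today = today or date.today()
    return [(team, started) for team, started in assessments
            if today > started + timedelta(days=SLA_DAYS[team])]

for team, started in overdue(open_assessments, today=date(2024, 5, 10)):
    print(f"{team} review opened {started} is past its {SLA_DAYS[team]}-day SLA")
```

A report like this gives the council and the business sponsor one shared view of which reviews are stalling the workflow.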

Teamwork makes the dream work

In navigating these new and exciting times, AI governance professionals should keep these considerations in mind as they establish and improve AI governance within their organizations.

While privacy pros may be among the first wave of certified AI governance professionals, we cannot do this alone. Reach out to your counterparts in security, risk and elsewhere in the business to share the responsibility.

Casey Flores, AIGP, CIPP/US, is lead information security analyst, GRC and privacy, at Tailored Brands.