With the potential risks involved in using artificial intelligence — particularly generative AI — for business practices, it is important that companies assess AI systems for risk to the organization.

Much like other forms of third-party risk management, organizations should complete an initial assessment, then establish a regular cadence for assessments throughout the technology's life cycle.

Companies using generative AI systems should implement an appropriate AI governance program that includes policies, standards and guidance covering the full life cycle of employing an AI system, from procurement to sunsetting.

Embedding AI assessments within a governance program helps organizations ensure their systems are reliable, ethical and compliant, ultimately fostering trust and accountability in their AI initiatives.

Procurement

Prior to procuring an AI system, it's essential to understand what the system is intended to do for the company. Too often, companies onboard a technology product thinking it will help achieve certain goals, only to find they didn't understand those goals well enough.

The initial assessment process should begin with understanding all the use cases intended for the AI system, covering both its current state and the desired future state. It's also important to consider the onboarding process, budget, resources and support required to implement the technology.

Involving multiple business units in this determination provides a fuller understanding of the system's scope, as well as an opportunity to demonstrate its value to the organization. It may also be possible to pool the budgets of multiple teams to ensure the resulting tool is the right fit.

Creating a comprehensive AI assessment

The goal of an AI assessment is to ensure the system is fair and compliant, performs as expected, and helps the organization meet applicable strategic goals. An AI assessment should be thorough, examining how the technology will be used, its accuracy and performance, its proportionality and appropriateness, its privacy and security aspects, and ethical considerations.

Pre-deployment assessment. Prior to choosing an AI tool, organizations should establish the purpose and scope of its intended use. It's important to understand how different business units intend to use the tool and the goals they seek to achieve. And, since goals and practices change, potential future uses should be explored alongside existing needs.

To make sure the tool aligns with legal obligations and internal policies, and achieves the goals set out for it, it's also important to understand how the system works, including its key features, the types of data it requires as input and the outputs it is expected to produce.

Additionally, it's important to know how the system is deployed. Is it on premises or cloud-based? If it's cloud-based, how will data be protected? Will that data be used to train the model? And who else will have access to it, even in a de-identified state?

And, of course, as with any other new tool or system, organizations should conduct a privacy risk assessment whenever an AI program impacts personal information.
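Teams that want to operationalize these questions sometimes capture them as structured data, so answers can be compared across candidate tools and revisited after deployment. Below is a minimal sketch in Python; the field names and sample answers are illustrative assumptions, not a standard schema.

```python
# Minimal sketch of a pre-deployment questionnaire captured as structured
# data. All field names and sample answers are illustrative assumptions.
pre_deployment_assessment = {
    "system": "example-genai-tool",            # hypothetical system name
    "intended_uses": ["drafting marketing copy"],
    "potential_future_uses": ["summarizing customer support tickets"],
    "key_features": ["text generation"],
    "input_data_types": ["prompts", "customer emails"],
    "expected_outputs": ["draft text"],
    "deployment_model": "cloud",               # or "on-premises"
    "vendor_trains_on_our_data": False,
    "third_party_access": ["vendor support staff"],
    "processes_personal_information": True,
}

# Personal information in scope triggers a privacy risk assessment.
if pre_deployment_assessment["processes_personal_information"]:
    print("Conduct a privacy risk assessment before procurement.")
```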

Post-deployment assessments. Organizations should appoint an owner for AI programs and task that individual with establishing a regular cadence to review AI practices against regulatory requirements, internal policies and privacy notices, where applicable.

Once the tool is implemented, regularly review and update the information gathered in the pre-deployment assessment. Continue to assess how the organization is using the system and confirm it's working as intended and expected.

To that end, AI assessments should include methods to test the accuracy of outputs and look for biases across different demographic groups. Be sure to measure the effectiveness of any steps taken to promote fairness in this process. Look at what data is being put into the system, find out how employees are using it, and compare that to the pre-deployment assessment.
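As one illustration of such a method, the sketch below computes a demographic parity gap (the spread in favorable-outcome rates across groups) over hypothetical output records. The group labels, sample data and review threshold are all assumptions; the right fairness metric and tolerance depend on the use case.

```python
from collections import defaultdict

# Hypothetical output records: (demographic_group, outcome), where
# outcome is 1 for a favorable decision and 0 otherwise. The groups
# and data here are illustrative only.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def selection_rates(records):
    """Favorable-outcome rate per demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for group, outcome in records:
        counts[group][0] += outcome
        counts[group][1] += 1
    return {g: fav / total for g, (fav, total) in counts.items()}

rates = selection_rates(records)
# Demographic parity gap: spread between the highest and lowest
# favorable-outcome rates across groups.
parity_gap = max(rates.values()) - min(rates.values())

# The threshold is a policy choice, not a universal constant.
THRESHOLD = 0.2
status = "flag for review" if parity_gap > THRESHOLD else "within tolerance"
print(f"rates={rates}, gap={parity_gap:.2f}: {status}")
```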

If the AI system impacts personal information, privacy and data protection concerns will also need to be reviewed. Even in the absence of AI-specific legislation, privacy and data protection laws apply when AI systems use personal information.

Depending on the jurisdictions in which an organization operates, regulatory obligations will vary. But aligning an AI program with fundamental privacy principles like transparency, data minimization, access and choice will help ensure an organization meets legal obligations and customers' expectations.

When, how often to assess

Once an AI system is procured, companies should perform regular assessments to ensure ongoing oversight and maintain the system's effectiveness, fairness and compliance. The frequency and timing of assessments depend on factors including the risk of the AI system to the organization and individuals, how the organization uses the system, and changes in the technology and regulatory environment.

When determining the cadence for assessing AI systems, organizations should consider internal and external factors. Look at the laws applicable to the organization; some regulations may require periodic evaluations to ensure AI systems comply with laws related to data privacy, fairness and accountability. For example, the EU Artificial Intelligence Act and the Colorado Artificial Intelligence Act require assessments of high-risk AI systems.

Additionally, certain industries, like health care and finance, have stringent regulatory requirements that mandate regular assessments to ensure compliance with legal and ethical standards.

Then, look at how the organization is using the tool, the types of information the system will process, and for what purposes. Consider that information in the context of the company's risk profile. Organizations with low risk tolerance may conduct more frequent assessments to mitigate potential risks associated with AI deployment. Internal policies, ethical codes and customers' expectations should also be considered, and organizations should be realistic about available resources to determine the proper cadence.
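One lightweight way to encode the outcome of that analysis is a risk-tier-to-interval mapping that scheduling or GRC tooling can consume. The tiers and intervals below are illustrative assumptions, not regulatory requirements; actual cadence should come from the organization's own risk analysis.

```python
from datetime import date, timedelta

# Illustrative mapping of risk tier to review interval. The tiers and
# intervals are assumptions; adjust to the organization's risk profile.
REVIEW_INTERVALS = {
    "high": timedelta(days=90),
    "medium": timedelta(days=180),
    "low": timedelta(days=365),
}

def next_assessment_due(risk_tier: str, last_assessed: date) -> date:
    """Next scheduled assessment date for an AI system."""
    return last_assessed + REVIEW_INTERVALS[risk_tier]

print(next_assessment_due("high", date(2024, 1, 15)))  # 2024-04-14
```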

Once that cadence is established, there may still be circumstances in the AI landscape under which systems should be reassessed, such as when regulations or industry standards change. Plus, many consumers are new to AI, and their attitudes and expectations will evolve in ways organizations will want to respond to.

There may also be internal changes that warrant an off-schedule assessment. Business practices change: the organization may expand into new jurisdictions with different laws and expectations, significantly change how the tool is used, or expand the data inputs in ways that need a closer look.

Updates to the AI model or the underlying technology may require a review of its continued effectiveness and compliance. A breach, a vulnerability, anomalous outputs or poor user feedback may also prompt an off-schedule assessment.
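These triggers can also be written down explicitly, so an off-schedule review is a deliberate decision rather than an ad hoc one. The sketch below models them as simple flags; the trigger names are assumptions that mirror the events described above.

```python
# Illustrative off-schedule reassessment triggers; the flag names are
# assumptions mirroring the events described above.
triggers = {
    "regulation_or_standard_changed": False,
    "expanded_to_new_jurisdiction": False,
    "use_or_data_inputs_changed": True,
    "model_or_technology_updated": False,
    "breach_or_vulnerability": False,
    "anomalous_outputs_or_poor_feedback": False,
}

fired = [name for name, hit in triggers.items() if hit]
if fired:
    print("Off-schedule assessment warranted:", ", ".join(fired))
```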

Prepare for these events with a documented AI assessment procedure and template, and make sure stakeholders who will be involved in an assessment are aware of their responsibilities.

Documentation, reporting and maintenance

Assessments, of course, are only as good as how well organizations respond to their findings. This means all assessments should be documented and tracked over time to identify patterns or systemic issues. Additionally, organizations will want to create a reporting mechanism to ensure results of assessments are communicated to leadership and other stakeholders.
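Tracking over time is easier when each assessment is captured in a consistent record. A minimal sketch of such a record follows, with illustrative fields rather than a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical assessment record; the fields are illustrative, not a
# prescribed schema.
@dataclass
class AssessmentRecord:
    system_name: str
    assessed_on: date
    assessment_type: str     # "pre-deployment", "scheduled", "off-schedule"
    findings: list[str] = field(default_factory=list)
    remediation_owner: str = ""
    status: str = "open"     # "open" or "closed"

def open_findings(records: list[AssessmentRecord]) -> list[AssessmentRecord]:
    """Unresolved assessments to surface in leadership reporting."""
    return [r for r in records if r.status == "open"]
```

Keeping records in one structure, whatever the tooling, makes it straightforward to query for recurring findings across systems and assessment cycles.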

Organizations will implement AI assessments differently. Some will choose to blend an AI assessment into an existing privacy impact assessment process, while others may treat them as standalone assessments.

Software solutions can help automate and document assessments, or the process can be handled manually.

Ultimately, the approach with the greatest likelihood of being followed consistently is the right one for the organization.

Once assessments are documented and reviewed, organizations should update AI systems based on the results to ensure they continue to work efficiently and fairly. This includes adapting the assessments themselves and their cadence as needed.

Successful AI assessments require a mix of people, process and technology. Once a process is created and the technology is in place, organizations need to provide training on the importance of conducting AI assessments, when they should be done and how to go about doing so.

A well-maintained AI system has myriad benefits for organizations, and appropriate assessment practices will help ensure it continues to propel the organization toward its business and strategic goals.

Jodi Daniels, CIPP/US, is the founder and CEO of Red Clover Advisors. This article does not constitute legal advice.