
Launching an AI governance program? Start with your ‘why’


Nearly 30 years ago, the great astrophysicist Carl Sagan wrote, "I have a foreboding of an America in my children's or grandchildren's time, when awesome technological powers are in the hands of a very few and no one representing the public interest can even grasp the issues."

His words seem prescient now that a tool as powerful as artificial intelligence is pervasive in our everyday lives. Companies are increasingly leveraging AI in various capacities to drive both revenue growth and operational efficiencies.

Despite its benefits, AI can present a number of legal and ethical risks, and companies that are providing AI services — or considering interacting with AI in any capacity — should be mindful and intentional about AI governance. In fact, most companies haven't implemented processes to ensure their AI is trustworthy and responsible, even though more than a third globally report using AI in their organization.

With that in mind, let's explore tips and best practices for launching a successful corporate AI governance program.

What does AI governance mean?

AI governance is a system or framework designed to direct, manage and monitor the AI activities of an organization. If your company is designing, developing or deploying AI systems, you should consider building out an AI governance program tailored to how your company interacts with those systems.

This is not a one-size-fits-all effort. Similar to other governance programs, such as privacy and information security, it is critical to scope out and understand how your company leverages AI throughout the organization, and tailor your governance program according to your specific organizational needs.

Your governance program should also be designed to flex as needed, as your company's interaction with AI shifts and grows over time.

Where do you start when building an AI governance program?

Before you start building out your AI governance program, it’s helpful to start with "the why." Why does AI governance matter, and why does it matter for your company? The regulatory environment is, of course, a critical component. Regulations applying to AI already exist, and the regulatory landscape is changing rapidly.

For example, consider comments made by U.S. Federal Trade Commission Commissioner Alvaro Bedoya during the IAPP Global Privacy Summit 2023, where he stressed that, contrary to the views of many, AI is already regulated in the U.S.

The Organisation for Economic Co-operation and Development is also tracking the more than 800 AI policy initiatives underway across 69 countries and territories, including developments in the EU with the proposed EU AI Act.

Despite this raft of regulation, AI governance is more than just compliance. It's about doing the right thing, and tying your "why" back to your end users — your clients, consumers and employees — and the benefits intended for those different stakeholder groups.

Moreover, 57% of global consumers view the use of AI in collecting and processing personal data as a significant threat to privacy, according to the 2023 IAPP Privacy and Consumer Trust Report.

So, when you’re thinking about the "why," it is important to appreciate that AI is becoming increasingly prevalent in our daily lives. There are also growing concerns about the potential risks and challenges associated with AI, such as bias and discrimination, job displacement, privacy and security, and ethical and social issues.

It's especially critical to connect your "why" for AI governance back to your company's vision, mission and core values, so you build your AI governance program while accounting for organizational context. Otherwise, your AI governance program may be viewed internally as a stand-alone, separate program that is not inherently connected or embedded into your company's culture and day-to-day operations.

How do you launch an AI governance program?

Consider your stakeholder group. When launching an AI governance program, it's important to identify and include a core set of stakeholders with functions and roles that can identify and mitigate AI-related risks specific to your organization, including but not limited to your legal, compliance, risk, information security and data governance teams. Your stakeholder list may need to be fluid as you scope out AI dependencies across your organization and discover additional uses and interactions with AI systems across teams.

In addition, as you launch your program, you should address the following:

  1. Align on key terms and definitions. There are a number of terms and definitions requiring alignment within an AI governance program. For example, how do you define AI, responsible AI, algorithmic decision making and generative AI?
  2. Map AI dependencies across the organization. To properly scope your AI governance program, you need to understand how your organization currently interacts with AI, as well as its future plans to leverage AI in day-to-day operations. This exercise reduces the likelihood of organizational blind spots, which could lead to avoidable risk exposure, as you build your program; a minimal inventory sketch follows this list.
  3. Leverage existing governance programs. Try to avoid starting from scratch where possible and leverage existing governance programs already in place.
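
To make the dependency-mapping step concrete, the inventory can begin as a simple structured record per AI use case. Below is a minimal sketch in Python; the class, field names and example record are hypothetical illustrations, not a prescribed schema.

    from dataclasses import dataclass, field

    # Hypothetical record for one AI use case found during dependency mapping.
    # Field names are illustrative assumptions; tailor them to your own taxonomy.
    @dataclass
    class AIUseCase:
        name: str                # e.g., "customer support chatbot"
        business_owner: str      # accountable team or individual
        system_type: str         # e.g., "generative AI", "algorithmic decision making"
        vendor_or_internal: str  # third-party product vs. built in-house
        personal_data: bool      # does the system process personal data?
        risk_notes: list[str] = field(default_factory=list)

    # The organization-wide inventory is simply a list of these records.
    inventory = [
        AIUseCase(
            name="Customer support chatbot",
            business_owner="Customer Experience",
            system_type="generative AI",
            vendor_or_internal="third-party",
            personal_data=True,
            risk_notes=["Review vendor contract for data retention terms"],
        ),
    ]

    # Example query: which use cases process personal data and need privacy review?
    needs_privacy_review = [u.name for u in inventory if u.personal_data]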

The IAPP Privacy and AI Governance Report found 50% of organizations building new AI governance approaches are building on top of existing, mature privacy programs. Forty percent also reported building algorithmic impact assessments on top of existing processes for privacy or data protection impact assessments.

Starting from existing privacy programs makes sense from an organizational efficiency standpoint, especially given the likelihood of substantial overlap between your selected stakeholders for AI and privacy governance, such as legal, information security, risk and compliance.

How do you structure an AI governance project plan?

After the launch of your program comes the development of your governance plan, which outlines your specific AI governance policies, accountability structures and organizational processes.

Consider structuring your governance plan around the "Govern" function in the National Institute of Standards and Technology AI Risk Management Framework 1.0. The NIST AI RMF is a voluntary resource for organizations designing, developing, or using AI systems to manage AI risks and promote trustworthy and responsible AI.

This framework includes an element called the NIST AI RMF Core, which summarizes four key function areas: Govern, Map, Measure and Manage. Each function is then divided into categories and subcategories. These functions could be used in any order, but NIST recommends starting with the Govern function, as it supports the other functions.

The Govern function includes the following six categories your AI governance project plan should cover:

  • Category 1: Policies, processes, procedures and practices across the organization related to the mapping, measuring and managing of AI risks are in place, transparent and implemented effectively.
  • Category 2: Accountability structures are in place so the appropriate teams and individuals are empowered, responsible and trained for mapping, measuring and managing AI risks.
  • Category 3: Workforce diversity, equity, inclusion and accessibility processes are prioritized in the mapping, measuring and managing of AI risks throughout the lifecycle.
  • Category 4: Organizational teams are committed to a culture that considers and communicates AI risk.
  • Category 5: Processes are in place for robust engagement with relevant AI actors.
  • Category 6: Policies and procedures are in place to address AI risks and benefits arising from third-party software and data, and other supply chain issues.

Leverage the above structure as a foundational component of your project plan, and tailor it for your organization by designing workstreams and assigning owners for each category and subcategory. You can also consider including a priority score for each workstream and a set of action items, goals and target completion dates to help keep your plan on track.
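
As a rough illustration of that tailoring, here is a minimal sketch that expresses such a plan as a plain Python structure keyed to the Govern categories. The owners, priority scores, action items and target dates are placeholder assumptions rather than recommendations, and only three of the six categories are shown.

    from datetime import date

    # Hypothetical project-plan skeleton keyed to NIST AI RMF "Govern" categories.
    # Owners, priorities, actions and dates are placeholders; replace with your own.
    govern_plan = {
        "Govern 1: Policies, processes and procedures": {
            "owner": "Legal & Compliance",
            "priority": 1,  # 1 = highest
            "actions": ["Draft responsible AI policy", "Define review cadence"],
            "target_date": date(2024, 3, 31),
        },
        "Govern 2: Accountability structures": {
            "owner": "Risk",
            "priority": 1,
            "actions": ["Stand up AI governance committee", "Assign workstream owners"],
            "target_date": date(2024, 2, 29),
        },
        "Govern 6: Third-party software, data and supply chain": {
            "owner": "Information Security & Procurement",
            "priority": 2,
            "actions": ["Add AI questions to vendor assessments"],
            "target_date": date(2024, 6, 30),
        },
        # ...remaining Govern categories follow the same pattern.
    }

    # Example status view: highest-priority workstreams, soonest deadline first.
    top_priority = sorted(
        (name for name, ws in govern_plan.items() if ws["priority"] == 1),
        key=lambda name: govern_plan[name]["target_date"],
    )
    print(top_priority)

Keeping the plan in a structured form like this makes it straightforward to generate simple status views, or to import the workstreams into whatever project-tracking tool your organization already uses.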

Building a responsible AI governance program is never easy, but it is essential if your organization interacts with AI. Remember, start with your "why," use existing frameworks whenever possible and leverage key stakeholders across relevant functions. With this sequence, you can set your program up for success, and help your organization mitigate and address AI-related legal and ethical risks.


