If you were unable to attend the IAPP's inaugural AI Governance Global Conference 2023 in Boston, we have you covered. We attended and summarized several key themes from the event for you.

Artificial intelligence governance programs share some key elements

At many companies, AI governance programs consist of some or all of the following components:

  • A cross-functional stakeholder committee for setting risk tolerance, reviewing AI use cases, and/or developing protocols and policies. 
  • Guiding AI principles, policies and guardrails that provide the company with a framework for buying, developing, integrating and using AI both internally and externally. 
  • AI impact assessments to document and understand risks, mitigations and expected outcomes from a cross-functional perspective. 
  • Internal processes or third-party tools for testing the fairness of AI uses (a minimal sketch follows this list).
  • Training for stakeholders involved in AI development, procurement and use.
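
For the fairness-testing bullet above, here is a minimal sketch of what a simple internal check might look like, using a demographic parity difference: the gap between the highest and lowest positive-prediction rates across groups. The function, data and group labels are hypothetical and for illustration only; real programs typically pair metrics like this with established testing toolkits, documented thresholds and legal review.

```python
# Minimal illustrative sketch of one fairness metric: demographic parity
# difference. All names and data below are hypothetical.
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates
    across groups; 0.0 means all groups receive positives at equal rates."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        if pred == 1:
            positives[group] += 1
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical model outputs for members of groups "A" and "B".
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # ~0.27
```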

Privacy teams play an important role

Privacy teams are playing an important role in AI governance because they understand how to assess risks and apply mitigating controls. Privacy teams also have existing process development and assessment protocols that can be leveraged and customized for AI governance. However, privacy teams do not necessarily own AI governance. At many companies, business stakeholders are stepping up to own or co-own it, especially where AI plays an important role in the company's products and service offerings. Other key stakeholders involved in AI governance include security, legal (including IP and litigation), HR, procurement, data science, technology, product, compliance and risk.

Resources are a challenge

Many companies are struggling with resources for AI governance, especially when the issues are viewed solely from a compliance perspective. Some companies are having success finding resources by working with business stakeholders to understand the internal and external opportunities that AI governance programs can help enable. Privacy teams are often working with legal and business stakeholders to appropriately calibrate the risks AI can present and the opportunities it can enable, so that AI governance programs get the necessary business buy-in and can effectively manage risk while enabling innovation.

Leverage what you have

AI governance programs do not have to be fully built out to get started. Many companies started by leveraging processes and policies they already had in place, modifying them to take AI risks and opportunities into account. For example, companies with existing frameworks for managing vendor risk reviewed and updated them to address AI. Data classification policies were also helpful tools for determining what data is appropriate to input into third-party AI applications, especially when they are accompanied by AI-specific guardrails (see the sketch below).
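
One way to make data classification actionable is to express it as a simple guardrail check. The sketch below is illustrative only; the class labels and the set of classes allowed for external AI tools are hypothetical and would come from your company's own classification policy.

```python
# Illustrative sketch of a data-classification guardrail for third-party
# AI applications. The labels and policy below are hypothetical.

ALLOWED_FOR_EXTERNAL_AI = {"public", "internal"}  # hypothetical policy choice

def may_send_to_external_ai(data_class: str) -> bool:
    """Return True if data in this classification may be entered into a
    third-party AI application under the hypothetical policy above."""
    return data_class.lower() in ALLOWED_FOR_EXTERNAL_AI

for label in ["public", "internal", "confidential", "restricted"]:
    print(f"{label}: {'allowed' if may_send_to_external_ai(label) else 'blocked'}")
```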

You know how to do this

Developing an AI governance approach is manageable, and if you are a privacy professional, you already have many of the key skills. For example, when it comes to assessing AI uses, you can adapt the skills you have learned from assessing privacy risks and apply them in this new context. Identify the risks (if any), pick mitigations (if needed), define expected operation, fairness and outcomes, test before launch (if the consequences of getting it wrong could cause harm), monitor the AI use once deployed for vulnerabilities and proper operation (if called for), and refine it to achieve objectives. Work cross-functionally to identify who will be responsible for each of these tasks. And, similar to security incident reporting, have a clear path for both internal and external parties to contact the company with possible concerns or issues related to AI use, and policies for how these reports are addressed.
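
If it helps to see these steps in one place, below is an illustrative sketch of how an assessment could be recorded in structured form so that each task has an owner. The field names are hypothetical and not drawn from any standard.

```python
# Illustrative sketch only: recording the assessment steps described
# above in a structured form. Field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AIUseCaseAssessment:
    use_case: str
    risks: list = field(default_factory=list)        # identified risks, if any
    mitigations: list = field(default_factory=list)  # mitigations, if needed
    expected_outcomes: str = ""                      # expected operation, fairness, outcomes
    pre_launch_testing: bool = False                 # test before launch?
    monitoring_plan: str = ""                        # post-deployment monitoring
    owners: dict = field(default_factory=dict)       # who is responsible for each task

# Hypothetical example.
assessment = AIUseCaseAssessment(
    use_case="Resume-screening assistant",
    risks=["biased rankings", "data leakage"],
    mitigations=["human review of rejections", "input redaction"],
    expected_outcomes="comparable selection rates across groups",
    pre_launch_testing=True,
    monitoring_plan="quarterly fairness re-test",
    owners={"testing": "data science", "monitoring": "compliance"},
)
print(assessment.use_case, "- risks:", len(assessment.risks))
```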

Use an appropriate framework

Pick and adapt, or draft, a framework for AI governance. There are various frameworks, but the NIST Artificial Intelligence Risk Management Framework, with its four core functions of govern, map, measure and manage, drew particular discussion at the conference. Do not view frameworks as one-size-fits-all; adapt whichever you choose to how your business operates.
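
As a rough orientation, the sketch below maps the NIST framework's four core functions to example activities drawn from this article. The functions come from the framework; the activity mapping is this article's illustrative interpretation, not part of the framework itself.

```python
# Illustrative mapping of the NIST AI RMF's core functions to example
# program activities; the mapping is an interpretation, not part of
# the framework.
NIST_AI_RMF_EXAMPLES = {
    "govern": "cross-functional committee, AI principles, policies and guardrails",
    "map": "inventory AI use cases and document context in impact assessments",
    "measure": "test fairness before launch and monitor deployed AI uses",
    "manage": "prioritize and apply mitigations based on risk tolerance",
}

for function, activity in NIST_AI_RMF_EXAMPLES.items():
    print(f"{function}: {activity}")
```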

Ignoring or banning AI is not a solid strategy

Do not try to ban AI, and do not ignore the opportunities AI may present because of the risks. Ignoring important business opportunities, such as gains in efficiency or innovation, may create an even bigger risk to your business. Work to understand the opportunities for the company, both internally and externally, and enable them with risk assessment and mitigation practices tailored to the business's risk appetite.

Regulators are paying attention

On the global stage, a variety of regulators are focused on AI, including privacy and data protection regulators. For many, holding companies accountable for AI uses whose risks have not been appropriately assessed and mitigated is a priority, especially where this results in harm to people. At the same time, there is no consensus among regulators about how AI risks should be assessed or mitigated.

Disgorgement is a threat

In the U.S., the Federal Trade Commission has ordered disgorgement of data and AI models in consent decrees resulting from investigations and enforcement actions. This type of remedy may be one that U.S. regulators increasingly seek when models are developed in ways that they view as violating the law.

You are not alone 

Benchmark with peers at other organizations. Like governments that are collaborating to address AI principles and codes of conduct, many companies are collaborating and benchmarking to set up their AI governance approaches.

Consult available resources

Many organizations are sharing resources about their approaches to aspects of AI governance. In addition to the NIST framework, look for and consider resources like the following as you help formulate your company's approach.

  • Multinational principles and codes, like the OECD Guiding Principles for Organizations Developing Advanced AI Systems and the OECD International Code of Conduct for Organizations Developing Advanced AI Systems, show where there is an emerging consensus from regulators.
  • Data protection authorities like the U.K. Information Commissioner's Office and France's Commission nationale de l'informatique et des libertés have issued guidance and resources.
  • Civil society organizations like the Future of Privacy Forum have resources like Best Practices for AI and Workplace Assessment Technologies and the Generative AI for Organizational Use: Internal Policy Checklist.
  • Companies like Microsoft are also sharing resources, like its Responsible AI Principles and Approach.

The IAPP is also compiling and sharing resources on AI governance.