In July, the IAPP AI Governance Center updated its Key Terms for AI Governance. This list provides artificial intelligence governance professionals with a reference set of definitions to help navigate the practice of AI governance.
The initial set of key AI governance terms, developed in the fall of 2023 by former IAPP Westin Research Fellow Amy Olivero, consisted of 61 concepts. Given how rapidly the development and use of AI have evolved since then, we continue to update both the terms and their definitions. The list now includes more than 100 relevant concepts AI governance professionals use regularly.
The ongoing updates to the key terms reflect current and best-in-class understanding of key AI governance concepts. Updates are based on AI use and issues raised by the IAPP community and collaborators. These definitions are aligned with, or reference, authoritative definitions and concepts found in AI governance legislation, frameworks and reports from governments, international organizations and other trusted stakeholders.
Part of the effort, as this list has evolved, has been to ensure these key terms capture concepts covered in the IAPP AIGP training and certification, as well as IAPP-led AI governance research. Using terms consistently has become an important aspect of this work and a crucial way to support AI governance professionals. Because AI technology is advancing so rapidly, a common understanding of terminology is critical for AI governance teams to hold clear, effective and consistent conversations.
Recent updates add the following terms: agentic AI, autonomy, counterfactual, data drift, fail-safe plans, retrieval augmented generation, shadow AI and weights.
These additions became important with the emergence of agentic AI since the last update of this list. Having evolved from large language models and chatbots, concepts such as agentic AI and retrieval augmented generation are ones AI governance professionals need to understand and determine how to manage within their organizations.
With agentic AI, for example, AI governance professionals have additional considerations to weigh because these systems operate with some degree of autonomy, which can lead to unanticipated risks. AI agents are also being incorporated into existing software, exposing people in all types of roles across a company to technology their organizations may not yet have rules for. A common understanding of what these systems are allows AI governance professionals to work with teams across their organizations to establish rules that make sense for each specific context.
In addition to new terms, existing definitions also received minor yet important edits. Some were edited to more accurately reflect existing definitions in other digital responsibility areas, especially privacy. A common vernacular is essential so meaning is not lost in translation as professionals work across disciplines and with other stakeholders. The IAPP will continue to monitor the changes in this space and update the key terms accordingly to keep pace with new developments and empower AI governance professionals to stay up to date with changing technology and regulatory environments.
Ashley Casovan is the managing director for the IAPP AI Governance Center and Richard Sentinella is the former AI governance research fellow at the IAPP.
