In June, the IAPP AI Governance Center published the resource "Key Terms for AI Governance." The glossary consolidates many of the AI governance terms commonly used in technology policy circles, serving both aspiring and accomplished AI governance professionals.
The initial glossary, developed by former IAPP Westin Research Fellow Amy Olivero, consisted of 61 key AI governance terms. Because AI governance is in constant flux as the technology develops rapidly and continuously, an update was necessary. The updated glossary follows a methodology similar to that of the original published in June: it was developed with reference to legal, policy and industry documents on AI governance, and informed by further research and consultation with leading AI governance professionals, including members of the IAPP AI Governance Center Advisory Board. It contains eight new key terms:
- Adversarial machine learning
- Compute
- Deepfakes
- Disinformation
- Misinformation
- Parameters
- Semi-supervised learning
- Transformer model
Some of these additions are fast becoming, if they are not already, necessary terms in the context of the risks posed by AI, and generative AI in particular, which can exacerbate the harms captured by "deepfakes," "disinformation," "misinformation" and "adversarial machine learning." With the exception of adversarial machine learning, these harms are not new, but they are certainly catalyzed by AI, and they carry the potential to threaten democracy, civil rights and human dignity. Because AI harm is one of the most important themes in AI governance, adding terms that define those harms was necessary.

The addition of "transformer model" and "semi-supervised learning" complements the definition and understanding of generative AI: the transformer architecture has driven the rapid and widespread success of generative AI, notably large language models, while semi-supervised learning is crucial to understanding how generative AI models learn from data.

"Compute" earns its spot on the list because it is quickly becoming a top challenge for AI law and policy for two key reasons. First, unequal access to processing resources can concentrate power and wealth among a select few in the technology industry. Second, the governance of and access to compute can speed up or slow down AI progress. Finally, "parameters" was added to the glossary to recognize the governance work focused on the processing stage.
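To make the link between "parameters" and "compute" concrete, the sketch below offers a rough, purely illustrative tally of the parameters in a standard transformer block. The layer layout and the example configurations are assumptions chosen for illustration, not figures drawn from the glossary, but they show how quickly the parameter count, and with it the compute needed to train a model, grows with model size.

```python
# Illustrative only: rough parameter count for a standard transformer
# block. The layer layout below is a common convention, assumed here
# for illustration; it is not part of the IAPP glossary.

def transformer_block_params(d_model: int, d_ff: int) -> int:
    """Approximate parameter count of one transformer block."""
    # Self-attention: Q, K, V and output projections (weights + biases).
    attention = 4 * (d_model * d_model + d_model)
    # Feed-forward network: two linear layers (weights + biases).
    feed_forward = (d_model * d_ff + d_ff) + (d_ff * d_model + d_model)
    # Two layer norms, each with scale and shift vectors.
    layer_norms = 2 * (2 * d_model)
    return attention + feed_forward + layer_norms


def model_params(vocab_size: int, d_model: int, d_ff: int, n_layers: int) -> int:
    """Token embeddings plus a stack of transformer blocks."""
    embeddings = vocab_size * d_model
    return embeddings + n_layers * transformer_block_params(d_model, d_ff)


if __name__ == "__main__":
    # Hypothetical small and large configurations, to show the scaling.
    small = model_params(vocab_size=32_000, d_model=768, d_ff=3_072, n_layers=12)
    large = model_params(vocab_size=32_000, d_model=4_096, d_ff=16_384, n_layers=32)
    print(f"small model: ~{small / 1e6:.0f}M parameters")   # roughly 110M
    print(f"large model: ~{large / 1e9:.1f}B parameters")   # roughly 6.6B
```

Widening the hidden state from 768 to 4,096 dimensions pushes the count from roughly a hundred million parameters into the billions, which is one reason "parameters" and "compute" increasingly appear side by side in governance discussions.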
In addition to the new terms, existing definitions received minor but important edits. Some were edited for accuracy: the definition of large language model, for example, was updated to explain that "large" refers to the size of the training dataset as well as to the size and number of parameters. Similarly, the Turing Test definition was edited to give a fuller picture of what the test entails. Other definitions were expanded to cross-reference existing entries; the definition of generative AI, for instance, now contrasts generative with discriminative models to explain how the two neural network techniques differ.
AI governance has become, and will continue to grow as, a multidisciplinary field. A common vernacular is essential so that meanings are not lost in translation as professionals work across disciplines and with other stakeholders.
The IAPP will continue to monitor changes in this space and update the glossary as needed, helping AI governance professionals keep pace with evolving technology and regulatory environments.