When prompted to explain artificial intelligence governance in a short joke, this is what ChatGPT had to say:
"AI Governance: because even the smartest algorithms occasionally need a timeout!"
Admittedly, ChatGPT's response is clever. But it may also hint at why calls for AI governance are proliferating and growing louder around the world. Given how rapidly AI has been developing, do we have the luxury of timeouts?
With AI — especially generative AI — becoming part of the mainstream, and its use and adoption across organizations quickly expanding, it is unlikely, if not impossible, that the world can simply pause for governance to catch up. The time for governance is now.
But what is "AI" or "governance"? Though numerous helpful definitions of AI have surfaced following global calls for governance, technology experts will attest that AI is difficult to define. Similarly, those working in law and policy will attest that governance is hard to reduce to one crisp definition. To understand both "AI" and "governance," it is useful to move beyond definitions and engage instead with the mechanics of AI algorithms: how risks emerge, how governance applies and why.
Technical side of AI
AI technology has matured considerably, and, in recent years, we have come a long way from traditional programming to generative AI. In traditional programming, a programmer would hard-code rules to teach an algorithm what an elephant is. With generative AI, the computer can tell you everything it knows about elephants when prompted to do so.
For simplicity, consider AI an umbrella term for computational techniques built from algorithms that automate aspects of human intelligence. Today's narrow AI computationally replicates human intelligence rather than embodying it. AI is not human intelligence; it only mimics certain aspects of it, such as rational thinking, speech, decision-making, prediction and content generation, and it does so incredibly well.
Three such techniques for AI are machine learning, deep learning and generative AI.
Machine-learning algorithms learn from data by analyzing it, much like humans learn by observing the world around them. Unlike humans, who learn intuitively, machine-learning algorithms rely on statistics and probability theory. Developers begin the process with a "training dataset," which is the input. Without explicit programming, the algorithm learns and categorizes patterns, structures and relationships within the training data by analyzing its statistical properties, such as correlations across data points. Based on what it learns from the training dataset, the algorithm generates a "model," which is essentially a set of decision-making rules. To assess whether the model will perform well in real-world scenarios, it is then tested on data it was not exposed to during training. The aim is for the model to generalize, so its performance remains accurate even on new and unseen data. After deployment, the model makes decisions or predictions on new data and continues to improve based on what it learns in the deployed environment.
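To make this workflow concrete, here is a minimal sketch in Python using scikit-learn. The dataset and model choice are illustrative assumptions, not something referenced in this article; the point is the split between training data, the learned model and unseen test data.

```python
# Minimal sketch of the machine-learning workflow described above,
# using scikit-learn's built-in iris dataset as a stand-in for "training data".
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Hold out test data the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# "Training" = the algorithm learns patterns and produces a model (a set of decision rules).
model = DecisionTreeClassifier().fit(X_train, y_train)

# Testing on unseen data estimates how well the model generalizes.
print("Accuracy on unseen data:", accuracy_score(y_test, model.predict(X_test)))
```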
Deep learning, a subfield of machine learning, is a technique that uses artificial neural networks inspired by the structure of the human brain. This technique identifies complex data structures through multiple layers of neurons, and each layer learns a different aspect of the data. Let's say we want a model to distinguish different types of wild cats through supervised learning. We provide the deep-learning algorithm with a labeled dataset of images of all kinds of wild cats, such as tigers, lions, leopards and bobcats. The first layer may learn different colors, the second layer may learn features like tails, noses, ears or eyes, and the third layer may identify more complex patterns, such as the lion's mane or the leopard's spots. Combining what is learned across these layers helps the algorithm distinguish a lion from a leopard. This is an image-classification technique and represents a discriminative model: it tells us what is or is not a lion. Like machine learning more broadly, deep learning makes predictions on new data.
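A rough sketch of what such a layered classifier might look like in Python with Keras follows. The wild-cat image dataset is hypothetical and the layer sizes are arbitrary; the sketch only illustrates the idea of stacking layers that learn progressively more complex features before a final discriminative output.

```python
# Minimal sketch of a layered (deep-learning) image classifier in Keras.
# The wild-cat dataset is hypothetical; the point is that successive layers
# learn increasingly complex features (colors/edges -> parts -> spots/manes).
import tensorflow as tf

num_classes = 5  # e.g., tiger, lion, leopard, bobcat, cheetah (illustrative)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),               # RGB input images
    tf.keras.layers.Conv2D(16, 3, activation="relu"),          # early layer: simple patterns such as colors and edges
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),          # middle layer: parts such as ears, tails, eyes
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),          # later layer: complex patterns such as spots or manes
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),  # discriminative output: which wild cat is it?
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(labeled_images, labels, epochs=10)  # supervised training on labeled wild-cat images
```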
Generative AI, itself a subfield of deep learning, enables algorithms to generate new content such as text, video and images. Unlike the discriminative model discussed above, generative models do not classify new data. Rather, they generate new content based on what they learn from existing data. One way to understand generative AI is to learn the mechanics of some of the most accessible and widely used AI applications: chatbots like ChatGPT or Bard.
The leading chatbots today are large language models, which learn and match patterns in data and store what they learn as a set of numerical values, or parameters. AI chatbots use a neural-network architecture called the transformer, which relies on the attention mechanism first introduced in Google's 2017 paper "Attention Is All You Need." The attention mechanism tracks relationships between words, allowing each word to influence how surrounding words are interpreted, and it helps the model focus on the most important parts of the input sequence.
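For readers who want to see the idea in code, the following is a toy sketch of the scaled dot-product attention at the core of the transformer, written in plain numpy. The word vectors are made up, and real models learn separate query, key and value weights, which are omitted here for brevity.

```python
# Toy sketch of scaled dot-product (self-)attention with numpy.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Each word "attends" to every other word; higher scores mean more influence.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores)          # how strongly each word focuses on the others
    return weights @ V, weights

# Three toy word vectors standing in for a short input sequence.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out, weights = attention(X, X, X)      # self-attention: queries, keys and values come from the same sequence
print(np.round(weights, 2))            # each row shows which words influence which
```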
According to Google, LLMs are "large, general purpose language models which can be pre-trained and fine-tuned for specific purposes." They are called large partly because they are trained on enormous datasets. To visualize just how large, consider Common Crawl, a free and open repository of web data that describes itself as the primary training corpus for every large language model, containing raw webpage data, metadata extracts and text extracts from across the internet. It holds over 240 billion pages spanning 16 years, and 3-5 billion new pages are added each month. With pages in the billions, you can only imagine how many words LLMs are exposed to during training. They are also called large because of their enormous number of parameters. Google's PaLM, for instance, has 540 billion parameters.
The pretraining phase develops the model's general knowledge of the world across a range of disciplines. For an AI chatbot, pretraining may be done on various sources of unlabeled text data, including Common Crawl, covering a variety of topics for the general purpose of language learning. To generate dialogue, such as responses to user prompts, the model is then fine-tuned on a smaller, labeled dataset for that specific purpose.
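As a rough illustration of the pretrain-then-fine-tune split, the sketch below assumes the Hugging Face transformers library (with PyTorch) and the small, already pretrained "gpt2" model; the prompt/response pair stands in for a labeled fine-tuning example, and no actual training loop is run.

```python
# A minimal sketch of the two-phase idea, assuming the Hugging Face
# transformers library and the small "gpt2" model are available.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Phase 1 (pretraining) has already been done for us: "gpt2" was pretrained
# on a large corpus of unlabeled web text for general language modeling.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Phase 2 (fine-tuning) would continue training on a smaller, labeled
# dataset of prompt/response pairs; here we only show the shape of one
# such training example and its loss, not a full training loop.
prompt = "What is AI governance?"
response = "It is the set of practices used to manage AI risks."  # hypothetical labeled response
inputs = tokenizer(prompt + " " + response, return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])  # causal language-modeling loss
print(float(outputs.loss))
```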
But how does it "know" how to communicate in a human-like manner? Because it was trained on an enormous set of text data, the model has seen billions of examples of how humans respond to prompts. It generates its answer one word at a time using a probability distribution, which estimates the likelihood of the next word given the preceding words.
For example, take the sentence fragment "IAPP is based in." The model might assign candidate next words scores like these:
- Portsmouth: 0.9
- Belgium: 0.7
- USA: 0.4
- Brussels: 0.3
The model might use the highest-ranked word to complete the sentence: "IAPP is based in Portsmouth." But it does not always pick the most probable word. Sometimes, because of the temperature parameter that regulates the randomness of responses, it will pick a less probable word. This is why we can receive different responses to the same prompt. It is also what keeps the system creative.
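The sketch below plays out this idea in Python, reusing the illustrative scores from the example above (they are not real model probabilities). A temperature parameter controls how often the less probable words are chosen.

```python
# Sketch of next-word sampling with a temperature parameter, using the
# illustrative scores from the example above (not real model probabilities).
import numpy as np

candidates = ["Portsmouth", "Belgium", "USA", "Brussels"]
scores = np.array([0.9, 0.7, 0.4, 0.3])

def sample_next_word(scores, temperature=1.0):
    # Lower temperature sharpens the distribution (more deterministic);
    # higher temperature flattens it (more random, more "creative").
    logits = scores / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                       # normalize so the scores form a true distribution
    return np.random.choice(candidates, p=probs)

print("IAPP is based in", sample_next_word(scores, temperature=0.05))  # strongly favors the top-ranked word
print("IAPP is based in", sample_next_word(scores, temperature=1.5))   # less probable words appear more often
```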
Governance
The target of governance, then, is the life cycle of the AI system. From input to processes to output, risks can emerge from each component of the algorithm: the training data, the model and its parameters, and the outputs. AI governance focuses on each of these components and should be applied throughout the life cycle of the system.
- The black-box problem: Because AI algorithms are trained on datasets too large for human programmers to analyze, the resulting models are often highly complex. As such, it is not always obvious what data was used to develop the model or which correlations across data points drove a particular result. This is the black-box problem, which makes the inner workings of algorithms less transparent: it becomes difficult to explain or trace how or why a prediction was made or new content was generated. A lack of transparency may also result from algorithms being guarded by intellectual property laws.
Model cards are emerging as a best practice for making algorithms and their mechanics more explainable and transparent. They describe various aspects of a model, such as its intended use, performance benchmarks across attributes like race or gender, and other relevant details. Model cards can benefit everyone involved in the development, deployment and use of AI: developers can learn more about a system and compare it with other models, AI practitioners can learn how the system is supposed to work, and policymakers can assess its impacts on society.
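As a rough illustration, a highly simplified model card might look like the following Python dictionary. The model name, fields and metrics are all invented; real model cards are richer documents.

```python
# Hypothetical, highly simplified model card expressed as a Python dictionary.
# The fields only illustrate the kinds of information model cards capture.
model_card = {
    "model_name": "loan-approval-classifier-v1",   # hypothetical model
    "intended_use": "Pre-screening of consumer loan applications; not for final decisions.",
    "training_data": "Description of historical loan applications, 2015-2022.",
    "performance": {
        "overall_accuracy": 0.91,
        "accuracy_by_gender": {"female": 0.90, "male": 0.92},
        "accuracy_by_age_group": {"under_40": 0.93, "40_and_over": 0.89},
    },
    "limitations": "Not validated for applicants outside the training data's jurisdictions.",
    "contact": "ai-governance@example.com",
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```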
- Risks at the input stage: But why does it matter whether the algorithm is explainable and transparent? Because letting AI make predictions can be risky. Algorithmic bias is one such risk: it reinforces societal biases and can lead to value lock-ins within a society. A famous example is the COMPAS algorithm, which predicted that Black defendants were more likely to recidivate than white defendants.
However, since algorithms are a computational replication of, not an embodiment of, the way humans think, how can an algorithm suddenly be racist? It is not. At least not consciously. The real-world data the algorithm was trained on reflected racism, and that racism was reinforced through the model's predictions. Data is not always objective, especially when human beings have made prior judgments reflected in that training data.
To prevent biases at the output stage, data governance is required at the input level. Such governance starts at the predesign stage. It involves collecting data in a manner that complies with data protection laws, meaning the data is accurate, representative, collected lawfully and the amount collected is minimized, requirements that can themselves pose challenges to the efficacy of AI and AI governance.
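One small, illustrative example of such input-stage governance is checking whether the collected data is representative. The sketch below assumes a made-up dataset, a made-up reference population and an arbitrary tolerance.

```python
# Sketch of a simple input-stage data-governance check: is each group in the
# collected data represented roughly in line with a reference population?
# The data, reference shares and tolerance are invented for illustration.
import pandas as pd

training_data = pd.DataFrame({"group": ["A"] * 750 + ["B"] * 230 + ["C"] * 20})
reference_population = {"A": 0.60, "B": 0.30, "C": 0.10}   # assumed census-style shares

observed = training_data["group"].value_counts(normalize=True)
for group, expected_share in reference_population.items():
    observed_share = observed.get(group, 0.0)
    gap = observed_share - expected_share
    flag = "UNDERREPRESENTED" if gap < -0.05 else "ok"      # illustrative 5-point tolerance
    print(f"group {group}: expected {expected_share:.0%}, observed {observed_share:.0%} -> {flag}")
```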
- Risks at the process stage: Data governance continues at the process stage. It includes cleaning the data, ensuring historical data does not carry forward historical biases and ensuring it does not over- or underrepresent certain groups of people. Responsibility for adhering to best data-governance practices may fall on different actors depending on the stage of the AI system's life cycle.
Aside from the data, the system design itself can reinforce human biases. In traditional programming, a human programmer might encode their own bias, for example by assigning a higher risk score to an Arab man than to a Western man. That kind of bias may be easy to trace. When the model is generated by AI, biases become more difficult to trace because of the model's complexity. The variables the model is optimized for may also lead to biased outcomes. For example, say an algorithm is deciding whether your child merits admission to a prestigious school. An obvious variable to remove is race. Despite this, the algorithm seems to constantly reject children belonging to racial minorities. This may be because the algorithm picks up common proxies for race that were not eliminated from the data, such as postal codes or parents' income. The algorithm is not consciously or intuitively being racist; it is demonstrating a systematic error, repeatedly excluding particular groups of people more than others.
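The proxy effect is easy to see in a toy example. In the sketch below, the admissions data is entirely fabricated; it only shows how a postal code column can carry the same signal as a removed race column.

```python
# Sketch showing how a "neutral" feature can act as a proxy for a removed one.
# The admissions data below is entirely fabricated for illustration.
import pandas as pd

applicants = pd.DataFrame({
    "postal_code":  ["111", "111", "111", "222", "222", "222"],
    "racial_group": ["minority", "minority", "minority", "majority", "majority", "majority"],
    "admitted":     [0, 0, 1, 1, 1, 1],
})

# Even if "racial_group" is dropped before training, "postal_code" still encodes it:
print(pd.crosstab(applicants["postal_code"], applicants["racial_group"]))

# And admission rates still differ sharply across the proxy feature:
print(applicants.groupby("postal_code")["admitted"].mean())
```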
One way to govern such risks is by auditing the AI system. Unlike financial audits, which may take place at the end of a fiscal year, AI audits ideally ought to be carried out throughout the life cycle of the AI system, and the specific auditing tool may vary at each stage. For instance, testing the system before deployment is a fitting way to assess its outputs and performance. This can help developers identify errors, bias, harms, gaps in accuracy, misalignment with intended use and cybersecurity risks, among other things. It can also make various performance indicators more interpretable.
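One simple pre-deployment audit check, sketched below with fabricated data, compares the model's positive-outcome rate across groups; the threshold is arbitrary, and real audits would apply context-specific criteria.

```python
# Sketch of one pre-deployment audit check: comparing a model's positive-outcome
# rate across groups (a simple demographic-parity style test). Data is fabricated.
import pandas as pd

audit_log = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1, 1, 1, 0, 1, 0, 0, 0],   # the model's decisions on a test set
})

rates = audit_log.groupby("group")["prediction"].mean()
print(rates)

disparity = rates.max() - rates.min()
print(f"Selection-rate gap between groups: {disparity:.2f}")
if disparity > 0.2:   # illustrative threshold, not a legal or technical standard
    print("Flag for review: outcomes differ substantially across groups.")
```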
- Risks at the output stage: In addition to producing discriminatory outputs, AI raises the risk of disinformation through deepfakes. This risk is intensified by generative AI, which is a powerful tool for those with malicious intent. Generative AI can also produce illegal content at scale, which could make content moderation more difficult.
Moreover, generative AI can lead to potential infringements under copyright law, as copyrighted works can make up part of the training datasets. It also increases the risk of spreading misinformation through "hallucinations," in which the AI's response sounds grammatically correct but is factually wrong. Hallucinations recently proved publicly embarrassing for a U.S. attorney who used ChatGPT to research precedents and submitted fictitious cases generated by the chatbot to the court.
Again, the chatbot was only mimicking human intelligence. It wasn't "aware" it had lied to the attorney; chatbots are designed to give an output, so it produced the most probable-sounding one. This puts users at risk of automation bias: they may over-rely on, or favor, outputs predicted or generated by AI systems. That can be dangerous in high-risk situations, such as when AI systems are used by judges making recidivism decisions or by financial institutions assigning credit scores.
Given the nature of such risks, they are better governed under law. Under the identification obligations of China's Interim Measures for the Management of Generative AI Services, providers are required to label AI-generated content so it is distinguishable from other content. For the EU AI Act, the European Parliament proposed similar disclosures for AI-generated content and would require foundation models to ensure safeguards against the generation of illegal content. To protect copyright, the European Parliament also proposed that detailed summaries of the copyrighted data used for training be made publicly available.
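As a purely hypothetical illustration of what a labeling obligation might look like in practice, the sketch below attaches machine-readable provenance metadata to a piece of generated content. The field names are invented and do not reflect any specific legal standard.

```python
# Hypothetical sketch of labeling AI-generated content with provenance metadata.
# The field names are invented; they do not reflect any specific legal standard.
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> str:
    record = {
        "content": text,
        "ai_generated": True,                      # explicit disclosure flag
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

print(label_generated_content("A short AI-written product description.", "example-llm-v1"))
```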
Conclusion
A recurring sentiment in recent years is that AI brings many benefits, and that those benefits are best realized through risk mitigation. That sentiment is rooted in the need for trustworthy systems. Governance, and good governance in particular, is an agent for trust.
Governance does not mean compliance with law and policy alone, although those are extremely important parts of it. Since it may not be practical to wait for legal regulation while AI develops rapidly and risks loom large, governance can also include internal measures such as audits and impact assessments, and organizational preparedness, such as training professionals to responsibly implement governance goals.
In such a multidisciplinary field, a bridge is needed between the technical and the legal or policy perspectives, so that meaning is not lost in translation when moving from one discipline to another, and so that governance becomes intertwined with development and deployment rather than being treated as separate.