Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

"Say not, 'I have found the truth,' but rather, 'I have found a truth." — Kahlil Gibran, On Self-Knowledge

A draft executive order has been circulating in Washington this week, part of an effort among administration officials and high-profile congressional Republicans to stymie state artificial intelligence legislative efforts in the hopes of promoting innovation and “American national and economic security and dominance.”

IAPP’s Joe Duball covered the substance of the proposal, as did Tech Policy Press with a link to the draft document itself.

Although focused on the pursuit of a “minimally burdensome national standard” for AI regulation, this is not entirely a deregulatory proposal.

In part, this reality follows from the classic challenge that the federal government cannot preempt state rules with only an absence of rules. That is, you cannot preempt something with nothing. So, the draft order would direct the Special Advisor for AI and Crypto, David Sacks, to work with the White House Office of Legislative Affairs to prepare a legislative recommendation "establishing a uniform Federal regulatory framework for AI that preempts State AI laws that conflict with the policy set forth in this order."

The draft also serves as an example of why operational governance mechanisms are a part of almost any approach to AI policy, even in the current deregulatory mode. No matter which rules and standards become the norm, AI developers and deployers will need the people, policies and processes to govern their systems to within acceptable parameters.

Perhaps one of those parameters is the truth of an AI’s outputs. As perhaps its core policy objective, the draft order states emphatically that "the United States' AI regulatory framework must prioritize truth."

Truth is thus identified as the North Star of AI governance for the administration, a theme that was also apparent in the AI Action Plan. In the draft order, achieving truth is contrasted with such things as "subjective safety standards" and the production of "false results in order to avoid a 'differential treatment or impact'" on Colorado’s enumerated demographic groups, as quoted from the Colorado AI Act.

The nature of truth is a classic philosophical question.

Over the centuries, we have come to think of truth through various theoretical lenses. Correspondence theory, for example, posits that a statement is true if it corresponds to reality. Coherence theory, on the other hand, holds that a statement is true if it is consistent with a larger system of beliefs. 

Modern philosophers take a more nuanced approach, moving beyond these monist theories of truth and embracing pluralistic theories instead.

Pluralism holds that the property of truth is multifaceted. Different types of propositions can be true in different ways. A mathematical formula is true by virtue of coherence within a formal system. An observation about the physical world is true by virtue of correspondence to a fact. Other assertions, like an ethical statement, might be true for entirely different reasons.

Striving for truthfulness, by some measure, is certainly not a new idea among AI developers, though it is more common for programmers to focus on "accuracy" as a goal. Accuracy and truth are closely related ideas, but accuracy is a technical, statistical measure — quite different from the philosophical and perhaps messy goal of truthfulness. A model's output can be 99% accurate within the bounds of its training data but still “untrue” due to bias, missing information or missing context.
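To make that gap concrete, consider a minimal sketch in Python, using entirely hypothetical data: a model that always predicts the majority label can report 99% accuracy while getting every minority case wrong.

```python
# A minimal sketch, with hypothetical data, of the gap between accuracy and
# truthfulness: a model that only ever predicts the majority label scores
# 99% "accuracy" on a skewed evaluation set while failing every minority case.

labels = ["common"] * 99 + ["rare"]   # stand-in for a skewed, biased dataset
predictions = ["common"] * 100        # a model that only ever says "common"

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(f"accuracy: {accuracy:.0%}")    # 99% -- yet no "rare" case is right
```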

The AI Risk Management Framework from the National Institute of Standards and Technology discusses accuracy alongside the concept of “robustness” as part of the governance principle that AI systems should be valid and reliable: "Accuracy is defined by ISO/IEC TS 5723:2022 as 'closeness of results of observations, computations, or estimates to the true values or the values accepted as being true.' Measures of accuracy should consider computational-centric measures (e.g., false positive and false negative rates), human-AI teaming, and demonstrate external validity (generalizable beyond the training conditions)."
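The "computational-centric measures" the framework names are simple to compute, even if they capture only one narrow slice of truthfulness. A minimal sketch, with placeholder labels and predictions standing in for real evaluation data:

```python
# Minimal sketch of two computational-centric measures cited in the NIST
# framework: false positive and false negative rates. All values are
# hypothetical placeholders, not real evaluation data.

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical ground-truth labels
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]   # hypothetical model predictions

false_positives = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
false_negatives = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

fpr = false_positives / y_true.count(0)   # FP / all actual negatives
fnr = false_negatives / y_true.count(1)   # FN / all actual positives
print(f"false positive rate: {fpr:.2f}, false negative rate: {fnr:.2f}")
```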

As generative AI systems are designed to simultaneously assist with research, creativity and decision-making across diverse domains, it is a lofty goal for their outputs to conform to any single standard of truthfulness, especially when even humans may not agree on what the truth is. With this in mind, some AI scholars and programmers continue to wrestle with new ways of bringing pluralistic notions of truth into large language models. For one example, see the Value Kaleidoscope project and the scholarship it cites.

Meanwhile, we continue to expect LLMs to accurately answer questions that readily correspond to reality, like, "What is the population of Washington, D.C.?" Some truths are more self-evident than others.

Implicit in the White House's approach to truth seems to be a rejection of certain kinds of governance goals — such as those that might broadly be categorized within the principles of diversity, equity and inclusion. But truth does not emerge organically from AI systems unless and until humans intervene. It must be cultivated via the same types of governance processes that control other goals.

As the NIST framework concludes, "Validity and reliability for deployed AI systems are often assessed by ongoing testing or monitoring that confirms a system is performing as intended. Measurement of validity, accuracy, robustness, and reliability contribute to trustworthiness and should take into consideration that certain types of failures can cause greater harm."
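In practice, that kind of ongoing monitoring can be as simple as periodically re-scoring a deployed system against fresh labeled samples and flagging drift. A minimal sketch follows; the threshold and alerting behavior are hypothetical governance choices, not anything prescribed by the framework.

```python
# Minimal sketch of ongoing monitoring: re-score a deployed system against
# fresh labeled samples and flag drift below an agreed floor. The floor and
# the alert mechanism are hypothetical choices, not prescribed by NIST.

ACCURACY_FLOOR = 0.95  # hypothetical floor set by the governance team

def performing_as_intended(predictions: list[str], labels: list[str]) -> bool:
    """Return True if measured accuracy still clears the agreed floor."""
    accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
    if accuracy < ACCURACY_FLOOR:
        print(f"ALERT: accuracy {accuracy:.1%} is below {ACCURACY_FLOOR:.0%}")
    return accuracy >= ACCURACY_FLOOR
```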

Whether through top-down mandates or the natural need for systems to be trustworthy and useful, there is no doubt that AI governance professionals will always be wrestling with truth.

Please send feedback, updates and poems about truth to cobun@iapp.org.

Cobun Zweifel-Keegan, CIPP/US, CIPM, is the managing director, Washington, D.C., for the IAPP.

This article originally appeared in The Daily Dashboard and U.S. Privacy Digest, free weekly IAPP newsletters. Subscriptions to this and other IAPP newsletters can be found here.