
A view from DC: The fight over defining 'AI' is far from over


Defining a term is a powerful act. For centuries, mystics and philosophers have been fascinated by the concept of a "true name," a word that expresses the essential nature of a thing so accurately that it somehow embodies its power. According to Confucius, calling something by its proper name is the "beginning of wisdom."

Entire fantasy worlds have been built around this idea. As Patrick Rothfuss writes in The Name of the Wind: "Words are pale shadows of forgotten names. As names have power, words have power. Words can light fires in the minds of men … But a word is nothing but a painting of a fire. A name is the fire itself."

Writing a definition into law is much like the magical art of naming. The law, like magic, speaks meaning into reality in a way that no mere words can. But the law is expressed in words, which must be used precisely to have any meaning. When words, or amorphous concepts like "artificial intelligence," can have multiple meanings, laws define them for us. In so doing, they provide a scope for the application of their rules. But they also shape language, meaning and understanding.

In many ways, definitions create their own reality.

The policy discourse around AI systems is rife with imprecision and inconsistency. At times, discussions of AI risks seem concerned only with a future Kurzweilian singularity. At other times, AI systems are described in ways that instead include the most basic digital algorithms, like a spreadsheet's sort function. IAPP's outgoing Westin Fellow, Amy Olivero, understood the delicacy of the assignment when she undertook to map international definitions of AI. The resource makes clear that no one standard has yet emerged for the scope of AI rules, though there are many consistent patterns.

As the definitions of AI have generally broadened — at least outside of the context of specialized sectoral rules — it has also become clear to policymakers that AI requirements must differentiate between high-risk and low-risk systems. Though tools like risk assessments will serve as a best practice for many AI and machine learning systems, enhanced safeguards like third-party accountability mechanisms will be required only for those systems that pose the highest risks.

But defining and designating high-risk systems has turned out to be a fraught exercise for policymakers. For example, until generative AI entered the policy arena with a bang this year, the drafters of the EU AI Act did not list generative AI models among the types of de facto high-risk systems under their proposed framework. After public interest in the risks of generative AI reached a peak, they made the controversial decision to include it in the list.

Fine-tuning the scoping criteria for high-risk systems has remained one of the most hotly contested elements of the AI Act negotiations. Most recently, drafts have included what some call "horizontal exemption conditions" that would exclude AI systems from any high-risk category if they meet limited purpose-based exemptions. 

Definitions beget exemptions beget more definitions. Each new proposed legal code serves as a definitional volley in the ongoing battle to properly scope the common understanding of AI in a manner that right-sizes mandatory safeguards with AI's multivariate risks.

Executive Order 14110 plants an important flag in this race. For the definition of AI, the executive order refers to one of the few codified definitions that have emerged in the U.S. in recent years, originating in the National AI Initiative Act of 2020, which was incorporated into the must-pass National Defense Authorization Act of 2021. The definition appears as follows: "The term 'artificial intelligence' means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to—

(A) perceive real and virtual environments;

(B) abstract such perceptions into models through analysis in an automated manner; and

(C) use model inference to formulate options for information or action."

Such a definition is like candy for lawyers. It makes use of two lists, the first disjunctive — "AI" makes either predictions, recommendations or decisions — and the second conjunctive — an "AI system" uses inputs to do all of the listed actions. Would a machine-based system that does not "perceive" both real and virtual environments meet this definition? Or one that makes predictions based on machine-defined objectives?
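
To make that structure concrete, here is a minimal sketch, in Python, of how the definition's two lists compose logically. The boolean attribute names are hypothetical shorthand for the clauses quoted above, not an authoritative reading of the statute.

from dataclasses import dataclass

@dataclass
class System:
    # Disjunctive list: any one of these output types suffices.
    makes_predictions: bool
    makes_recommendations: bool
    makes_decisions: bool
    # Conjunctive list: clauses (A), (B) and (C) must all be true.
    perceives_environments: bool
    abstracts_perceptions_into_models: bool
    uses_model_inference: bool
    # Threshold condition from the opening sentence of the definition.
    human_defined_objectives: bool

def meets_definition(s: System) -> bool:
    produces_output = (s.makes_predictions
                       or s.makes_recommendations
                       or s.makes_decisions)
    uses_inputs_as_described = (s.perceives_environments
                                and s.abstracts_perceptions_into_models
                                and s.uses_model_inference)
    return s.human_defined_objectives and produces_output and uses_inputs_as_described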

Draft definitions in the EU AI Act have also focused on the fact that AI influences the environment around it by "generating outputs such as content, predictions, recommendations, or decisions." But the AI Act would make reference to a defined set of software "techniques and approaches," such as machine learning, that could be updated over time to incorporate new technologies.

Despite the granularity of the U.S. definition, it is still understood to be a relatively broad one. So, much like in the EU, Executive Order 14110 goes on to introduce additional definitions for those high-risk AI systems subject to more stringent requirements. The most important of these is the concept of a "dual-use foundation model," so named because it must be capable of performing tasks across multiple high-risk contexts. You can read the definition in Section 3(k) of the executive order, but suffice to say it is a complex one, with a five-part conjunctive test, plus the listed high risks, plus a handful of powerful modifying words that could lead to legal disagreement about its scope (e.g., "broad," "generally," "wide range," "easily," and "serious").

The accompanying draft memo from the Office of Management and Budget also introduces new definitions of high-risk systems ("rights-impacting and safety-impacting AI"), which may end up being more operational for AI governance professionals. The definitions of these terms even include detailed lists of purposes that are presumed to be rights- or safety-impacting. For example, a system used for the purpose of "detecting or measuring emotions, thought, or deception in humans" is presumed to be rights-impacting, unless the relevant Chief AI Officer makes a documented determination otherwise.

Because we live in the time of AI policymaking, these examples are not even the most recent definitional exercises to examine. This week, Sens. John Thune, R-S.D., and Amy Klobuchar, D-Minn., introduced a rare bipartisan bill on AI accountability and transparency that includes a two-tiered transparency scheme for high-risk systems. The bill features its own definition of AI system, as well as new high-risk terminology: "critical-impact AI system" and "high-impact AI system." The latter of these would apply to an AI system "specifically developed with the intended purpose of making decisions that have a legal or similarly significant effect on the access of an individual to housing, employment, credit, education, healthcare, or insurance in a manner that poses a significant risk to rights afforded under the Constitution of the United States or safety."

Rights … or safety. Hear the echoes of rights-impacting and safety-impacting? Perhaps we are already beginning to see convergence of policy ideas around AI risk.

Nevertheless, the project of moving policy discourse to a more precise and consistent understanding of AI is ongoing. Its importance cannot be overstated. After all, like a meaningless word, a law that does not apply to anything has no power at all.

Here's what else I'm thinking about:

  • Voice cloning solutions are needed. The U.S. Federal Trade Commission announced a new Voice Cloning Challenge, seeking ideas to reduce the harms of synthetic voice production technologies, specifically solutions that are "administrable, increase company responsibility and reduce consumer burden, and are resilient to rapid technological change." I wrote previously about the privacy risks of these technologies and U.S. efforts to combat them. You can submit your ideas between 2-12 Jan. 2024.
  • Is it fraudulent for a CISO to sign an inaccurate risk statement? A recent action against SolarWinds by the Securities and Exchange Commission charged the company's chief information security officer with "fraud and internal control failures relating to allegedly known cybersecurity risks and vulnerabilities." Chief privacy officers are wondering whether similar logic could apply to them. As one of my favorite columnists, Matt Levine, so often points out, "anything bad that is done by or happens to a public company is also securities fraud."
  • Debate about FISA reform is heating up as the reauthorization deadline nears. Competing legislative proposals have put the issue squarely on the agenda before the end of the year, whether the outcome is a straight reauthorization or reform. The IAPP and Wired separately published new analyses of the debate. Cameron Kerry wrote an article in Lawfare on why FISA reauthorization should codify safeguards for non-U.S. persons.
  • Meanwhile, stakeholders weigh in on the U.S. policy shift on digital trade. The thoughtful reviews by Alex Joel in Lawfare and Jennifer Brody for Tech Policy Press are both worth reading, as this seemingly wonky issue can have long-lasting global significance for internet governance.

Please send feedback, updates and true names to cobun@iapp.org

