
United States Privacy Digest | A view from DC: All the president’s ministers — an all-of-government AI response


The world of artificial intelligence governance changed this week. U.S. President Joe Biden set in motion so many government workstreams at once that it will take some time to fully understand their expected impacts.

With the release of Executive Order 14110, the president made good on the hype that had swirled around the order for months. The White House embraced an all-of-government approach to responding to the potential risks of AI — or at least the most egregious safety and security risks — while also laying the groundwork to ensure the U.S. will continue to lead on the development and use of AI technology. As the IAPP reported, privacy is a key theme of the order, alongside other pressing equities.

The Office of Management and Budget, the most powerful agency that most people have never encountered, released a draft update to the rules governing the use of AI by all other federal agencies. The draft does not cover national security agencies, which are governed by other workstreams under the order.

The emerging profession of AI governance is central to OMB's proposed guidance. Chief AI officers will become a required fixture of agencies in the near term, and the OMB guidance will dictate the "roles, responsibilities, seniority, position, and reporting structures for agency CAIOs." They must be "positioned highly enough to engage regularly with other agency leadership, to include the Deputy Secretary or equivalent." Among the CAIO's responsibilities will be to build a governance and risk-management program for AI systems within their agency.

Under the proposed OMB guidance, as well as Section 7225 of the Advancing American AI Act, agencies will also begin publicly posting annual "AI Use Case Inventories." These could serve as a model for transparency and notice about the use of AI systems and associated risk management.

Agencies must ensure that each "AI's entry in the use case inventory serves as adequately detailed and generally accessible documentation of the system's functionality that provides public notice of the AI to its users and the general public. Where practicable, agencies should include this documentation or link to it in contexts where people will interact with or be impacted by the AI."
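Neither the order nor the OMB draft prescribes a specific format for these entries, but the requirement is easy to picture as structured, plain-language metadata published alongside the system. The sketch below is purely illustrative; the field names, agency and URL are hypothetical assumptions, not drawn from the OMB guidance.

```python
import json

# Illustrative sketch only: these field names are hypothetical and are not
# taken from the OMB draft guidance. They show one way an agency might
# structure a public, plain-language AI use case inventory entry.
inventory_entry = {
    "use_case_name": "Benefits application triage assistant",
    "agency": "Example Agency",  # hypothetical agency
    "purpose": "Suggest a review order for incoming benefits applications.",
    "plain_language_summary": (
        "The system proposes which applications a caseworker should review "
        "first. A human caseworker makes every final decision."
    ),
    "safety_impacting": False,
    "rights_impacting": True,
    "public_documentation_url": "https://example.gov/ai/benefits-triage",  # placeholder
}

# Publishing the entry as JSON keeps it machine-readable and easy to link
# from the contexts where people interact with, or are affected by, the AI.
print(json.dumps(inventory_entry, indent=2))
```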

Releasing the draft for public comment may seem like an unusual step. OMB consults in depth with federal agencies before releasing rules but, historically, the agency has not focused on public participation in its governing process. Perhaps this is an attempt to lead by example, following the Biden administration's efforts to modernize regulatory review by encouraging greater public participation at federal agencies.

OMB's process launches the first of many forthcoming comment opportunities on the U.S. approach to governing AI systems. Interested parties have until 5 Dec. to submit comments to the agency and have their voices heard.

The U.S. Department of Commerce will play a key role in implementing the order. As the agency said in a statement, the roles of its sub-agencies combine "sophisticated standards and evaluation capabilities with a robust combination of reporting requirements and voluntary measures." But other agencies across the government are implicated, almost without exception, as explained in depth in this analysis from Wiley.

For starters, the U.S. National Institute of Standards and Technology has plenty of work to do. NIST has been tasked with developing new guidance and frameworks related to generative AI, auditing, watermarking and red-teaming.

In a related and well-timed development, NIST also announced this week a call for participants in a new consortium to support the development of innovative methods for evaluating AI systems in order to improve their safety and trustworthiness.

The red-teaming guidance will serve an important role in one of the most dramatic components of the order, which requires companies to provide the federal government with disclosures when developing or "demonstrating an intent to develop" AI systems that qualify as "dual-use foundation models," and to subject such systems to red-teaming tests.

The order defines these models narrowly and in such a manner that current systems are likely excluded, but this requirement will be important as AI innovation continues to advance.

A dual-use foundation model is any "AI model that is: Trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:

  • Substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear weapons."

The definition continues by clarifying that, "models meet this definition even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities."

Although the new requirements for the most advanced systems may be the most exciting, the broad governance structures for a wider range of AI systems are likely to have an even greater impact on future norms and best practices. While some new requirements apply broadly to AI systems, additional requirements in the OMB guidance apply only to "safety-impacting or rights-impacting AI."

The requirements for these systems read like a laundry list of the emerging best practices for AI governance. Once the guidance is in effect, before a new AI system can be deployed or an existing one can continue to be used, agencies will need to complete:

  • Impact assessments.
  • Testing of AI performance in a real-world context.
  • Evaluations by an independent reviewing authority to ensure that the system works appropriately and as intended, and that its expected benefits outweigh its potential risks.

On an ongoing basis, for safety-impacting or rights-impacting AI, agencies will also be required to complete:

  • Ongoing monitoring and thresholds for periodic human review.
  • Iterative mitigations of emerging risks.
  • Periodic and use-case specific "training, assessment, and oversight for operators of the AI to interpret and act on the AI's output, combat any human-machine teaming issues (such as automation bias), and ensure the human-based components of the system effectively manage risks from the use of AI."
  • "Appropriate human consideration and accountability."
  • Public notice and plain-language documentation through the AI use case inventory.

But wait, there's more! For rights-impacting AI systems — but not safety-impacting systems — agencies must also:

  • Take steps to ensure that the AI will advance equity, dignity and fairness.
  • Consult and incorporate feedback from affected groups.
  • Conduct ongoing monitoring and mitigation for AI-enabled discrimination.
  • Notify negatively affected individuals.
  • Maintain human consideration and remedy processes.
  • Maintain options to opt-out "where practicable."

The embrace of these practices by the U.S. government will play a decisive role in coalescing the governance practices expected of AI systems. Though it was apparent before the order and OMB guidance, it is unavoidable now: The next few years will build the foundation of the profession of AI governance in a way that will define the field for decades.

U.S. leadership is only part of the equation, as was apparent at IAPP's inaugural AI Governance Global event this week, which took place at the same time as the historic U.K. AI Safety Summit and Bletchley Declaration.

While participants were learning from responsible AI practitioners, government stakeholders and others, we were greeted on the sidelines by continuous news of bilateral and multilateral developments. The U.K. and U.S. each announced the launch of an AI Safety Institute. The G7 adopted two sets of principles for generative AI and foundation models out of its Hiroshima AI process. And the OECD released its fourth-year report on the implementation of its AI principles.

Collective action can be messy, but the increasing harmony between the efforts to define guardrails around AI should give us hope.

It's time to buckle up for AI safety. There is a long, exciting road ahead.

Upcoming happenings:

  • 7 Nov., 12:00 ET: Future of Privacy Forum hosts a webinar on the current state of kids' and teens' privacy.
  • 8 Nov., 11:00 ET: IAPP hosts a webinar on foundations for an effective AI governance program.
  • 9 Nov., 14:00 ET: Future of Privacy Forum hosts a virtual panel on immersive technology and AI.
  • 15 Nov., 14:00 ET: New America's Open Technology Institute hosts a virtual discussion on "The Intersection of Federal Privacy Legislation & AI Governance."

Please send feedback, updates and dual-use models to cobun@iapp.org.

