
Spilling the tea on AI accountability: An analysis of NTIA stakeholder comments


Friday afternoons are generally not a time for significant news to break. But when executives from seven major U.S. artificial intelligence companies recently met with President Joe Biden at the White House and committed to a shared voluntary framework setting new standards for privacy, security and accountability in the development and deployment of powerful AI, it was a big deal.

Gathering for more than just a photo op, the companies agreed to meaningful, proactive protective measures and stronger practices intended to manage the immediate risks of their products, including independent testing of their systems, sharing information with government officials and civil society, and watermarking AI-generated content to combat disinformation.

This voluntary commitment represents an important step in the journey to mitigating AI-associated risk and reaffirms that industry self-regulation is a viable solution for such a dynamic and nascent space.

While it may appear nonbinding at face value, the commitment, as a representation in commerce, is also enforceable by the Federal Trade Commission and by similar state authorities that enforce laws against unfair and deceptive acts.

This newsworthy commitment from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI was the most recent addition to an AI regulatory ecosystem shaped by the existing principles of stakeholder involvement, voluntary governance and independent accountability. In April, the National Telecommunications and Information Administration issued a request for comments to gather stakeholder feedback on AI accountability measures and policies, to assist in crafting a report on AI accountability policy and an AI assurance regime.

Nearly 200 organizations, including BBB National Programs, and thousands of individuals responded to the NTIA solicitation. Our response focused on two key aspects: first, an "ideal checklist" of characteristics companies should incorporate into a certification or accountability mechanism; second, best practices gleaned from third-party privacy accountability programs with a longstanding history of trust and commitment to the marketplace.

To understand areas of consensus in the public discourse, including perspectives both similar to and distinct from our own, we recently pulled a sample of responses representing industry, consumer and civil society perspectives on AI accountability, then analyzed and distilled them into a summary of what stakeholders value for the future of AI regulation. Here is what we found:

Overwhelming consensus:

  • Give us regulatory consistency! To avoid an international regulatory patchwork, participants strongly favored cooperation with U.S. allies and trade partners to achieve some degree of consistency in the legal treatment of AI products, though disagreement remains over the proper division and devolution of regulatory power.
  • Self-regulation is preferable to government standard-setting. Participants emphasized that any effort by public regulatory bodies to dictate best practices and technical standards will quickly become outdated, limiting the growth, development and implementation of more effective safety and privacy measures. Self-regulatory bodies have a greater capacity for dynamic, flexible regulation than their government counterparts.
  • We <3 the NIST RMF. Industry groups extolled the cooperative, stakeholder-led approach used to draft the AI Risk Management Framework produced by the National Institute of Standards and Technology. Civil society organizations hailed the guidelines as a promising first step and recommended further efforts be built upon this framework. Additionally, nearly all groups expressed general backing for the framework of protections against discrimination included in the AI Bill of Rights, while differing on various specifics.
  • Limit overly broad definitions of AI that can chill innovation. All groups agreed forthcoming AI regulation should define AI narrowly enough that it does not sweep in low-risk uses long understood not to require regulatory scrutiny.
  • Build regulatory expertise within the federal government. All commenters agreed the parts of the federal government responsible for AI regulation should prioritize acquiring talent and developing expertise within their ranks.
  • Ensure robust investment in a trained AI governance workforce. Commenters highlighted the urgent need for people qualified to undertake the multidisciplinary work of AI governance. Organizations of all types need trained professionals in roles running the gamut to support the exponential growth of AI-powered systems, particularly as best practices are still taking shape. Commenters pointed to professions in the privacy, trust and safety sectors as models for growing this new profession, and to existing tools like assessments, audits and certifications, once adapted to this new domain, as models for equipping and quickly upskilling the workforce.
  • Create the National Artificial Intelligence Research Resource. The NAIRR would leverage the research prowess of the federal government to support the ethical development of AI. This initiative predictably received overwhelming support.
  • Provide clarity on the distribution of accountability along the AI value chain. Though commenters disagreed on the specifics, almost all agreed regulators must provide clear guidance detailing which responsibilities fall to each of the many actors who develop, market, deploy and use AI.
  • Enact a comprehensive federal privacy law. While such a law is not essential to the creation of an AI regulatory regime, the consensus was that it would bring clarity to the many interactions with personal data at every step in the AI value chain.
  • NTIA should spearhead taxonomy-based research to shape standards. Commenters proposed that the NTIA create a taxonomy of AI systems for future use by regulators and courts, hold workshops for the development of legal and technical standards governing AI, and prepare to leverage its technical expertise to advise AI regulators from agencies across the government.
  • Create a national registry of high-risk AI systems. Such a database would allow the disclosure of internal audit results to regulators, capture the security processes of high-risk developers and ensure audits are acted upon when appropriate. The registry would also help third-party auditors identify audit targets.

Some disagreement:

  • Introduce and streamline new industry standards, assessments, audits and related practices. Commenters agreed impact assessments are a useful tool for ascertaining relevant risks through a process industry already knows from state consumer privacy law requirements. Industry groups are generally resistant to "mandates," even for tools they deem helpful, usually preferring voluntary approaches, but regulatory efforts to require impact assessments would likely meet little pushback.
  • Utilize internal and external audits as a means to demonstrate AI accountability. These tools can ensure technical functionality, ease legal compliance, and guard against civil rights violations and associated harms. Discord arose over whether such measures should be mandatory or voluntary, and whether required audits should be internal or external.
  • Strengthen the NIST RMF. All groups concurred future regulation should be built on the foundation of the NIST framework but disagreed over whether the government should use soft law to encourage adoption, e.g., leverage government procurement through a self-attestation system, or give the framework teeth through binding laws or regulations.
  • License high-risk AI systems. Some civil society groups supported broad licensing for high-risk models, but others warned this requirement could create huge barriers to entry that diminish competition and gift existing players enormous market power. These groups recommended licensing be pursued only in specific cases, such as for the procurement and use of military weapons. Notably, many leaders in generative AI development also supported a federal licensing regime, consistent with the predictions of the groups worried about its competitive effects.
  • Create an AI regulatory infrastructure. All commenters agreed regulation should be the product of some combination of central AI regulators and federal agencies with existing jurisdictional authority, such as the FTC, Department of Justice, Consumer Financial Protection Bureau and Equal Employment Opportunity Commission. Some groups endorsed the creation of a powerful central AI regulator with a peripheral role for existing agencies within their jurisdictions, while others argued existing agencies are best positioned to regulate this vast technology and can do so within existing law. An emergent middle path seemed to favor altering jurisdictional powers where necessary to allow existing agencies to form the front line against the misuse of AI in industry, giving one existing agency an expanded general AI regulator role as a failsafe, and expanding NIST's capacity to advise, provide technical expertise and develop standards.

The productive discussion visible across the responses to the NTIA's request for comment is emblematic of the collaborative process that has defined U.S. AI policy so far and points toward opportunities for regulation ahead. Together with last week's announcement at the White House, these comments lend credence to the hope that further efforts will build upon this cooperation to minimize the possible risks emerging from AI, while protecting and encouraging its significant promise.

Editor's Note:

The views expressed by the authors are their own and not necessarily the view of the IAPP.

