As several global jurisdictions pursue regulations for the development of artificial intelligence, momentum continues to build in the U.S. to design the legislative foundation for AI governance practices and guardrails.

U.S. explorations include a flurry of AI policy-related activities in Washington, headlined by U.S. lawmakers holding closed-door listening sessions 13 Sept. with AI developers, Big Tech leaders and civil society groups. As a precursor to those talks, the U.S. Senate held concurrent public hearings 12 Sept., offering insight into how Congress may approach rules for AI.

In a hearing before the Senate Judiciary Committee's Subcommittee on Privacy, Technology and the Law, U.S. Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., touted proposed bipartisan "principles" to be incorporated into a forthcoming AI framework.

Per Blumenthal, the principles include creating a "licensing regime" for companies "engaged in high-risk AI development"; establishing an independent AI oversight body, staffed by individuals experienced with the technology, to work in concert with other regulatory agencies; requiring transparency about the limits and uses of AI models, including watermarking of AI-produced content; and subjecting AI developers to liability when their products cause consumer or civil rights-related harms.

"Make no mistake, there will be regulation, the only question is how soon and what (form)?" said Blumenthal, who announced he aims to introduce legislation regulating various aspects of AI development by the end of the year. "Risk-based rules … that is what we need to do here. We need to act with dispatch, more than just deliberate speed … we need to learn from our experience with social media."

A simultaneous hearing before the Senate Committee on Commerce, Science and Transportation sought information from witnesses on potential transparency and risk-mitigation principles that could fit into a bipartisan AI proposal, whether from Blumenthal and Hawley or other lawmakers.

Some of the questions around policing AI and its training data could be answered by comprehensive privacy legislation, remarked Sen. John Hickenlooper, D-Colo., chair of the Commerce Committee's Subcommittee on Consumer Protection, Product Safety and Data Security, and fellow committee members.

"There are too many open questions about what rights people have to their own data and how it's used, which is why Congress needs to pass comprehensive data privacy protections," Hickenlooper said. "This will empower consumers, creators, and help us grow our modern AI-enabled economy."

Separate hearings, similar themes

Across both public AI hearings, industry stakeholders and legal scholars further informed lawmakers on how the U.S. can best approach AI governance, and how those approaches can breathe life into standards and regulations.

At the Senate Judiciary subcommittee hearing, Microsoft President Brad Smith said he supports the framework for future AI regulations proposed by Blumenthal and Hawley because "it doesn't attempt to answer every question by design."

Smith said strong AI regulation in the U.S. should be grounded in the goals of prioritizing safety and security, and of establishing a federal regulatory agency whose responsibilities would include issuing licenses for developing and using high-risk AI models. He said licensing and enforcement laws should apply not only to AI developers, but also to deployers of various AI models.

"Let's couple (the Blumenthal-Hawley framework) with the right kinds of controls that will ensure safety of the sort that we've already seen … emerge in the White House commitments that were launched on July 21," Smith said. "As we go forward, let's think about the connection between the role of a central agency that will be on point for certain things, as well as the obligations that frankly will be part of the work of many agencies and, indeed, our courts."

In the Senate Commerce hearing, BSA | The Software Alliance CEO Victoria Espinel indicated member companies partaking in the AI boom are doing so with governance principles in mind, employing "very extensive" risk management programs that incorporate impact assessments. Those practices include assessing training data "to ensure it is representative of the community" and using assessment evidence "to ensure the risk of bias and discrimination is as low as possible."

Information Technology Industry Council Executive Vice President for Policy Rob Strayer added that he has seen more innovative AI transparency practices emerge, including "factsheets or model cards explaining features and limitations" that offer "fulsome explanations."

While Espinel supported adherence to the U.S. National Institute of Standards and Technology's AI Risk Management Framework, she said BSA and its members do not believe the framework alone answers AI regulatory needs.

"We think in order to bring clarity and predictability to the (NIST) system, and ensure the use of AI is as responsible as possible, Congress needs to require these assessments and programs. It's essential."

Senate Judiciary witnesses were generally in agreement that individuals should have the right to know when they are interacting with an AI system or AI-generated content. Smith said Microsoft is working with counterparts in the industry to promote its proposed "Authentication of Media via Provenance" (AMP) system, which creates a unique signature for authentic content by "stamping" the content at the physical device that generated it.

NVIDIA Chief Scientist and Senior Vice President of Research William Dally said Microsoft's AMP effort could prove easier to regulate than requiring AI-generated content to be watermarked in some fashion. However, he said pairing provenance techniques that verify authentic content with a regulatory scheme requiring AI-produced content to be watermarked is the best way to overcome the public's inability to discern real from fake as AI-generated content becomes more sophisticated.

"Those two technologies in combination can really help people to sort out (what is real and fake), along with a certain amount of public education to make sure people understand what the technology is capable of and are on guard for that."

Boston University School of Law Professor Woodrow Hartzog urged Senate Judiciary members to avoid "half measures" in AI regulation, such as "industry-led approaches" built on "transparency, mitigating bias and promoting principles of ethics." He recommended lawmakers accept that "AI is not a neutral technology," impose design-rule requirements and be bold enough to question whether certain AI systems should be developed in the first place, banning outright those deemed too dangerous.

Where privacy, AI meet

Senate Commerce members were candid about where a federal privacy law fits in with AI rules, while also warning against leaving privacy behind to tackle AI.

Sen. Jerry Moran, R-Kan., is all-in on AI regulation but was discouraged to see it take priority over privacy. He said, "It's a bit annoying that we are here now on AI when we've been unsuccessful on reaching conclusions on data privacy legislation. Just seems like one issue piles up after another, both of huge significance."

Commerce Committee Ranking Member Marsha Blackburn, R-Tenn., called for privacy, among other issues, not to be lost in the "hyper-focus on AI." She added, "For a decade I have worked to bring about a comprehensive data privacy law. … That is something that should be (the) first step. It's vital that my colleagues keep in mind that need as we look into AI."

There have been discussions in the U.S. House about reintroducing the proposed American Data Privacy and Protection Act after its movement in 2022, but a new draft has yet to materialize. The Senate is focusing its current data privacy endeavors on children's privacy and online safety, with a pair of bills now available for full consideration.

Senate Commerce witnesses fell in line with committee members' sentiments. The ITIC's Strayer said federal privacy regulation is "absolutely critical" in the context of safeguarding training data. "It doesn't need to be done first before moving on to AI regulation," he said, "but both have to be done."