U.S. Congress has yet to pass a comprehensive artificial intelligence bill as other countries establish frameworks of their own. That is making some members of the U.S. House of Representatives anxious as they try to get back to business on drafting legislation this fall.
Members of the House Committee on Science, Space, and Technology held a joint hearing 18 Oct. to ask business leaders and academics how the U.S. should approach the risks associated with AI, a topic that touches on everything from personal data security to equity in algorithmic decision-making. The hearing was one of two the House conducted 18 Oct., as a Committee on Energy and Commerce subcommittee focused its attention on the data privacy implications of AI.
U.S. President Joe Biden and the U.S. Senate have been busy with AI work of their own, while House business has recently come to a standstill as members focused on government funding bills and the election of a new House speaker.
The White House previously secured voluntary AI governance commitments from big-name AI developers, including Google and OpenAI, ahead of an expected executive order on AI in the coming weeks. Meanwhile, the Senate has held a slew of public hearings in addition to its ongoing "AI Insight Forums."
"This is a great day to be having this hearing because we are working together in a bipartisan way, honestly, to try and find solutions to challenges that face our society," said U.S. Rep. Zoe Lofgren, D-Calif., during her opening statement.
Preemption top of mind
Without a federal benchmark in place, states are proposing their own AI laws. Michael Kratsios, managing director of Scale AI and former U.S. chief technology officer, pointed to California specifically.
Gov. Gavin Newsom, D-Calif., issued an executive order requiring government agencies to study AI developments, uses and risks. California state lawmakers have also introduced a bill that would set transparency, liability and security standards for AI systems.
Kratsios said the U.S. needs to preempt such laws so companies can have regulatory certainty when developing products.
"We've seen when patchworks do exist, ultimately, you're not able to see the quick proliferation of technologies ... across the country," he said.
U.S. Rep. Jay Obernolte, R-Calif., noted that states have already passed dozens of different privacy laws, which he said has complicated compliance for small businesses. Given those challenges, he asked Elham Tabassi, associate director for emerging technologies at the U.S. National Institute of Standards and Technology, whether it would make more sense for a regulator to learn everything about AI or to be taught best practices by businesses.
Tabassi was quick to note that NIST is a nonregulatory agency. But she said agencies like NIST can set metrics and provide test environments and evaluation standards for AI functionality and trustworthiness. Those elements are crucial to measuring AI performance and improving AI systems, regardless of the regulatory or policy landscape, she said.
Multiple competitive scopes
Interwoven throughout the hearing were concerns about competition at the international level. Lawmakers were especially focused on China and its heavy investments in research and regulation around AI.
The U.S. recently tightened limits on sales to China of the advanced semiconductors critical to powering advanced AI systems. The Biden administration has said the limits are meant to protect the U.S. against security threats, but they could also restrict Chinese companies' ability to develop AI-focused technology.
But Institute for Progress co-CEO Caleb Watney said the U.S. has other tools at its disposal for competition. He said the U.S. has an opportunity to become a leader in cybersecurity standards, as well as leverage its partnerships with other democratic countries.
"We have always been the home for global talent from all around the world. And that's enabled us to kind of be a pivotal leader in innovation," Watney said.
Bias risks, ethical solutions
How the government should approach the ethics of risk management was another focus of the hearing. Rep. Suzanne Bonamici, D-Ore., cited a Stanford Institute for Human-Centered Artificial Intelligence study detailing how flawed data collection can hurt low-income borrowers when creditors rely on algorithms to calculate risk.
She asked Emily Bender, a professor of linguistics at the University of Washington, how to reduce biases that would lead to such results.
"I think first we have to recognize there's no such thing as an unbiased training data set," Bender said.
But there are ways to lessen those biases, she added, such as standards for documenting how AI systems are trained.
"Because that allows us to ask questions like, 'Is this trained on a set of data that would lead us to suggest, to want the patterns in that training data,'" Bender said.