Members of the financial and housing services industries told a U.S. congressional committee they prefer a targeted approach to artificial intelligence regulation rather than sweeping legislation like the EU AI Act.

The comments, offered during a House Committee on Financial Services hearing 23 July, illustrate the anxiety some companies are feeling as the landmark EU AI Act is set to take effect while the U.S. has yet to produce firm legislation of its own. Representatives from the lending, housing, stock market and AI sectors predominantly said they would like the U.S. to lean on existing regulations, offer targeted clarifications where needed, and set a uniform federal floor to prevent a state-created patchwork.

Both the housing and financial services sectors have been using various types of AI for decades, according to Scale AI Chief Technology Officer Vijay Karunamurthy. He told committee members that this early adoption shows the technology can have wide-ranging benefits when used safely, such as when a model trained on proprietary data helps develop customer insights.

“New regulations may not be necessary; however, a thorough and comprehensive gap analysis should be conducted to confirm whether or not (existing) regulations should apply,” Karunamurthy said.

If they do apply, Karunamurthy said, the U.S. should create risk-based, industry-targeted legislation, and the federal government should be the one to set the standards industry must follow.

Lawmakers and witnesses sought to draw distinctions between that approach and the one taken in Europe. While the AI Act has been hailed as the first major AI regulation globally, the push to finish the landmark legislation before the EU parliamentary elections was criticized as "rushed" and harmful to innovation, even as the European Commission allows members of the technology industry to write its codes of conduct.

The act is not the only area causing AI friction. Concerns over how the Digital Markets Act regulates so-called "gatekeeper" companies have led Apple to withhold some of its AI features in EU markets. And Meta, frustrated after its plan to use customer data to train its models drew regulatory scrutiny under the EU General Data Protection Regulation, said it will hold back from introducing future multimodal products in the EU. Both companies cited uncertainty from regulators.

U.S. lawmakers appear keen to avoid creating that kind of business uncertainty and to maintain the U.S.'s competitive edge in the AI market. House Financial Services Committee Chair Patrick McHenry, R-N.C., indicated it is "far better that we get this right rather than to be first" on regulation, while noting "we cannot allow the fear of the unknown (to) thwart the U.S.'s role as the hub of technological innovation."

McHenry added there needs to be more clarity that using generative AI does not absolve companies of their responsibility to comply with anti-discrimination laws. But for other uses, he said Congress "should be leery of rushing legislation."

National Fair Housing Alliance President and CEO Lisa Rice argued there are certain uses of AI where discrimination has been proven and should be acted upon quickly. She pointed to instances where participants in the federal Housing Choice Voucher program had their applications denied by an AI chatbot, and said dynamic pricing models used to set rental rates can produce inconsistent results.

"So we have to have tight controls on these types of technologies that can disproportionately discriminate against survivors of domestic violence, who are disproportionately women, people of color people who have disabilities and other protected classes," Rice said.

Frederick Reynolds, FIS Global's deputy general counsel for regulatory legal and chief compliance officer, said the EU's approach to AI has been restrictive. But he also called for the U.S. to adopt a uniform regulation to prevent a patchwork of state laws, as has occurred on the data privacy front.

The risk of a state patchwork is already taking shape through standalone bills and updates to state consumer protection laws.

Colorado and Utah passed broad AI laws during their respective 2024 legislative sessions, while other states introduced draft bills that could return and pass in future sessions. In states such as California, regulators are torn between creating protections on par with the EU AI Act and addressing concerns that such rules could stifle the economy.

Caitlin Andrews is a staff writer for the IAPP.