
US House subcommittee dives into privacy, AI legislative recommendations

A sense of inevitable symbiosis between privacy and artificial intelligence is coming to a head for U.S. lawmakers. A dedicated AI hearing before a subcommittee of the U.S. House Committee on Energy and Commerce signaled that comprehensive federal privacy legislation remains an urgent matter for Congress, especially in the AI context.

Members of the subcommittee, including American Data Privacy and Protection Act co-sponsors Cathy McMorris Rodgers, R-Wash., and Frank Pallone, D-N.J., pressed several witnesses on how future AI regulations should be developed. The conversation, however, repeatedly returned to the need to pass a comprehensive privacy law, such as the ADPPA, before meaningful AI legislation can be crafted.

"I strongly believe that the bedrock of any AI regulation must be privacy legislation that includes data minimization and algorithmic accountability principles," Pallone said. "Clearly defined rules are critical to protect consumers from existing harmful data collection practices and to safeguard them from the growing privacy threat that AI models pose."

The hearing was temporarily interrupted by a House floor vote to elect a new Speaker of the House, one of the matters, along with the government funding debate, that has slowed House business since the chamber returned from its August recess.

Prior to those competing priorities, the ADPPA had not gained traction in the House during the 118th Congress. In 2022, the House Energy and Commerce Committee voted nearly unanimously to send the ADPPA to the House floor, where the bill sat for the remainder of the legislative session.

Privacy still a bipartisan concern

Last year's 53-2 Energy and Commerce Committee vote on the ADPPA showed House Democrats and Republicans finally had a privacy bill they could rally around. Wednesday's hearing demonstrated the bipartisan momentum that still exists to develop federal privacy and, eventually, AI legislation.

"It is critical that Congress create an environment to foster innovative uses of (AI) technology while also protecting Americans from possible harms," U.S. Rep. Larry Bushon, R-Ind., said. "Enacting a national data privacy framework, such as the one we passed through this committee last year with ADPPA, to establish clear rules of the road for the U.S. is a key factor in deploying effective AI that will help us innovators keep their edge against competitors abroad."

Each of the subcommittee's witnesses reinforced the need to pass federal privacy legislation and supported the objectives of the ADPPA as previously drafted.  

AI Now Institute Executive Director Amba Kak discussed how the existing notice-and-consent mechanisms deployed online by Big Tech companies are insufficient to protect U.S. consumers' personal information, both now and as increasingly sophisticated AI models are deployed in the future.

"We're seeing new privacy threats emerge from AI systems; we talked about the future threats and how we don't know where AI is going to go, but we absolutely do know what harms they're already causing," Kak said. "Unless we have rules of the road, we're going to end up in a situation where this kind of state of play against consumers is entrenched."

Kak added the problem is threefold: privacy risks, competition risks brought by "unchecked commercial surveillance" and national security risks stemming from the sheer volume of unnecessary personal data collected by companies, which creates a "honeypot" for cybercriminals. She said the personal data protections offered in the ADPPA, such as data minimization requirements, along with existing regulatory powers, would serve as adequate baseline protections against AI-related harms without necessarily requiring stand-alone AI legislation.

Privacy before AI a must

Other witnesses generally agreed that a federal privacy law with requirements similar to those spelled out in the ADPPA is vitally necessary, though some went slightly further than Kak, calling for Congress to pass AI-specific laws once it establishes a federal consumer privacy standard.

Former U.S. Federal Trade Commission Chair Jon Leibowitz indicated a comprehensive AI law will be necessary eventually but could be several years away from materializing.

Leibowitz said a federal privacy law, however, needs to precede a federal AI law, and warned against letting the states create their own "patchwork" of laws to grapple with AI advances without a federal privacy law in place. The U.S. is already seeing a proliferation of comprehensive state privacy laws, with 13 states enacting their own statutes amid federal lawmakers' inertia in passing a national privacy standard.

"We live in an era in which data is incessantly collected, shared, used and monetized in ways never contemplated by consumers themselves," Leibowitz said, pointing to perceived shortcomings on data minimization and sensitive data collection limits in the California Consumer Privacy Act. "AI has amplified these disturbing trends because consumers have so little control over their personal data, and it is shared at-will by companies that can deploy AI so perniciously."

A federal privacy law may give regulatory authorities such as the FTC additional legal footing to address the negative impacts consumers endure from defective AI models. Ultimately, however, Leibowitz said Congress will need to devise a comprehensive AI law regardless.

"Some large companies have developed ethical approaches to the use of AI, but most businesses are looking for direction." Leibowitz said. "And unfortunately, they're not going to get too much direction from existing laws and regulatory authorities, which are not an adequate match for the problems created by misuse of AI."

Substance of AI impact assessments a matter of debate

There was some daylight between witnesses regarding what any AI law should mandate.

One key area of divergence was the potential need for independent impact assessments of new AI models. At one point, Rep. Jan Schakowsky, D-Ill., questioned BSA President and CEO Victoria Espinel and AI Now Institute's Kak on how such impact assessments should be certified.

Schakowsky said she supported the Algorithmic Accountability Act and was "glad" to see many of its provisions folded into the ADPPA last year to prevent decisions about "life-altering opportunities" in health care, housing and employment, for instance, from being made by "untested and biased" AI systems.

Espinel said enterprise-level technology companies have employed strategies such as red-teaming and conducting impact assessments "at every step" of AI development to ensure "companies are acting responsibly." While she said she supported legal requirements for companies creating high-risk AI systems to conduct impact assessments, she disagreed that the assessments need to be performed by third-party auditors, noting there is currently "no accredited body" and no "commonly-agreed standard" for conducting such an audit.

"I like to think of the impact assessment as a report card that measures the intent of the system … if the report card comes back, and it shows that there are problems, then the companies are going to have steps they can take to address them," Espinel said. "There are some pieces that are missing right now, in order to have a system of third-party audits that would work effectively at the moment."

In response to Schakowsky's questioning, Kak said any future impact assessment regime for high-risk AI systems should require assessments to be made public prior to a given model's release for consumer use. She also said the performance metrics comprising an impact assessment, however they are specified in a bill, cannot be developed solely by AI developers, given the risk of "industry capture."

"While we are absolutely in favor of impact assessments, and that companies should be of course evaluating risks, we're worried about a situation where companies are essentially grading their own homework," Kak said. "There is a very high risk of industry capture when it comes to auditing standards, so we need to make sure that the terms of the debate are set by the public and not the industry."
