Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
It was a perfect spring day outside the Hart Senate Office Building yesterday. A series of much-needed rainstorms had brought Washington, D.C., back to the blinding botanical glow that usually permeates the city this time of year. The dogwoods were in full bloom.
Inside, it was also springtime for artificial intelligence innovation. The U.S. Senate Committee on Commerce, Science and Transportation was hosting a hearing on "winning the AI race" featuring high-profile witnesses from across the AI supply chain, including infrastructure, hardware and software companies.
The full hearing is worth watching, perhaps more so than many other hearings, as it includes frank discussions across a wide range of AI governance issues and many signals from Senators about how their thinking has evolved — and diverged — in recent years.
U.S. Sen. Ted Cruz, R-Texas, did not mince words as he called the committee to order. The committee chair's opening statement presented a vision for the future of U.S. leadership on AI mostly by painting a clear picture of what such a future should not include.
'Busy-body bureaucracy'
Railing against former President Joe Biden’s executive order on AI, "the longest (executive order) in U.S. history," Cruz attacked the perception that it embraced AI as "dangerous and opaque." In Cruz's telling, the order laid the groundwork for mechanisms like audits, risk assessments and regulatory certifications, which "inspired similar efforts in state legislatures, threatening to burden startups, developers, and AI users with heavy compliance costs."
For Cruz, even interventions that others would call light touch, such as testing regimes, are at best unnecessary and at worst actively pernicious. Guidance documents are "something out of Orwell" meant to "usher in what (proponents) call 'best practices' as if AI engineers lack the intelligence to responsibly build AI without the bureaucrats."
Senate Commerce Ranking Member Maria Cantwell, D-Wash., did not explicitly refute Cruz's picture of AI governance, instead focusing her remarks on the role government funding, via the National Science Foundation and similar efforts, plays in fostering private-sector research and infrastructure growth. Still, her support for standards that ensure a clear U.S. approach to AI was implicit in her remarks.
"I’m all for winning," Cantwell said, highlighting her work in the prior term to move swiftly on supporting AI via the passage of the CHIPS and Science Act along with seven other bills that were passed out of committee but, as she put it, "kind of got stuck in the lame duck." Among these, Cantwell explicitly called out her Future of AI Innovation Act, which would have created testbeds for AI and authorized the National Institute of Standards and Technology's AI Safety Institute to develop AI standards, and the bipartisan AI Research, Innovation and Accountability Act, which would have established a certification regime, among other things.
Cruz now appears to oppose all such interventions, referring to the "busy-body bureaucracy" they would create as "a wolf in sheep’s clothing."
Carry on my wayward innovator
Much of the hearing focused, as promised, on infrastructure and hardware and the types of initiatives designed to foster U.S. leadership in those areas. There was also lengthy debate around U.S. export controls and whether they stand in the way of winning the AI race.
But those portions of the discussion that strayed higher up the tech stack included insights about the role of AI governance norms in the near term.
For example, the testimony from Microsoft President Brad Smith reiterated his vision that the AI race is a marathon, not a sprint. As he put it, "The country can win a lap but lose the race if it fails to bring together all the ingredients needed for success." Further, "It is a race no company or country can win by itself."
Open and accessible data was a clear theme among witnesses and senators alike. In Microsoft's written submission, Smith called for more open public datasets. "By making government data readily available for AI training, the United States can significantly accelerate the advancement of AI capabilities, driving innovation and discovery."
Standards for AI governance came up in pockets of discussion throughout the hearing. Though Cruz referred to standards as a "code word for regulation," witness Sam Altman, the co-founder and CEO of OpenAI, suggested that industry could be trusted to figure out the right standards and guardrails on its own.
These thoughts echoed Altman’s remarks at IAPP's Global Privacy Summit 2025, where he highlighted the "extreme focus" on privacy and other AI risks from boards and CEOs.
Collectively, the committee members' questions covered the entire playbook of AI governance mechanisms. Sen. Brian Schatz, D-Hawaii, asked about labeling of generative AI content. Sen. John Hickenlooper, D-Colo., had much to say about independent auditing and evaluations. Sen. Todd Young, R-Ind., discussed a bill he is working on related to AI literacy and awareness. Sen. Jerry Moran, R-Kan., discussed privacy and cybersecurity, bemoaning the failure to pass comprehensive privacy protections.
Got my mind on preemption
Overall, the hearing showed just how much work remains to move forward on a bipartisan approach to AI regulations at all layers of the tech stack. But it was also clear such work is already ongoing.
For example, Cruz used the opportunity to hint at a major piece of legislation he will soon unveil. He explicitly hopes to echo the approach taken by President Bill Clinton and the Congress of the mid-1990s in fostering a light-touch approach to internet regulation.
The Cruz method will be "intentionally and decisively light touch," creating a "regulatory sandbox for AI." And though there was no explicit regulatory sandbox for the internet in the 1990s, his intent is to re-create the policies of that era, which focused on avoiding premature regulation, encouraging self-regulation and even providing liability shields such as Section 230.
Among the things Cruz promises this future bill will do, apart from supercharging AI development and adoption, is to "prevent needless state over-regulation."
That is to say, decisively preempting state AI guardrails remains a top priority for the senator from Texas.
Though many of the underlying ideas in AI policy have not evolved in the past year, it is clear we have moved even further into the fast lane as Congress considers the U.S. approach.
Please send feedback, updates and best practices to cobun@iapp.org.
Cobun Zweifel-Keegan, CIPP/US, CIPM, is the managing director, Washington, D.C., for the IAPP.
This article originally appeared in The Daily Dashboard and U.S. Privacy Digest, free weekly IAPP newsletters. Subscriptions to this and other IAPP newsletters can be found here.