Artificial intelligence and machine learning raise a kaleidoscope of interconnected issues for policymakers. Even as the rise of generative AI applications focuses our attention on the role consumer protection law could play in regulating AI-powered systems, other policy narratives still dominate the conversation. How can the U.S. ensure continued "competitiveness" on AI compared with other countries? How do we ensure responsible government and military applications of new tools? Do new algorithmic impacts demand new individual rights, or just updated guidance on existing protections such as antidiscrimination rules?

Untangling these concerns matters because different congressional committees have primary authority over solving different legal challenges. For one, the Senate Committee on the Judiciary is not waiting for an invitation to investigate AI issues. This week, the body's Subcommittee on Courts, Intellectual Property, and the Internet held a hearing on patents, innovation and competition in AI.

At the same time, the Judiciary Committee's Subcommittee on Privacy, Technology and the Law is also thinking about solutions. According to Axios, Sen. Josh Hawley, R-Mo., ranking member of the subcommittee, is unusually aligned with chair Sen. Richard Blumenthal, D-Conn. Reportedly, Hawley is "unveiling, and circulating to colleagues, a framework for AI legislation focused on corporate guardrails. The five principles in the outline, according to a document viewed by Axios, are:

  1. Creating a legal avenue for individuals to sue companies over harm caused by AI models.
  2. Imposing stiff fines for AI models collecting sensitive personal data without consent.
  3. Restricting companies from making AI technology available to children or advertising it to them.
  4. Banning the import of AI-related technology from China and prohibiting American companies from assisting China's development of AI.
  5. Requiring licenses to create generative AI models."

Though the details of the proposal are not public, Hawley's support for a private right of action seems unlikely to be shared by many fellow Republicans.

Almost certainly, legislation addressing these points would need to go through the Senate Committee on Commerce, Science, and Transportation. But it remains to be seen what working relationship and legislative approach Chair Sen. Maria Cantwell, D-Wash., and new ranking member Sen. Ted Cruz, R-Texas, will chart on issues related to technology, data, innovation and fairness.

Meanwhile, Senate Majority Leader Chuck Schumer, D-N.Y., released a letter to his colleagues announcing internal briefings on AI this summer, including a "first-ever" classified briefing on current military and intelligence uses of AI and the question, "what do we know about how our adversaries are using AI?" This builds on Schumer's efforts to bring together a working group and refine a discussion framework on AI issues.

And let's not forget the House, where last year's bipartisan American Data Privacy and Protection Act proposed new rules governing AI systems. In addition to antidiscrimination and algorithmic decision-making provisions, the bill included mandatory algorithmic impact assessments and design evaluations, all of which would be enforced by the U.S. Federal Trade Commission and state attorneys general. Given that interest in AI rules has only increased since the last version of the ADPPA was released, privacy rules are likely to continue playing an active role in AI governance in the bill's next iteration.

Here's what else I'm thinking about:

  • The U.K. and U.S. are building a data bridge. Prime Minister Rishi Sunak and President Joe Biden released the "Atlantic Declaration," announcing a commitment "in principle to a new UK-US Data Bridge" and U.S. support for the U.K.'s summit to bolster the safe and responsible development of AI. The data bridge would be a "UK Extension to the EU-US Data Privacy Framework, subject to the UK's data bridge assessment ... dependent on the US designation of the UK as a qualifying state under Executive Order 14086."
  • Details matter for kids' privacy, to the tune of $20 million. The FTC settled a new complaint against Microsoft, based on alleged gaps in the company's compliance with the Children's Online Privacy Protection Act Rule in its Xbox service. This is the second case in as many weeks focused on the affirmative deletion requirements in COPPA. Keeping kids' personal data longer than reasonably necessary to deliver the service — such as account creation data collected and stored for users who never proceeded past the parent permission stage of sign-up — is not allowed under the COPPA Rule. Microsoft also allegedly collected users' phone numbers too early in their sign-up flow, before parents gave permission, and failed to provide the proper level of detail to parents in consent notices.
  • New industry standards for the use of AI in hiring and recruiting. BBB National Programs and its Center for Industry Self-Regulation published the results of an industry convening that developed principles for the use of AI-powered systems in the employment context. In addition to a detailed set of governing principles, this industry-led convening created protocols for an independent certification mechanism to bring accountability to these systems, aligned with the NIST AI Risk Management Framework. Even as regulators take a close look at the use of AI in employment, efforts like this could help to reduce the friction between AI developers and AI deployers, refining our understanding of who bears responsibility for these systems.
  • Nevada sent its version of the My Health My Data Act to the governor's desk. SB 370, now passed by both houses, does not create a private right of action and also distinguishes itself from Washington's law with narrower definitions and restrictions. Connecticut also recently passed expanded health data privacy rules.
  • The adult content industry thinks age-verification should be solved at the hardware layer. CNN reports that Pornhub has been lobbying Microsoft, Apple and Google "to jointly develop a technological standard that might turn a user's electronic device into the proof of age necessary to access restricted online content." The company is also beseeching its users to contact their representatives to stop the spread of the types of rules that caused it to block IP addresses in Utah.

Upcoming happenings:

  • 10 June at 13:00 EDT, the Future of Privacy Forum and friends will march in the 2023 Capitol Pride Parade. All are welcome to join, but you must register.
  • 11 June at 23:59 EDT, speaker submissions are due for IAPP's AI Governance Global in Boston on 2-3 Nov.
  • 12 June at 17:30 EDT, the Internet Law & Policy Foundry presents its annual Foundry Trivia (Terrell Place).

Please send feedback, updates and guardrails to cobun@iapp.org.