U.S. Congress' latest attempt at crafting comprehensive federal privacy legislation comes as the digital policy landscape is focused on how the concept of data privacy intersects with artificial intelligence.

The American Privacy Rights Act discussion draft includes some holdover AI provisions from its predecessor, the American Data Privacy and Protection Act, including how it defines covered algorithms and civil rights protections. Blending privacy and AI regulation quickly became a goal for U.S. lawmakers as they sought to strike a balance between AI innovation and harm mitigation while deployments increased dramatically in recent years.

"I strongly believe that the bedrock of any AI regulation must be privacy legislation that includes data minimization and algorithmic accountability principles," House Energy and Commerce Ranking Member Frank Pallone, D-N.J., said during an October 2023 House subcommittee hearing on AI. "Clearly defined rules are critical to protect consumers from existing harmful data collection practices and to safeguard them from the growing privacy threat that AI models pose."

Pallone still wants the APRA to fit the current AI landscape, according to comments made during the House Energy and Commerce Committee's latest subcommittee hearing, held 17 April, exploring the APRA's prospects. He said the discussion draft would need to be looked at closely during further drafting to see if it "adequately reflects what we've learned about artificial intelligence."

A handful of AI-specific bills have been introduced, but most policy endeavors at the federal level have stemmed from the White House's executive order on AI — which called for a comprehensive privacy law to address the privacy harms that can come from companies using AI to extract and use personal data.

Looking at AI directly

Sections 13 and 14 of the APRA discussion draft deal with AI more explicitly. The bill tackles AI through "covered algorithms," defined as a computational process that uses "machine learning, statistics, or other data processing or artificial intelligence techniques," or other computational processing techniques, and that either makes decisions or helps a human make one using covered data.

Use cases, according to the draft, include algorithms deployed for "determining the provision of products or services or ranking, ordering, promoting, recommending, amplifying, or similarly determining the delivery or display of information to an individual." Covered data is defined as information that either identifies or could be linked to a person.

The proposed APRA allows a person to opt out of the use of algorithms that make or help make a consequential decision. It also requires large data holders that use covered algorithms in consequential decisions to conduct impact assessments beginning two years after the law takes effect.

Impact assessments will require detailed descriptions of the design process and methodologies, statements of purpose and a detailed description of the data used to train the model powering a given algorithm. Data holders will have to show the steps they are taking to prevent harm to minors, bias, and restricted access to necessities such as housing, education or public accommodation. Developers that create an algorithm will have to conduct a design evaluation prior to its release.

The Federal Trade Commission, in conjunction with the Secretary of Commerce, is charged with studying impact assessments and evaluations and, three years after the law takes effect, reporting on best practices for both and on how to reduce harms.

There's also a focus on civil rights in the APRA's AI provisions. Covered entities are not allowed to collect, process, retain or transfer covered data "in a manner that discriminates in or otherwise makes unavailable the equal enjoyment of goods or services on the basis of race, color, religion, national origin, sex, or disability." Exceptions are made when an entity or service provider is self-testing to check for discrimination, diversifying an applicant pool, operating a private club not open to the public, or advertising or soliciting economic opportunities to protected-class populations.

The FTC is empowered to refer any civil rights violations to the appropriate authority for investigation. The agency is also authorized to launch rulemaking around how impact assessments are to be submitted and which algorithms are considered low-risk enough not to require an assessment. It would issue guidance on how to comply with the opt-out and civil rights provisions.

Considering future impact

Stakeholders said they would be watching the bill closely to see how both direct and adjacent AI issues are handled.

For instance, the section around data minimization does not mention AI directly, but could have big consequences for the industry, according to WilmerHale Partner Kirk Nahra, CIPP/US.

The bill would limit covered entities to collecting only the data they need to provide a service or communications, or for other select purposes such as protecting data security or complying with legal obligations. Nahra said lawmakers should consider how any data minimization provisions might affect AI, noting some sensitive data may be useful for training algorithms when properly protected, and there should be a way for companies to collect that data responsibly.

"Data minimization is a fundamental tension between privacy regulation in AI," he said. "The two are fundamentally inconsistent, and there's no way around that."

Luminos.Law Partner Brenda Leong, CIPP/US, said the proposed APRA has some synergies with the EU's AI Act, such as how it views risk around large data holders and the requirements for impact assessments. However, she cautioned that attempting to regulate AI within the context of a privacy bill could be tricky.

Leong said traditional privacy protections and the way data is collected for automated decision-making technology do not always fit neatly together.

"I think (the current APRA discussion draft) is a missed opportunity for the U.S. to stop calling it 'privacy' and start calling it data governance, more generally," she said. "It would manage expectations more efficiently."

Consumer Reports Director of Technology Policy Justin Brookman indicated provisions around preventing discrimination are likely to be noncontroversial given the attention being paid to the dangers of bias connected to data being used to train AI algorithms.

"Most people, at least out loud, are not going to be saying 'No, we should not be checking our assessments for bias,'" he said.

But Brookman said he would like to see more clarity in future APRA drafts around how people will be informed that an algorithm is being used in a consequential decision and how to appeal such a decision. Leaving the current draft language as-is could create uncertainty, since the FTC's powers under the current APRA draft only allow for guidelines, rather than rulemaking, on the topic of consequential decisions.

"So even like five to 10 years from now, we still might not even know what some of this means, which I think would be bad for consumers and companies," Brookman said.

BSA, The Software Alliance, Senior Director of Policy Shaundra Watson is watching to see how Sections 13 and 14 of the APRA draft and their AI governance aspects evolve. She is also focused on what direction the FTC will supply companies regarding how to best manage various AI risks that may arise.

"We really want to ensure that the roles are properly assigned so that companies are best able to implement the rules and mitigate the risks they have the ability to manage," she said.