Artificial intelligence's ability to revolutionize society and economic productivity will largely be shaped by how governments react to various deployments and use cases. The most common reaction will be the introduction of new laws, even as AI deployments simultaneously challenge and potentially upend existing laws and established legal frameworks.

At the IAPP and the Berkman Klein Center for Internet and Society's Digital Policy Leadership Retreat 2025, legal professionals and scholars discussed the legal fallout stemming from novel applications of AI. Conversations included observations on how the training of AI models is throwing a wrench into long-held principles of copyright law and into the legal system itself.

AI's impact on U.S. legal cases

Issues surrounding AI are already being litigated in the U.S. court system.

U.S. Court of Federal Claims Judge Molly Silfen said during a retreat breakout session that courts are confronting a range of major legal concerns related to AI. Potential copyright violations by AI developers, the use of AI in obtaining patents, lawyers' use of the technology and the ability of judges to use AI to write their decisions are among the unresolved issues.

High-profile developer copyright cases are beginning to set some precedent.

According to Reuters, the U.S. District Court for the Northern District of California recently ruled Anthropic did not violate copyright law by using books to train its AI system. However, the company must still face trial on a separate claim, with Judge William Alsup finding that Anthropic's copying and storage of pirated books "infringed the authors' copyrights and was not fair use."

Rebecca Tushnet, the Frank Stanton Professor of the First Amendment at Harvard Law School, said the impact AI will eventually have on perpetuating potential trademark violations will require rethinking the foundational approaches that have defined current trademark law.

"The liability arguments for trademark and rights to publicity always depend on the outputs, unlike the AI training arguments we see in copyright law," Tushnet said. "These outputs usually come from noncommercial end users, which means that the intuition lawmakers have that AI means we have to regulate more heavily, or we need to give people more rights to their rights to publicity; we're going to have to start covering activities that we would have previously been considered non-commercial (under trademark law)."

According to Tushnet, the future use of generative AI to create content and behavioral advertising based on personal data could disrupt the legal ecosystem. To date, courts have fostered accountability among advertisers, who can be subjected to class action lawsuits if they falsely market a product.

"Currently a lot of advertising law enforcement is done through class action (lawsuits). You get people who have been exposed to the same ad and they sue over the misrepresentation that was in that ad," Tushnet said. "I'm not sure how long that will stay viable in a world where the average consumer has seen an ad that is tailored just for them."

AI's use in legal proceedings is also a thorny area.

Silfen indicated her court has begun developing rules to govern how lawyers can use AI, including the potential implementation of AI disclosure requirements in the preparation of legal briefs. Lawyers who used AI would have to attest that the submitted information was verified and accurate.

However, she said her court ultimately has refrained from putting rules into force for the time being.

"What we came down to, at least for now, is that there was not a strong case for us to develop a specific rule disclosing AI uses because, to some extent, it is always a lawyer's obligation to make sure that what they say to the court is accurate," Silfen said. "I still worry about AI. I worry about it from a confidentiality perspective because the inputs (in legal filings) are confidential and I don't want the drift getting out there, so there's sort of a minefield to navigate.”

How AI is affecting global copyright frameworks

Copyright protection is arguably the area of law facing the most disruption from the proliferation of AI.

In a retreat breakout session, Harvard Law School Professor William Fisher presented research that examined how 16 countries and the EU as a whole are applying copyright law to AI development and use.

The research generated eight legal criteria commonly used across jurisdictions to assess AI. The criteria included whether AI developers can legally train models on copyrighted material, fair-use privileges, exceptions for both commercial and non-commercial text and data mining with and without opt-outs for rights holders, extended collective licensing provisions for rights holders, and transparency obligations.

Among the jurisdictions Fisher covered, China, France, Saudi Arabia and the United Arab Emirates either are considering or have some legal mechanism in place to prevent the use of copyrighted material in training AI. Israel is the only country that currently allows some form of fair use of copyrighted work for training AI. Canada, India, South Korea and the U.S. are exploring potential legislation to allow fair-use exceptions to varying extents.

On transparency obligations, Fisher said the EU as a bloc is the only jurisdiction of the 17 he examined that requires developers to disclose what material their AI models were trained on. However, he noted Brazil, Chile, France and the U.K. were in varying stages of drafting potential legislation governing transparency obligations on the part of AI developers.

"(The different jurisdictions) have very polar positions," Fisher said. "Some are taking a very harsh view on this activity. Some are taking a very permissive view, and there are lots of positions in between."

World Intellectual Property Organization Copyright Law Division Director Michele Woods said the absence of a standardized global copyright regime leaves her organization tasked with administering several international copyright treaties among WIPO's 194 member states.

Woods said international organizations like the WIPO will serve a crucial role as generative AI puts further strains on countries' existing copyright legal frameworks. That role includes potentially setting a baseline standard for how copyright holders can be adequately compensated while also avoiding undue burdens on AI innovation.

 "At least on a minimum level, we think we have a role of harmonizing some aspects of (countries' own) copyright laws,” Woods said. "In terms of taking a position on AI, we follow our member states. We have quite clearly set out a human centered approach that is focused on the impact (of AI) on intellectual property, individual creators and innovators."

Alex LaCasse is a Staff Writer at the IAPP.