Amazon-Perplexity dispute raises questions over AI agent liability

Legal dispute highlights unanswered questions about AI agents' authorization and CFAA liability.

Contributors:
Joel Schwarz
CIPP/G
Managing Partner
The Schwarz Group
In November 2025, Amazon filed a complaint against Perplexity in the U.S. District Court for the Northern District of California, alleging that Perplexity's use of agentic artificial intelligence on Amazon's platform violates the federal Computer Fraud and Abuse Act, as well as California's equivalent of the CFAA, the Comprehensive Computer Data Access and Fraud Act, California Penal Code § 502, which the state enacted to address unauthorized access to computer systems, networks and data. Amazon also moved for a preliminary injunction against Perplexity, which was granted 9 March 2026.
This case arrives at a pivotal moment — one underscored by the number of third-party amicus curiae briefs already submitted. As AI agents increasingly act on behalf of humans — browsing, clicking and transacting across the web — courts will be forced to confront a deceptively simple question: when an AI bot breaks the rules, who is legally responsible?
To answer that question, we need to revisit two landmarks: the scraping battle between hiQ Labs and LinkedIn, and the U.S. Supreme Court's 2021 decision in Van Buren v. United States, which fundamentally reframed what the CFAA prohibits.
LinkedIn sent hiQ a cease-and-desist letter asserting that hiQ was violating LinkedIn's User Agreement, that LinkedIn had "implemented technical measures" to block hiQ's scraping activity, and that any further access by hiQ would violate the CFAA. In response, hiQ sought an injunction to prevent LinkedIn from blocking its access to LinkedIn's publicly available website; the injunction was granted in August 2017 and upheld by the U.S. Court of Appeals for the Ninth Circuit. Dissatisfied with the result, LinkedIn petitioned the Supreme Court for review.