ANALYSIS

Amazon-Perplexity dispute raises questions over AI agent liability

Legal dispute highlights unanswered questions about AI agents' authorization and CFAA liability.


Contributors:

Joel Schwarz

CIPP/G

Senior Director, Corp Data Privacy

Natera

In November 2025, Amazon filed a complaint against Perplexity in the U.S. District Court for the Northern District of California, alleging that the use of Perplexity's agentic artificial intelligence on Amazon's platform violates the federal Computer Fraud and Abuse Act, as well as California's equivalent of the CFAA, the Comprehensive Computer Data Access and Fraud Act (California Penal Code § 502), which the state enacted to address unauthorized access to computer systems, networks and data. Amazon also moved for a preliminary injunction against Perplexity, which was granted 9 March 2026.

This case arrives at a pivotal moment — one underscored by the number of third-party amicus curiae briefs already submitted. As AI agents increasingly act on behalf of humans — browsing, clicking and transacting across the web — courts will be forced to confront a deceptively simple question: when an AI bot breaks the rules, who is legally responsible? 

To answer that question, we need to revisit two landmarks: the scraping battle between hiQ Labs and LinkedIn, and the U.S. Supreme Court's 2021 decision in Van Buren v. United States, which fundamentally reframed what the CFAA prohibits.

LinkedIn sent hiQ a cease-and-desist letter, asserting that hiQ was violating LinkedIn's User Agreement, that it had "implemented technical measures" to block hiQ's scraping activity, and that any further access by hiQ would violate the CFAA. HiQ sought an injunction to prevent LinkedIn from blocking its access to LinkedIn's publicly available website, which was granted in August 2017 and upheld on appeal by the Court of Appeals for the Ninth Circuit. Dissatisfied with the result, LinkedIn appealed to the Supreme Court.

Concurrently, the Supreme Court decided Van Buren v. United States. Van Buren, a police officer, was convicted under the CFAA for using his authorized database access to run an unofficial license plate search in exchange for money. 

In reversing Van Buren's conviction, the Supreme Court adopted a "gates-up/gates-down" test: accessing areas of a system you are already authorized to access — even for an improper purpose — does not constitute a CFAA violation. A violation requires bypassing an explicit barrier such as a password or technical block — a "down gate." Van Buren left open the possibility that a down gate could be circumvented by means other than technical measures.

Importantly, Van Buren noted that if we read the authorized access clause of the CFAA to criminalize every violation of a computer-use policy — such as when access to a website or service is dependent upon "agreement to follow (a) specified terms of service" — we risk extending "criminal penalties to a breathtaking amount of commonplace computer activity."

Based on Van Buren, the Supreme Court remanded LinkedIn v. hiQ to the Ninth Circuit, which reaffirmed its injunction against LinkedIn, concluding that a "defining feature of public websites is that their publicly available sections lack limitations on access."

Returning to Amazon's complaint against Perplexity, several similarities emerge. 

Perplexity's Comet AI agent is alleged to violate Amazon's Conditions of Use, which require identification of AI agents, a requirement that was communicated directly to Perplexity executives. Despite warnings, Perplexity continued to access Amazon customer accounts, prompting Amazon to block Comet's digital fingerprint. Perplexity then released a software update to evade Amazon's blocking, after which Amazon threatened legal action, sent Perplexity a cease-and-desist letter, and ultimately filed a complaint.

Amazon's complaint alleges two CFAA violations, neither of which appears actionable under the CFAA after the Van Buren decision. While Perplexity may not have been a model netizen, the CFAA was simply not designed to reach violations rooted in contractual obligations.

The first of Amazon's CFAA claims alleges a violation of § 1030(a)(2): Perplexity accessed Amazon's servers without, or in excess of, authorization, masked its digital fingerprint to avoid detection, and autonomously browsed and placed orders within customers' individual Amazon accounts, thereby obtaining private information. But Comet only accesses an account after a legitimate Amazon customer logs in with their own credentials and voluntarily delegates access to the AI agent acting on their behalf. There is no gate to circumvent, as the authorized account holder opened the gate for the AI.

Amazon argued, however — and the District Court agreed — that Facebook v. Power Ventures clarified that authorization from the account owner is not enough; one also needs the authorization of the platform owner. And according to the Facebook case, once a platform explicitly revokes authorization, any future access becomes unauthorized under the CFAA.

But therein lies a subtle yet important distinction between the Facebook and Amazon cases — one that the court did not appear to take into account in its 9 March ruling. 

Specifically, Facebook's cease-and-desist letter explicitly and unequivocally revoked Power Ventures' authority to access its site, putting "Power on notice that it was no longer authorized to access Facebook's computers."

Amazon's cease-and-desist letter, on the other hand, did not revoke Perplexity's access. In fact, Amazon's letter specifically leaves open the door, or, more aptly, the gate, for Perplexity to continue accessing the platform, so long as it does so transparently and in accordance with Amazon's Conditions of Use.

The letter states, "Perplexity must stop disguising Comet as a Google Chrome browser, transparently identify Comet AI agents ... when operating in the Amazon Store (as required by Amazon’s Conditions of Use)." 

At best, Perplexity is "exceeding authorized access" by failing to identify itself transparently, but that failure is defined by reference to Amazon's Conditions of Use, which returns us squarely to a contractual dispute, and Van Buren held that a contractual violation alone is not enough to invoke the CFAA.

In short, this nuance — access conditioned on contractual compliance — is precisely what Van Buren was designed to address, but one that the District Court appears to have overlooked. Tying CFAA liability to terms-of-service compliance creates ambiguity and confusion, which is always problematic when applying a criminal statute to otherwise civil conduct. 

If anything, it's the customers who are violating Amazon's Conditions of Use. Notably, those conditions, including the provisions governing agents, are addressed to Amazon users who "use, allow, enable, or cause the deployment of an Agent," not to third parties like Perplexity.

Amazon apparently reached a similar conclusion when it initially suspended customer accounts before reversing course. That reversal is understandable, but targeting the users who deploy the AI agent may be the more legally appropriate remedy, given the basic agency principle that a principal is liable for their agent's conduct.

In this regard, Ryanair DAC v. Booking Holdings is instructive. Ryanair sued Booking Holdings and affiliated platforms, including Kayak and Priceline, for scraping data from Ryanair's login-restricted ticketing portal; Booking moved to dismiss, arguing the CFAA applies only against the party directly performing the unauthorized access. Denying the motion, the court held that a party who directs, induces, or controls another to commit a CFAA violation may itself be vicariously liable — a theory a jury accepted against Booking.com at trial in 2025, before the verdict was set aside for failure to meet the CFAA's USD 5,000 loss threshold. Although Ryanair appealed, the parties settled on 26 Aug. 2025.

Nonetheless, the vicarious liability ruling maps directly to agentic AI, where a user directs AI to act on their behalf.

Amazon also alleges Perplexity violated § 1030(a)(4) by violating Amazon's Conditions of Use and intentionally masking Comet's digital fingerprint to impersonate legitimate user requests — "with intent to defraud" — thereby furthering the intended fraud and obtaining something of value. But Comet's fingerprint-masking is arguably analogous to hiQ switching IP addresses to evade LinkedIn's blocking — conduct the Ninth Circuit did not find sufficient to establish a CFAA violation. Moreover, if access is authorized from the outset, as argued above, we never reach the technical-barriers question. At best, this again falls into "exceeding authorized access" territory contingent on violating Amazon's contractual restrictions, which is not enough to establish a CFAA violation.

That's not to say Perplexity walks away unscathed — Amazon's concerns are valid and should be addressed through the courts if necessary. But the CFAA is not the right tool. As the Ninth Circuit noted in hiQ v. LinkedIn, options include "state law trespass to chattels claims" as well as "copyright infringement, misappropriation, unjust enrichment, conversion, breach of contract, or breach of privacy."

Regardless of how this case plays out, it's worth watching closely. Automated website access is nothing new, but agentic AI, capable of making autonomous decisions on a user's behalf, raises questions courts have not yet fully confronted. How to allocate legal responsibility for the acts of an AI agent will likely be litigated well beyond the CFAA, and this case may be just the beginning.

