Class action lawsuits have long been a means of enforcing privacy rights, as privacy harms "are often spread among a larger group and harder to identify than typical consumer harms." Because privacy harms can be "small," class actions enable plaintiffs to collectively enforce their rights, distribute litigation costs amongst a large group, and impose larger penalties on defendants.
Recently, privacy class action lawsuits have been filed against artificial intelligence-powered transcription services under the Electronic Communications Privacy Act and the California Invasion of Privacy Act. Originally enacted decades ago to protect telephonic and electronic communications from unwarranted surveillance, these anti-wiretapping laws have found new significance with the emergence of automated technologies capable of eavesdropping. This evolution has been spurred by a creative plaintiffs' bar, which continues to find new applications of old privacy laws to pursue defendants.
AI trained on intercepted data may violate ECPA
The ECPA updated the Federal Wiretap Act to cover digital and electronic communications. Under Section 2511(1)(a), the ECPA prohibits the intentional interception of any wire, oral, or electronic communication using a device, subject to an exception from liability under Section 2511(2)(d) where one party consents to the interception.
While the CIPA is similar in scope, the state law contains a narrower consent exception, requiring that all parties consent. Even so, this distinction may not matter where a wiretap is carried out for the purpose of committing a tort, such as intrusion upon seclusion or conversion, a purpose that eliminates the one-party consent exception under the ECPA.
In Brewer v. Otter.ai, plaintiff Justin Brewer filed a complaint alleging that Otter Notetaker, which transcribes virtual meetings for Otter account holders through a broader transcription service called OtterPilot, records private conversations between account holders and meeting participants who are not Otter subscribers. The complaint alleges this conduct violates the ECPA and the CIPA and constitutes intrusion upon seclusion and conversion, among other causes of action.
When it joins a meeting, Otter Notetaker allegedly seeks consent from the meeting host to join and record the meeting. Under default settings, however, consent is not sought from non-host attendees. Additionally, Notetaker allegedly does not notify participants of the real-time transmission of their data to Otter for purposes of transcription and the improvement of its automatic speech recognition, or ASR, and machine learning models. The complaint asserts that Otter's privacy notice claims its AI models are trained on deidentified audio recordings but questions the efficacy of such deidentification methods. Moreover, the complaint alleges the privacy policy places the responsibility of obtaining consent on Otter account holders who use Notetaker.
As for Brewer's particular experience, the complaint alleges he joined a Zoom meeting where Notetaker was used by a meeting participant without his consent. The complaint maintains Brewer is not an Otter account holder and had no reason to suspect its usage, nor was he informed that Otter would obtain his conversational data.
Under ECPA Section 2511(1)(a), Brewer claims Otter's actions were intentional. Specifically, he claims that Otter was aware it was intercepting private communications, and that his conversational data, as a "transfer of data by a wire that affects interstate commerce," falls within the statutory meaning of electronic communications as defined in ECPA Section 2510(12). Moreover, Brewer asserts the Notetaker is a "device" and that "Otter was not an authorized party to the communications" because he was not aware of, nor did he consent to, the collection of his data.
Perhaps most importantly, Brewer claims Otter's use of his conversational data to train its machine learning systems amounts to serious privacy violations, namely intrusion upon seclusion and conversion, "removing any statutory immunity Otter may otherwise claim under the ECPA." Essentially, he attempts to preempt Otter's anticipated defense that at least one party to the communication, such as the Otter account holder, consented to the interception, which would otherwise eliminate ECPA liability.
Brewer's preemption theory relies on the final clause of ECPA Section 2511(2)(d), which states that one-party consent does not eliminate liability where interception occurs "for the purpose of committing any criminal or tortious act in violation of the Constitution or the laws of the United States or of any State." The complaint alleges Otter used his communications for the purpose of training its AI systems, an assertion that could support the offensiveness element of his intrusion upon seclusion claim and the damages element of his conversion claim.
Thus, if the alleged interception amounts to tortious conduct, any one-party consent defense Otter may put forth would be neutralized. Intrusion upon seclusion requires a showing that the defendant intruded into a private matter in a manner highly offensive to a reasonable person. As noted, Brewer alleges Otter intruded on a private matter by placing itself at the center of a conversation to which it was not an authorized party. He argues Otter's alleged violations of federal and state law demonstrate the highly offensive nature of the intrusion, especially when he was never given the opportunity to opt out of the interception and, as discussed, Otter undertook the interception for "undisclosed motives," including ASR training and machine learning.
The tort of conversion, on the other hand, requires that a defendant interfere with the plaintiff's property with the intent to permanently deprive the plaintiff of that property, thereby causing damages. Brewer claims he had a property interest in his personal conversational data and did not consent to Otter's appropriation of that property for the company's own financial gain, resulting in alleged damages.
Additionally, Brewer brought a claim under CIPA Sections 631 and 632, alleging Otter unlawfully intercepted messages in transit and unlawfully eavesdropped on confidential communications, respectively, without consent of all parties.
The complaint alleges Otter intentionally intercepted Brewer's communications while they were in transit without the consent of all parties. Because Otter is a separate legal entity and not merely a passive recording device, the complaint contends, it is a third party to the communications. Furthermore, the plaintiff alleges he had an objectively reasonable expectation of privacy while engaged in private virtual meetings and that Otter's violations of the CIPA amount to an invasion of privacy sufficient to confer Article III standing.
The complaint also asserts claims under the federal Computer Fraud and Abuse Act, California's Comprehensive Computer Data Access and Fraud Act, and California's Unfair Competition Law. The court has yet to weigh in on the merits of the case, and the parties agreed by stipulation to extend Otter's deadline to respond to Brewer's complaint, and a host of consolidated complaints, to 10 Nov. 2025.
Liability for CIPA violations may extend beyond the software to the vendor itself
Vendors that provide AI-powered services to transcribe and analyze call data for enterprise customers may consider themselves too removed from the conversation for CIPA liability to apply. But where that vendor is capable of using the data for its own purposes, CIPA immunity is not guaranteed.
In Ambriz v. Google, the U.S. District Court for the Northern District of California denied Google's motion to dismiss the complaint, holding that plaintiffs "adequately alleged Google acted as an unauthorized third party to Plaintiffs' calls in violation of CIPA." The plaintiffs alleged they placed customer service calls with various businesses, such as Verizon and Home Depot, that were supported by the Google Cloud Contact Center AI. The Cloud Contact Center AI offers a virtual agent and support to human agents by providing transcripts of the call and smart replies to caller questions; however, the complaint alleged these services are not disclosed to the caller. The plaintiffs further alleged they did not consent to Google transcribing and analyzing their phone calls.
First, the court rejected Google's argument that it was not an unauthorized third party within the meaning of CIPA Section 631(a). The court distinguished the extension test, under which a software vendor qualifies as a third party only if it actually uses the intercepted data for its own purposes, from the capability test, under which the vendor need only have the capability to use the data.
Deeming the capability test more appropriate for claims under the first and second clauses of Section 631(a), which, unlike the third clause, do not require actual "use … or attempts to use," the court found Google was capable of using the data for its own purposes, as Google's own terms of service acknowledge and as the company did not dispute.
Next, the court found the plaintiffs adequately alleged Google, through its AI software, records and analyzes call data in real time, making it the statutory "person" engaged in the prohibited conduct. For a similar reason, the court held that the plaintiffs adequately alleged Google, as opposed to the Google Cloud Contact Center AI, read call data in transit. Moreover, the court disagreed with Google's argument that calls from a smartphone fall outside the CIPA, noting precedent that distinguishes a smartphone's internet capabilities from its telephone capabilities and finding the latter falls squarely within the statute. Because the plaintiffs alleged Google eavesdropped on their "calls," implying the use of their smartphones' telephone capabilities, the alleged conduct is captured by the CIPA.
Finally, the court turned to the plaintiffs' claims under CIPA Section 637.5(a)(1) and (h), which prohibit recipients of satellite or cable television subscriber information from monitoring conversations within a subscriber's residence without consent. Because Section 631(a) exempts telephone companies from liability, the plaintiffs relied on Section 637.5 for claims arising from calls to Verizon, a Google Cloud Contact Center AI customer.
The court noted that other Northern District of California courts have found Verizon qualifies as a "provider of cable or satellite television services." It also held that customer service calls plausibly involved the sharing of subscriber information and that the plaintiff's call from his home satisfied the subscriber's residence requirement. Therefore, the Section 637.5 claim against Google was allowed to proceed.
In its answer to the complaint, Google raised various affirmative defenses, including consent and the necessity of its action for rendering services. It also argued the information shared by plaintiffs was not private and, therefore, the plaintiffs lacked standing.
Somewhat relatedly, Apple faced allegations that its virtual assistant Siri, in violation of the CIPA, recorded users' conversations without being prompted to do so and shared the audio with human third parties to improve Siri's functionality and refine ad targeting. The plaintiffs and Apple agreed to a USD95 million settlement, though Apple denied any wrongdoing. When the alleged violations first came to light in 2019, Apple suspended the default practice of grading, the human review of audio data Siri collects, and instead allowed users to opt in to reviews conducted by Apple employees, rather than third-party contractors, to improve Siri. Further, Apple assured users that Apple Intelligence, its new AI system that assists Siri, primarily runs on users' devices. For more complex queries, Private Cloud Compute ensures user data is not shared with Apple and is used only to fulfill a specific request.
What can entities providing AI-powered transcription services learn from these cases?
Subject to further judicial guidance, companies should note that the one-party consent exception to the ECPA is more limited than it may appear. As the Brewer complaint argues, using intercepted data to train machine learning models may support a claim for intrusion upon seclusion, particularly when the intrusion is carried out for the financial benefit of the defendant, thereby increasing its offensiveness. Likewise, when a defendant profits from its interference with a plaintiff's data, that result may demonstrate actual damages and fortify a conversion claim. In either case, an interception carried out for the purpose of committing a tort nullifies the one-party consent exception under the ECPA.
Next, the mere capability to use data gathered from a prohibited interception is likely enough to deem an AI vendor an unauthorized third party to a communication under the CIPA. In the Ambriz case, the court applied the broader capability test rather than the narrower extension test, exposing vendors to CIPA liability based solely on their technical ability to access the data, even if they never use it. Furthermore, the real-time recording and analysis of data is a major factor in determining CIPA applicability, as it can qualify AI vendors as persons and communications as in transit under the statute.
Notably, smartphones contain multitudes. As the Ambriz court found, a smartphone's functions can be divided into internet capabilities and telephone capabilities; while the former may not fall within the scope of the CIPA, the latter does. And when plaintiffs use their smartphones to make calls, they are likely using the telephone capabilities of their devices.
As the court in the Brewer case prepares to address deidentification, the line between identifiable and deidentified data remains blurry. For example, HIPAA's privacy rule applies a reasonableness analysis, deeming health data deidentified if it "does not identify an individual and if the covered entity has no reasonable basis to believe it can be used to identify an individual." On the other hand, the FTC holds that hashing does not make data anonymous, though that standard may be stricter than mere deidentification. In either case, a non-zero chance of reidentification still exists and, as the Brewer complaint points out, the question of whether certain methods result in bona fide deidentification remains up for debate.
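To see why the FTC treats hashing as something short of anonymization, consider a minimal Python sketch; the email address, dataset names, and normalization step are hypothetical and chosen purely for illustration. Because a cryptographic hash is deterministic, the same input always yields the same token, so two separately "deidentified" datasets can still be joined on that token and traced back to one individual.

    import hashlib

    def hash_identifier(value: str) -> str:
        # Normalize, then hash with SHA-256, a common "deidentification" step.
        return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

    # Two hypothetical datasets, each "deidentified" with the same scheme.
    call_transcripts = {hash_identifier("jane.doe@example.com"): "transcript of call"}
    marketing_profile = {hash_identifier("Jane.Doe@example.com "): "frequent caller"}

    # The hash is deterministic, so the records still link to the same person.
    for token in call_transcripts:
        if token in marketing_profile:
            print("Same token appears in both datasets: pseudonymous, not anonymous.")

Any party holding the original email address, or even a list of candidate addresses, can recompute the hashes and reidentify the records, which is why a non-zero chance of reidentification persists under hashing alone.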
Finally, placing the obligation to obtain consent on users may be permissible from a legal perspective but is a poor strategy from a reputational standpoint. Companies should instead obtain prior consent from all parties involved in a communication, including for virtual calls.
To avoid liability and the punishing consequences of class action litigation, entities that provide AI-powered transcription services should heed these lessons.
Will Simpson, CIPP/US, is a Westin Fellow at the IAPP.
