
The second wave of AI governance: The risks of ubiquitous transcription tools

Noga Rosenthal explores the risks of employees using AI transcription tools in the workplace and offers a starting place for organizations that may not yet have addressed the AI governance and privacy implications of these tools.


Contributors:

Noga Rosenthal

AIGP, CIPP/E, CIPP/US

General Counsel and Chief Privacy Officer

Ampersand

A manager begins a video call with an underperforming employee to deliver difficult feedback. Thirty seconds in, a notification pops up: "Otter.ai has joined the meeting." The employee didn't ask for permission. Every word of that conversation is now being transcribed, processed and stored. Or, posing an even greater risk to the company and the manager, the employee is secretly recording the conversation on their cell phone via an AI transcription tool. 

If this scenario sounds familiar, you're not alone. And if it doesn't, it's probably only a matter of time.

We've entered the second wave of AI governance

Most organizations have completed the first phase of AI governance: setting up cross-functional AI governance committees, drafting AI governance policies around the use of new AI tools and establishing guardrails around what information employees can put into AI tools. Ideally, employees already know by now that they can't paste an Excel sheet containing a company-wide list of employees' Social Security numbers, or information about a medical accommodation, into free AI tools.

We've probably congratulated ourselves on this work only to realize that it was the easy part.

The harder challenge is emerging now: managing how employees actually use these AI tools during the workday, especially as the tools capture whatever information employees put into them. Nowhere is this more pressing than with the transcription and recording tools that have become fixtures of every meeting.

The privacy problem

Tools like Otter, Fireflies, Microsoft Copilot and Zoom's transcription service have become ubiquitous. They're genuinely useful, helping teams capture action items, creating searchable records and freeing participants to focus on conversation rather than note-taking.

They're also creating privacy land mines that many organizations haven't addressed.

The core issue is the possibility that employees are now recording and transcribing every team meeting, every sensitive conversation, every one-on-one. Often, they record these sessions without letting other attendees know.

From a privacy and AI governance perspective, this creates several risks.

First, there are consent gaps. Recording laws vary by state. In two-party or all-party consent states like California and Illinois, every participant must consent to a recording. In one-party consent states, only one participant needs to consent. But with employees scattered across multiple states, expecting employees to determine which law applies to any given call is complex and unrealistic. The practical answer is to treat every meeting as if it requires consent from all parties. The penalties for violating wiretapping statutes can include criminal liability and civil damages. Notably, most AI transcription tools on video conferencing calls announce that the call is being recorded.

Second, employees are recording conversations that involve medical information, performance issues, compensation discussions and HR matters, which are sensitive data. This data is being processed by third-party AI tools and stored in locations that may not have appropriate access controls. Similar concerns exist with attorney-client privilege. At the very least, the AI governance committee should review where these recordings are saved and determine who has access to the recordings and transcripts. 

There are also risks tied to data minimization and data retention. Privacy principles call for only collecting the data necessary for a defined purpose. AI transcription tools do the opposite: They capture everything, creating permanent records of conversations that were never intended to be memorialized. 

An organization's AI governance committee may determine that certain departments should not use AI transcription tools at all or train employees not to use them in certain situations such as during sensitive human resources discussions.

The AI governance committee also needs to determine how long these recordings are kept. Recordings may even sit indefinitely in employees' personal tool accounts, creating discoverable records and potential breach exposure.

The existence of a recording changes the dynamic of a conversation. In certain contexts, it can undermine the very purpose of the discussion. For instance, recording performance improvement plans can discourage candid feedback between manager and employee, and statements from these meetings may later surface in litigation. Similarly, medical information shared in accommodation conversations may be protected by various laws and privacy regulations.

Under the assumption that most employees are using AI transcription tools, here's where to start if your organization hasn't yet addressed their AI governance and privacy implications:

  1. Update the AI policy to specifically address recording and transcription, including requiring consent and prohibited use cases. You may even consider making this a separate policy.
  2. Create a list of "Do Not Record" meeting types and communicate it across the organization.
  3. Review the recording and transcription retention period and storage location. Where are recordings stored? For how long? Who has access? Align retention policies with your broader data governance framework.
  4. Establish a feedback loop, enabling you to adapt your policies and allowing employees to give feedback as new tools emerge and employee behavior evolves.

The first wave of AI governance was about controlling what goes into AI tools and the type of AI tools employees are using. The second wave is about managing what AI tools capture from us. AI governance and privacy professionals who get ahead of this issue will protect their organizations from preventable exposure and preserve the trust that makes difficult workplace conversations possible.


