Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
I often have coffee chats with privacy folks whenever I get the chance. After a couple of sips and an exchange of familiar privacy clichés, the conversation always turns to "...and then there's AI."
We all know artificial intelligence governance is coming for us. Regardless of where you stand on whether privacy professionals should own AI governance, we can agree it is something we need to understand.
My friends often say, "I am not an engineer. How am I supposed to understand this? I hear the words model drift, bias detection, prompt injection, red teaming, and my mind just blanks out."
These conversations always make me smile, not because their fear is funny, but because I recognize it so deeply. I never considered myself a coder, nor claimed to be a technical expert. I have never touched Python in my life. And a couple of years ago, when I was first tasked with AI reviews, I had the same fear. I remember that feeling of horror.
The narrative around AI has been dominated by engineering voices and research. Entering through that door, it is easy for a privacy professional to feel like a guest in the AI space. But there is a secret no one tells you when you start working on AI projects.
Most AI governance work is not about algorithms. It is not even about understanding neural network architecture diagrams. It is about designing the right decision-making mechanisms, clarifying accountability, understanding the data processing purpose and making sure people are not harmed.
Do these terms sound familiar? Yes, because this is what privacy professionals do. AI governance is not too different.
As always, start by asking the right questions. After all, who knows better than privacy professionals how to ask questions about processes we are not familiar with?
When I started to build privacy-based AI governance, I expected to be a bit lost. And I will admit I googled a lot of terms or, frankly, asked AI to explain them to me. However, I realized very quickly that AI governance is about ensuring transparency, fairness, consistency, responsibility and traceability. Sounds familiar, right? Privacy professionals have been breathing these terms for years.
I understand privacy professionals' fear around not being technical. However, think about all the data protection impact assessments, or DPIAs, you have written about application programming interfaces, human resources programs, software and more. You were not an expert; you just understood what principles should guide these initiatives.
It is quite the same for AI. You do not need to build the model. You do not need to fine-tune it. You just need someone to explain why the model exists, what data feeds it, how the outputs impact individuals, and what safety nets are there in case things go wrong. That is not engineering. That is governance, and you have done this so many times over the years.
Think of a typical privacy workflow. You receive a new product intake request. You are not a product owner, so you ask questions. What data is collected, how does it flow, who accesses it, how long is it stored, how are individuals informed, what happens if someone exercises their rights, and how are risks mitigated? In the world of AI, you need to ask similar questions, but about the model itself.
There is another thing privacy professionals do exceptionally well. We challenge assumptions. We do not take things as they are. When someone says, "I need to collect this data," we immediately ask, "OK, but why?" This very simple intuition is our superpower. If an AI or technology team says, "Everyone uses this training set," a privacy person thinks about the purpose and the fairness justification.
AI governance is useless when no one is asking those questions. Privacy professionals ask millions of questions. Guilty as charged. Something interesting happens when you position an innovative privacy professional next to AI engineers or product owners: the privacy professional is appreciated, because those teams come to see that privacy is there to make things sustainable, defensible and trustworthy, not to slow things down.
What makes AI intimidating is not the work itself. It is the psychological barrier. In reality, privacy professionals just need to use the logic applied in day-to-day work. Privacy programs operate on structured frameworks, strong documentation habits, and a deep understanding of user impact. This is what is expected from AI regulations, too.
I will not ignore that there are some technical pieces when it comes to AI governance. Someone needs to understand model drift. Someone needs to validate fairness metrics. Someone needs to check privacy-preserving techniques or fine-tuning guardrails. It might not be you.
But think about it for a second. In the privacy world, we already operate this way. We partner with security, product, engineering, external vendors and internal auditors. We ask the right questions and make sure someone accountable provides the answers. We do not need technical heroism but coordinated ownership. Without this coordination, there will be gaps.
Ask yourself: How many times have you led projects to cover these gaps? Did you not talk with the engineering team to help address privacy shortcomings in an HR project? Did you not have a call with security to understand how data would be transferred securely in a migration project?
My message to every privacy colleague who has quietly wondered if they belong in conversations about AI is: Yes, you do. You are not a guest. You are not unqualified. You can protect humans from data misuse, algorithmic risk and model drift. If you can explain complex privacy risks to business leaders, you can explain AI risk too. If you have survived a round of DPIA comments from legal, product and engineering, trust me, you already have the patience and resilience needed for AI governance.
AI is exciting. It is collaborative. It is complex. It is the next chapter. And privacy professionals are natural leaders.
Azelya Tanriverdi, CIPP/E, CIPM, is the director of data privacy at Fitch Ratings.