Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Over the past several months, I have found it interesting to follow shifting terminology use in the artificial intelligence governance ecosystem. To be clear, this is a mental list, not the detailed spreadsheet that would be my usual preference.  

Having had the opportunity to work within the AI governance community since 2018, I have seen words like ethics come and go. Accountability, safety and even responsibility seem to have been replaced by action, innovation and impact.  

I provided the above caveat to indicate that this trend should not be taken as empirical analysis, but merely as an observation. I find these terminology changes particularly interesting to track because they are not simply interchangeable terms gaining or losing popularity. These word choices signal the current drivers and motivations for why AI needs to be governed.  

These terms have seemed to fluctuate to match the politics of the day. At the same time, public sentiment has played a crucial role in shaping AI governance practices.   

One word that has been a mainstay in the AI governance space is trust.  

While actual empirical analysis might indicate bouts of increased popularity over the years, public trust was the main motivation behind the AI policy work I was involved with while working in the Government of Canada.  

A spring 2018 Ipsos survey helped us understand the policy problem we were trying to solve. Results indicated 51% of Canadians were skeptical of new technologies, including AI. However, a favorite finding from this survey was that "the more it matters the lower the support for AI." This was an early indication that when AI is being used for what we now refer to as high-risk operations — health care, insurance, job decisions, etc. — people are more wary of using it, or of it being used on their behalf.  

While I doubt an AI governance report or article has been written since 2018 that hasn't used the word trust at least 20 or 200 times, I have recently found that use of the term has been reinvigorated. And while a couple of surveys previously asked questions about trust, several recent public sentiment surveys have focused specifically on "trust in AI."  

How trust leads to adoption

The number of these surveys popping up was fascinating, but I also found it striking how much the conversation has matured. It seems there is no getting around trust if we want increased AI adoption.  

While earlier surveys asked general, blanket questions about how people felt about AI — even its use in different sectors — more detailed surveys now span regions across the world and explore what causes trust, or a lack of it.   

The University of Melbourne and KPMG recently published a global study entitled "Trust, attitudes and use of artificial intelligence." It compared rates of trust to rates of acceptance and found compelling outcomes. For example, while people are concerned about the safety and security of AI systems, they also trust their technical capabilities. Not far from the 2018 numbers in Canada, this survey found 54% of respondents don't trust AI.  

However, 72% accepted the use of AI. This provides intriguing insights for AI governance professionals. While people have reservations about using AI systems, they are willing to take the risk.  

Imagine the acceptance rates if people had greater trust.

Just days ago, a New York Times article by Robert Capps identified that roles focused on ensuring trust in AI systems will be in high demand in the future. 

Why AI literacy matters

The University of Melbourne and KPMG survey also looked at AI adoption rates when there is a higher degree of AI literacy. Unsurprisingly, the study indicates higher rates of AI use among people who identified as AI literate.  

In February, Salesforce updated its Generative AI Statistics for 2025. The software company found 61% of surveyed desk workers used generative AI. However, 73% believed generative AI introduced new security risks, with 60% indicating they don't know how to use the technology in a way that keeps sensitive data secure or gives them confidence in the data source.  

These reports and others start to dig into the source of concern.  

If you don't already follow the AI Incident Database, it's a great tracker of reported AI incidents with a regular roundup of trends.  

The trends I see in the April and May roundup — including financial fraud, deepfakes and disinformation, legal and institutional misuse and exploitation through generated content — are well-aligned with the concerns cited in these public sentiment surveys. This indicates people are becoming more aware of the issues as the use of these systems increases. 

For those interested in this topic, I recommend reviewing the studies above in more detail as there are compelling statistics that could be relevant to your work — the regional divides when it comes to rates of adoption and areas of concern are particularly interesting.   

As I mentioned, several surveys on trust in AI have recently come out. I wanted to capture some of the key themes and points of interest, but there were too many to include here.  

Here's what I'm currently reading on the topic: 

Where AI governance professionals will be crucial

Circling back to word choices, I'm sure we have all been in a situation where we are using the same word to mean different things. While it can be frustrating, this common issue is exacerbated in the AI governance context, especially given the vast number of people involved across the AI value chain.  

When individuals from different roles, levels of technological maturity, perspectives and often regions of the world come together to work on the same product, not only will their understandings of trust issues differ, but agreeing on solutions to those challenges can be difficult as well.  

As Capps' New York Times article indicates, among the future jobs that will result from increased AI use, it will be crucial to have someone who can translate between these different roles and perspectives — not only for effective AI product development, but also to build a product people trust.   

Ashley Casovan is the managing director, AI Governance Center, for the IAPP.

This monthly column originally appeared in the AI Governance Dashboard, a free weekly IAPP newsletter. Subscriptions to this and other IAPP newsletters can be found here.