Ashley Casovan saw the importance of building trust in governance early.

She grew up in the Canadian province of Alberta, where she said people can be wary of government intervention. That attitude instilled in Casovan a desire to expand government transparency and work to restore confidence in the system, especially with regard to the government's use of data.

Casovan now continues that work at the IAPP, where she was recently hired as the AI Governance Center's managing director. The role positions her as the IAPP's public voice on artificial intelligence governance and as a conduit between industry and policy leaders, as well as international organizations. She takes it on amid an international push to regulate AI and an ongoing expansion of the technology's capabilities.

"The IAPP has been working tirelessly to raise awareness of the importance AI governance will have for organizations," President and CEO of IAPP J. Trevor Hughes, CIPP, said in a statement, calling Casovan's expertise "invaluable" in that mission.

Casovan began her career in the public sector working on digital government initiatives, first at the City of Edmonton and then with the Canadian federal government, where she eventually turned her attention to AI. She comes to the IAPP from her role as executive director of the Responsible AI Institute, a nonprofit that advocates for and develops certifications to reduce harm from the use of AI systems.

During her time with the government of Canada, she led development of the Directive on Automated Decision-Making, the world's first policy governing a government's use of AI systems. She has also served as an advisor on ethical AI integration for the U.S. Department of Defense, the U.S. Department of Labor, the Organisation for Economic Co-operation and Development and the World Economic Forum.

While AI has grown exponentially since her work in Canada, Casovan said the policy remains sound today and could serve as a useful framework for other countries as several race to develop their own governance policies. But the world of AI has changed since then.

"We have seen the technology change significantly since we drafted the directive," she said. "At the time, we didn't see technologies like ChatGPT. And there's been a lot of implications as a result of their generative capabilities, meaning different types of harms exist and therefore different types of mitigation measures need to be thought about."

The conversation around AI has also shifted more into the public sphere since then. Casovan chalked that up to rising awareness of generative AI systems, software that can create images, text, videos and other content from prompts after being trained on huge amounts of data. Until now, other types of AI have largely operated in the background of daily life, she said, or became relevant only to those directly affected.

"A lot of the work that we've been advocating for over past five to six years related to putting guardrails around AI systems, has been a niche chorus of people," Casovan said. "I think now that we see the real impact of those systems through day-to-day interactions, that's really been a game changer."

AI may be prolific in headlines these days, but Casovan said the industry is too often talked about as a monolith. That can hide the nuances of the individual technologies, she said.

"They all function differently," she said, "and to lump all of those things into one specific context makes it not only difficult to understand what AI even is, but how we should go about treating these systems from an oversight and government perspective."

In her new role, Casovan said she hopes to educate people on those AI issues to help build a robust AI workforce, noting a need to scale the professionalization of AI at the pace AI systems are being developed.

She said she looks forward to digging into the different cultural approaches nations are taking to AI, and to how she can apply her international experience at the IAPP.

"I think what the IAPP has done for privacy professionals is an incredible formula," she said. "And it can be applied to not only upskill privacy professionals, but many other individuals who are interested in expanding their AI skill set."