The IAPP’s "Profiles in Privacy" series features a monthly conversation with a notable privacy professional to discuss their journey in privacy, challenges and lessons learned along the way, and more.

In 2011, Kay Firth-Butterfield read a Time magazine article titled "2045: The Year Man Becomes Immortal," which projected that computers would reach human-level intelligence by 2030 and that, by 2045, "the quantity of artificial intelligence created will be about a billion times the sum" of all human intelligence.

A former barrister-at-law in England and professor in the U.S., Firth-Butterfield had focused on human rights and human trafficking. The article got her thinking about what it would look like for human beings to live with machines that powerful, and about the important role governance should play.

In the 14 years since, she has become a pioneer and leading expert in the field, receiving a TIME100 Impact Award in 2024 for her role in helping to shape responsible AI governance.

In 2014, Firth-Butterfield became the world's first-known chief AI ethics officer for an AI startup in Austin, Texas, and subsequently served as head of AI at the World Economic Forum and co-founded the Responsible AI Institute. She is one of the authors of the IEEE Standards Association's "Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems," and has helped to create resources — including toolkits and guidelines — to foster responsible AI.

"The golden thread that runs throughout my career is I care very much about people. I care very much about the planet and what the future of this planet and our kids is," said Firth-Butterfield, now CEO of consultancy firm Good Tech Advisory. "I personally would like to try and make this planet a better place to live for everybody, and so, that means, really, my life has been dedicated to protecting both."

While Firth-Butterfield has long discussed the benefits and potential harms of AI technology, the need for good governance and what that governance should look like, she said the conversation around AI governance grew markedly in recent years after the EU created its AI Act and the AI chatbot ChatGPT took off.

"We could do things today that set us on the right path for 2045 or 2050. Equally, we probably could have done things in 2011 for now and for later," Firth-Butterfield told IAPP AI Governance Center Managing Director Ashley Casovan during a conversation on LinkedIn. "But we sort of don't do it and then we're all scrambling, as we are doing now."

While the EU established the first formal rules around AI, countries throughout the world, including the U.S., and more prominently states within it, are grappling with AI governance even as the technology itself continues to advance rapidly.

About one-third of U.S. states have some form of AI legislation, Firth-Butterfield said, whether around AI in education or AI in human resources, for instance, or more comprehensive legislation like the Colorado AI Act, which is set to take effect in 2026.

Meanwhile, companies like ChatGPT maker OpenAI and Google are working to create superintelligent AI, she said, even though society has yet to say whether that is something it wants.

"Should that be a commercial imperative and should commerce make that decision, or should we the people and our elected governments be able to make that decision? It's such an important decision that I do think it should be made by us," she said.

Firth-Butterfield said she believes strongly in education around responsible AI and often speaks to organizations and leaders of companies, hospitals, governments and more about a wide variety of topics around AI, including its environmental cost and "the things it can do and the things it can't do so well." She said she encourages people "to really think through carefully what they are asking the vendors who are providing them with AI — so good procurement practices."

"This is foresight, this is really thinking about the future of their business and how good governance is going to play a huge part in getting the AI story right," she said, adding there are leadership and management decisions and considerations, as well. "I'm asking them to think about what happens in their businesses when they're evaluating ability or leadership roles and things like that, because if everybody has become superhuman, i.e., everybody is using AI, how do you choose your next leaders."

Good governance and wise deployment of AI go beyond risk and compliance, Firth-Butterfield said, and are something everyone within an organization should be "coming together over." Most companies have a mission statement, from which they can build principles around AI. She said an educational AI program for employees can also be beneficial, helping them better understand the technology and how they can, and should not, use it.

"It takes away the fear of AI as a tool and it'll also help people recommend, you know, we could use it this way, we could use it that way. But if nobody actually understands it, and everybody fears it, it's not a good environment you're introducing into the company," she said.

Firth-Butterfield said she is "laser focused" on getting all organizations to adopt good policies and practices around AI. She is also writing a book on where humanity fits with AI, currently titled "Co-Existing with AI — Your guide to working, playing and loving with AI," and, as a breast cancer survivor, is working on a paper about how AI might "wisely" fit into the services an oncologist offers.

"I believe that AI can be a fabulous tool, but only if we do good governance and we are extremely thoughtful. I've started using the word 'wise' about how we use AI," she said. "Those are the sort of things that keep me passionate and keen to go to work every day."

That work was acknowledged in a significant and meaningful way, Firth-Butterfield said, through the TIME100 Impact Award.

"Lots of people can work in the business of AI but actually being credited with having made an impact in the governance of AI, that was huge," she said. "And of course, there are hundreds of people who worked with me along the way, so it felt as if it was an award for everybody who has been thinking about governance."

Jennifer Bryant is the associate editor for the IAPP.