September, like many months in 2025, was a whirlwind of artificial intelligence governance activity. Whether by luck or great strategic foresight on the part of AI event organizers, I had the opportunity to be in some of the cities where headlines were being made. Being close to the action always provides a unique perspective and the ability to gather local insight. Below, I highlight some of the major AI governance announcements from the past month and share common themes I'm seeing across these regions.
London
First, I headed to London to participate in the techUK and Ada Lovelace Institute workshop on "Mapping the Responsible AI Profession." This was a great opportunity to meet with civil society, government, compliance and technology leaders.
While there, I heard significant chatter about the much-anticipated U.K. and U.S. tech partnership, which days later was revealed as the "Technology Prosperity Deal." This memorandum of understanding focuses on a variety of potential joint initiatives: sharing data for mutual medical advancement, using AI for space exploration, and making significant AI infrastructure investments, including a commitment from U.S. tech firms to a combined 31 billion GBP investment in data centers, computer chips and advanced tech. As we saw with the U.S. AI Action Plan, the top line of this agreement is to accelerate AI.
Though not in the headlines, AI governance policy did get a mention. The direction is clear: the emphasis will be on "advancing pro-innovation AI policy frameworks and efforts to support U.S. and UK-led AI technology adoption." How this direction will work with the U.K.'s Data Use and Access Act 2025 will be interesting to watch. This question came up during a recent discussion between my colleague, Director of Research and Insights Joe Jones, and Government Data Protection Leader Robin Edwards.
To build on the direction in this MoU, a ministerial-level working group will be established within six months, with the intent that the MoU be operative within 12 months.
Boston
During our AI Governance Global conference in Boston, more information came out of Washington about the proposed moratorium on state AI legislation, with Sen. Ted Cruz, R-Texas, announcing that the concept is "not at all dead."
This came days after the much-discussed dinner at the White House, where the biggest tech leaders met to discuss the state of AI in the U.S. While the dinner and the perspectives shared by those leaders dominated the headlines, the Government Accountability Office released a report on rules already in place within the U.S. government to guide AI development and oversight.
It was interesting to speak in person with AI developers, deployers and users at AIGG as they processed all of this information and sought to understand what it means for their adoption of AI.
From my vantage point, the acceleration message is clearly being received by companies. There is a strong desire to adopt AI, whether to improve internal processes or to deliver services. However, similar to what we are seeing in the U.S. federal context, the governance practices needed to support that adoption are still emerging.
Montreal
The Canadian AI community was out in full force for the ALL IN conference held in Montreal. I tried to capture the highlights, but many of the sessions were recorded and are now available for viewing on the conference site.
While it was incredible to see the advancement in AI research and projects taking place across many Canadian companies, universities and AI labs, the highlight for me was the direction shared by Minister of Artificial Intelligence and Digital Innovation Evan Solomon in his keynote address, which provided clues to future AI oversight.
Given his responsibility for AI and digital innovation, a key theme of his speech was trust. Using language similar to that of his international peers, he placed a strong emphasis on accelerating AI. However, this was coupled with the recognition that accelerating AI adoption in Canada depends on public trust, which is currently low.
He shared that, while AI legislation in the form of the previous Bill C-27 was controversial, there will be an emphasis on updating Canada's privacy legislation for the age of AI. What this means, and the extent of these changes, will be determined by a forthcoming AI strategy supported by a newly announced AI strategy task force.
With the goal of developing and adopting this AI strategy by the end of 2025, a public consultation is open until the end of October to gather feedback from Canadians on how they want to see AI designed and deployed. Many other informative discussions, including from leaders at Cohere, NVIDIA, Mistral and Yoshua Bengio's new venture LawZero, surfaced another key theme: AI sovereignty. Outside of nation-to-nation agreements, we are seeing this on the domestic agendas of many nations across the world.
San Francisco
There is no better place to meet with AI leaders than California. I thoroughly enjoyed my conversations and the panels at Credo AI's Agents of Trust Summit at the end of September, which coincided with the passage of new state AI legislation in California.
SB53 is a revised version of SB1047, originally introduced and championed by Sen. Scott Wiener, D-Calif. As my colleague Caitlin Andrews outlines in her article on SB53, this legislation creates new disclosure requirements for large model developers that may become precedent-setting for other states. While limited to catastrophic risks created by large model developers, these rules provide direction and best practices that other AI developers and deployers may choose to follow.
While there was initial speculation about whether Gov. Gavin Newsom, D-Calif., would support the bill, not only did he sign SB53, but he put out a press release noting the importance of AI being safe, secure and trustworthy.
SB53's scope may not reach a broad spectrum of AI uses or address every type of harm the community is concerned about. However, it demonstrates that U.S. state lawmakers are examining the impacts of AI and the types of rules they can establish for these systems.
What this means
While, yes, an anti-regulation sentiment around AI remains globally, we are starting to see exceptions and workarounds for how AI will be governed. I see this happening through two pathways.
First, by augmenting existing regulations to address AI concerns. For example, while Canada is not committing to AI-specific national legislation at present, it is seeking to review its privacy legislation, a review anticipated to address some AI issues. This is similar to what we are seeing in the U.K. with the Data Use and Access Act 2025, which addresses the acceptable use of copyright in AI.
There are also calls to augment sectoral legislation in perceived high-risk areas, such as health care, finance and transportation, to address the increased use of AI in those sectors.
As such, it is likely that in the near term we will not see comprehensive AI regulation, like the EU AI Act, in other regions of the world. Even in Europe, this month saw continued calls to slow enforcement of the AI Act in the name of acceleration.
Second, government and business leaders alike seem motivated to address the trust problem. Many have indicated they understand the simple equation: shaky consumer trust will be a barrier to AI adoption. In my first article in this series, I explored this topic of trust: how it has come back into use and how public trust in AI varies around the world.
Solutions often raised to address the trust problem outside of regulation include advancing AI literacy initiatives, developing standards and benchmarks, and building a more robust assurance market.
This leads me to believe that the combination of these soft regulatory efforts with updates to existing digital and sectoral legislation and guidance will be how guardrails for AI oversight evolve in the coming years.
Ultimately, this patchwork of soft and hard regulation will make the job of AI governance professionals more important, as organizations developing, deploying or using AI will need an accountable person, or group of people, who knows which rules and best practices to follow.