Op-ed: AI governance rules are being written without you

AI governance is increasingly shaped by procurement decisions, security exceptions and diplomatic pressure, forces proving more decisive than any regulation currently on the books.

Contributors:
Náthaly Calixto
AI governance and geopolitics specialist, independent consultant
Editor's note
The IAPP is policy neutral. We publish contributed opinion pieces to enable our members to hear a broad spectrum of views in our domains.
If you work in artificial intelligence governance or privacy outside the United States, the past two weeks should have your full attention. Not because of any new regulation, but because of three events that reveal how the rules are actually being written.
First, the Pentagon reportedly designated Anthropic a "supply chain risk," a label typically reserved for foreign adversaries threatening critical military infrastructure. The reason was not a security breach. It was a contract dispute: Anthropic had sought to restrict how Claude could be used in military operations, and the Department of Defense responded by treating that disagreement as a national security problem.
Second, Claude was reportedly used in U.S. military operations against Iran and in operations in Venezuela, in both cases apparently beyond the usage restrictions its own developer had attempted to set. Details remain unconfirmed by either Anthropic or the Pentagon.
Third, on 25 Feb., the Department of State instructed U.S. diplomats worldwide to actively oppose foreign "data sovereignty" and data-localization initiatives, framing cross-border data governance not as a legitimate regulatory choice, but as an obstacle to American interests.
None of these events involved legislation. None went through a transparent rulemaking process. And none included any mechanism for input from the governments or populations most affected by their consequences.
Yet together, they are shaping the operative rules of AI governance more decisively than any regulation currently on the books.
What this means for privacy and AI governance professionals
For practitioners working in AI governance and data protection, especially in Latin America and the Global South, these developments are not distant geopolitical headlines. They have direct, practical implications.
Procurement is becoming the real regulatory layer. When the terms of AI deployment in the most consequential contexts — military operations, intelligence, critical infrastructure — are set through vendor contracts and security designations rather than public regulation, the governance framework that matters is the one embedded in the contract, not the one in the statute book.
For privacy professionals advising organizations that procure U.S.-built AI systems, this means the compliance baseline they work with may be shaped by terms they have never seen and had no role in negotiating.
Security exceptions are expanding, not contracting. The reported use of Claude in military operations, apparently beyond developer-imposed restrictions, signals something broader than a single procurement dispute.
It suggests a pattern in which national security framing overrides both corporate governance commitments and the technical safeguards AI companies present as evidence of responsible deployment. When security exceptions become the norm rather than the exception, the entire framework of "responsible AI" that governance professionals rely on loses predictive value.
"Free data flows" diplomacy is shrinking your policy space. The State Department cable did not just promote open data transfer; it told diplomats to actively push back against data sovereignty initiatives.
These are the same frameworks that countries across Latin America and the Global South are building right now to give their citizens a say in how AI systems handle their data.
If you are working on data protection policy in the region, the message is clear: your regulatory options are being narrowed — not by law, but by diplomatic pressure and what amounts to digital protectionism in reverse.
The deeper problem: Algorithmic governance dependence
Taken together, these three developments point to a structural challenge that goes beyond any individual policy dispute.
The stakes are enormous. A recent World Economic Forum and McKinsey report estimates AI could raise Latin America's productivity by 1.9% to 2.3% annually and generate USD 1.1 trillion to USD 1.7 trillion in additional economic value. This is a transformative opportunity for a region where productivity growth has averaged just 0.4% per year over the past quarter century.
But the same report warns that without structural reform, talent development and regional coordination, AI risks becoming another missed technological wave. The critical question the report does not fully address is: on whose terms will that AI adoption happen?
Countries in the Global South that adopt U.S.-built AI systems are not simply acquiring tools. They are inheriting governance logic — which vendors are "trusted," what uses are permitted, what data flows are required — that was defined upstream through contracts, security designations and diplomatic negotiations in which they had no voice.
I call this algorithmic governance dependence: a dynamic in which AI adoption generates productivity gains and modernization, but without technological sovereignty or meaningful participation in the governance decisions that shape how these systems operate. It is a new form of structural dependence, one that operates through code, contracts, the imposition of ethical and technical standards, and cloud infrastructure rather than through traditional economic mechanisms, but that echoes dependency patterns the region knows well.
And it is more fragile than it appears: as South Korea's recent warning that the Iran conflict could disrupt semiconductor manufacturing materials illustrates, the physical supply chains underlying AI infrastructure are themselves vulnerable to the very geopolitical instability that procurement-driven governance accelerates.
Initiatives like Latam-GPT, the open-source language model developed by Chile's National Center for Artificial Intelligence with contributions from 15 Latin American countries, show the region is not passively accepting this dynamic. Built with regionally sourced data in Spanish, Portuguese and eventually Indigenous languages, Latam-GPT represents a real effort to build AI infrastructure that reflects local contexts rather than importing Silicon Valley's assumptions.
But it also illustrates the depth of the challenge: until its planned regional supercomputer becomes operational, the model runs on Amazon Web Services. In other words, sovereignty in ambition, dependency in infrastructure.
The gap between this regional model and the ecosystems of U.S. AI companies is not just technical. It is structural, and it is precisely the kind of asymmetry that procurement-driven governance deepens.
The practical consequence for privacy and AI governance professionals in the region is stark. You can build the most sophisticated data protection framework in the world, but if the foundational terms of the AI systems deployed in your jurisdiction were set by a Pentagon procurement contract or a State Department cable, your framework is governing the surface while the operating logic runs underneath.
What can be done
This is not a call for despair; it is a call for strategic clarity. Privacy and AI governance professionals, particularly those working in multilateral and regional contexts, can respond in concrete ways.
First, treat procurement as a governance issue, not just an operational one. When governments or institutions in your jurisdiction procure AI systems, the contract terms are governance decisions, and should be subject to the same transparency and accountability standards as any regulatory action.
Second, build regional coordination on AI governance that accounts for upstream power asymmetries. Initiatives like Latam-GPT and regional AI strategies emerging from bodies like the Development Bank of Latin America and the Caribbean suggest that the institutional appetite exists. But coordination must go beyond technical capacity-building. It must include collective frameworks for evaluating and negotiating the governance terms embedded in the AI systems the region adopts, not just compliance checklists designed for the jurisdictions where those systems were built.
Third, insist on auditability as a non-negotiable condition. If AI systems are being deployed in high-stakes contexts under security exceptions that override developer restrictions, the minimum democratic requirement is that those exceptions be documented, reviewable and subject to institutional oversight.
Fourth, don't forget civil society. When institutional oversight is weak or absent, civil society organizations are often the only actors monitoring how AI systems are actually being deployed, giving affected communities a seat at the table, and pushing back when things go wrong. For that to work, they need sustainable funding, real partnerships with regulators and enough independence to challenge the actors they oversee. Without these voices, AI governance risks serving only those powerful enough to set the terms — whether through procurement contracts in Washington or surveillance tools rebranded as sovereign development elsewhere.
The question is no longer whether AI governance will be shaped by geopolitics. It already is. The question is whether privacy and governance professionals will have the tools, the frameworks and the political space to ensure that governance means something more than a contract clause drafted in a language their constituents never agreed to.
