In response to the Draghi report, weakened transatlantic cooperation and an AI arms race between the U.S. and China, the EU has been working assiduously to bolster its technological independence. As a recent study prepared for the European Parliament concluded, "the EU's reliance on non-European providers for foundational digital infrastructure … makes it inherently vulnerable to geopolitically driven coercion," meaning that political whims could cause unexpected restrictions and disruptions to essential services across the EU.

Efforts focused on artificial intelligence sovereignty are central to this aim. The concept of sovereignty in this context can refer to the capacity of a nation or region to control the use of its own data, software, hardware and infrastructure to align AI functionality with domestic laws and interests. Given global interdependence and an international AI supply chain, some experts argue that a hybrid model — where sovereign AI is deployed for critical applications like defense or health care, but not necessarily for commercial or research purposes — is most viable. 

The EU is already embracing this hybrid model, pairing regulations such as the AI Act — comprehensive binding rules that reflect sovereign European values and limit reliance on external AI governance models — with more interdependent and non-proprietary mechanisms — narrow, open-source models, international partnerships and federated infrastructure. As such, the EU is pioneering a pragmatic and integrated form of AI sovereignty that governance professionals should watch closely. This model not only diverges from those advanced by the U.S. and China, but it may also prove applicable to a variety of emerging national contexts. 

Existing regulatory levers

The EU is perhaps the world's leader in sovereign AI regulation and governance, which it exerts through binding instruments such as the EU General Data Protection Regulation and the AI Act. These regulations give the EU significant control over the data and algorithms that impact European interests and promote necessary, though insufficient, conditions for effective AI sovereignty.

The GDPR: Sovereignty over personal data

The GDPR imposes the sovereign expression of the EU's rights-based approach to processing personal data on entities within its scope, even extraterritorially. This includes entities along the AI supply chain, given that "many provisions in the GDPR are relevant to AI, and some are indeed challenged by the new ways of processing personal data that are enabled by AI."

Of relevance, Article 5 enshrines several fair information principles for the processing of EU data, including lawfulness, fairness and transparency. Among other obligations, these principles require that covered AI systems disclose the existence and purposes of data processing, provide decisions that are accurate and secure, and only process data upon a legitimate basis. In addition, Article 22 ensures the right not to be subject to automated decision-making that has legal or similarly significant effects on a data subject. This requirement is significant in that it introduces a general prohibition of automated decisions unless they fall within one of the broad, enumerated exceptions.

Notably, Chapter 5 of the GDPR outlines the standards that non-EU jurisdictions receiving EU personal data must meet. Hence, not only must AI developers that are deemed controllers or processors abide by European values, but also the jurisdictions in which those entities are located. 

The AI Act: Sovereignty over algorithms

Similar to the GDPR, the AI Act exerts sovereign control over algorithms in a comprehensive and extraterritorial manner. It imposes a risk-based approach — banning unacceptable-risk systems and placing obligations on high-risk systems and general-purpose models — on any firm along the AI value chain that wishes to access the EU market. These obligations include accountability, transparency, risk management and cybersecurity.

As such, the AI Act does not demand proprietary control of an algorithm; instead, it requires that any algorithm within its scope adhere to a set of values meant to protect the privacy, security and safety of EU citizens. Whether this becomes a global standard for algorithms, as the GDPR did for data protection, is less certain. Still, it defines the breadth of EU algorithmic controls and the cost of doing business in the EU internal market.

Other sovereignty regulations: The Data Governance Act and the Digital Services Act

The Data Governance Act supports the open circulation of data within the EU market while upholding fundamental values like privacy, trust and the altruistic use of data. For example, Article 31 supplements the GDPR data transfer provisions by adding restrictions on cross-border transfers of non-personal data. Covered entities are required to implement "all reasonable technical, legal and organizational measures" to prevent transfers or access to non-personal data that would contravene EU law, such as performing conflict assessments. Subject to international agreements, public sector bodies may permit the international transfer of covered data only if the recipient accepts the jurisdiction of the public body or the European Commission declares that the third country ensures protection of trade secrets and intellectual property equivalent to that of the EU. 

Demonstrating the impact such data control can have on AI development, IBM, which provides AI and cloud solutions, expressed concern that the DGA would "introduce data localisation measures" impacting innovation and investments and urged the European Commission to rely on international agreements, rather than adequacy decisions, to protect intellectual property rights. Likewise, the Information Technology Industry Council was "concerned about the proportionality of [the DGA's] measures, which may restrict the flow of non-personal data outside the EU." Even the European Parliamentary Research Service recognized that data, in combination with AI, is at the center of digital transformation and that initial proposals of the DGA could impose "onerous rules on international data transfers." These comments underscore the inextricable link between data and AI, as well as the disadvantages that data restrictions can impose on AI innovation and development.

The Digital Services Act, which also applies to providers that offer their services to the EU irrespective of their geographical place of establishment, imposes algorithmic requirements including reports on content moderation initiatives, non-deceitful design, prohibition of ad-targeting using children's or sensitive data and disclosure of the main parameters used for recommendation systems. Furthermore, large platforms are required to perform annual assessments on the systemic risks arising from their services. 

Experts have noted the ramifications of the DSA and its sister act, the Digital Markets Act, outside the EU: "[i]f you're a global company, and you have to deal with new obligations in one very big and crucial market, then similar features could be taken up elsewhere even though there's no hard requirement to do so." In other words, the DSA exports European values, such as fairness and transparency, and expresses sovereignty vis-à-vis algorithms, even to jurisdictions where its provisions are not necessarily applicable.

Shifting sovereignty sands

Leading the world in data and algorithmic regulation is not enough to ensure effective AI sovereignty. In fact, it arguably overextends a single dimension of sovereignty to the detriment of other dimensions like innovation and competitiveness. The EuroStack initiative supports this conclusion: "[t]o achieve true sovereignty and competitiveness, Europe must pair its legislative leadership with tangible advancements in independent, energy-efficient, and secure data infrastructure."

And as the "European Software and Cyber Dependencies" study describes, the EU still lacks vital infrastructure capacities, large-scale revenue outlets for frontier AI models, control over advanced AI chips, sufficient data availability, a robust talent pipeline and established investment channels to power these pillars of an AI ecosystem. Several proposals seek to address these shortcomings in the EU's AI sovereignty puzzle.

First, a regulatory rethink

In November 2025, the EU Digital Omnibus package included a proposal to simplify the GDPR, including an expansion of the legitimate interests legal basis to cover specific instances of training and deployment of AI models. If implemented, this change would arguably limit EU sovereignty over personal data in the context of AI development while simultaneously increasing the utility of EU personal data and enhancing the competitiveness of AI developers. The proposal would also clarify when automated decision-making is permissible under the GDPR, though without adding further limits to the existing Article 22 right.

Furthermore, the Digital Omnibus on AI Regulation Proposal has several sovereignty implications. For example, it would permit a broader set of AI systems to process sensitive data for the purpose of bias protection, lessen AI literacy obligations, remove a registration requirement for high-risk systems and delay implementation of rules for high-risk systems by a maximum of 16 months. These proposed changes highlight the tension between algorithmic transparency and protection of personal information and shift responsible development obligations away from the developers themselves. In sum, the proposals loosen the EU's sovereign control over algorithms and data, opting for a more voluntary approach that aims to stoke competitiveness and innovation.

Developing sovereign cloud and AI factories

The "European Software and Cyber Dependencies" study recommends leveraging the EU's growing federated and interoperable cloud infrastructure, including initiatives like GAIA-X and 8ra. These efforts "aim to create a federated architecture where multiple providers offer services under shared standards and governance" and are seen as a viable and competitive countermeasure to non-EU hyperscalers. The study also suggests a hybrid model of cloud infrastructure, in which U.S. and European clouds are combined so that critical data can stay localized while compute-intensive analytics and AI run on hyperscale platforms.

On the other hand, the EuroStack initiative proposes decentralized cloud and edge computing infrastructure to create a scalable, flexible, unified cloud framework fully within European control. This framework would cater to critical sectors like health care, energy and manufacturing, ensuring essential services receive reliable storage and compute resources.  

Relatedly, the European Commission forged the AI factories initiative as part of the 2024 AI Innovation Package, comprising a network of computing power, data and talent to develop cutting-edge AI models and applications. By 2026, a minimum of 15 factories is expected to be operational, tripling compute capacity on the continent and enabling greater access for startups and small and medium enterprises. Moreover, the InvestAI fund will support five AI gigafactories, each dedicated to the development of next-generation AI models.

Narrow models too

To maximize model reliability and resource efficiency, the "European Software and Cyber Dependencies" study recommends focusing on "smaller, specialised AI models or specific AI improvements that are cheaper to train and deploy, can be tailored for industrial use and require less computing infrastructure." This translates to domain-specific, sector-specific or expert systems with a narrow but deep corpus that are effective within a particular area but not as resource intensive as more generalized models. Moreover, these bespoke models are not meant to compete with frontier-scale models as general-purpose tools but rather to serve as specialized solutions with expertise in designated domains.

The EuroStack initiative puts forth a complementary approach, powering critical sectors like mobility, health care, education and climate monitoring with public, scalable models supported by localized, smaller-scale models. Utilizing so-called "composite learning," this hybrid approach preserves privacy and data sovereignty while driving public interest goals and innovation. 
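The "composite learning" described above resembles federated learning, in which models are trained where the data lives and only model parameters are pooled centrally. As a rough illustrative sketch (the function and the hospital-site data below are hypothetical, not drawn from any EuroStack specification), sample-weighted federated averaging of locally trained weights might look like this:

```python
import numpy as np

def federated_average(local_weights, sample_counts):
    """Aggregate locally trained model weights without moving raw data.

    Each participant trains on its own data and shares only weight
    vectors; the coordinator computes a sample-weighted average.
    """
    total = sum(sample_counts)
    stacked = np.stack(local_weights)                     # shape: (sites, params)
    coeffs = np.array(sample_counts, dtype=float) / total # per-site weighting
    return (stacked * coeffs[:, None]).sum(axis=0)        # weighted mean per param

# Three hypothetical sites (e.g., hospitals) with different data volumes.
site_weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
site_samples = [100, 300, 600]
global_weights = federated_average(site_weights, site_samples)
```

The raw records never leave each site; only the trained parameters are shared, which is how such schemes aim to preserve privacy and data sovereignty while still producing a shared model.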

Open-source and collaborative partnerships

According to "European Software and Cyber Dependencies," open-source projects seek to engender transparency and mitigate reliance on proprietary platforms. These endeavors also draw top talent from around the world who are interested in transparent and collaborative work. Likewise, the EuroStack initiative advocates open-source models and public domain data as a means to democratize access to AI tools, ensuring availability for SMEs, public institutions and media organizations. Open-source software also enables tailoring for European languages, industries and use cases. 

Furthermore, the Parliament study highlights partnerships between the EU, Canada, Japan and African countries to enhance access to sovereign clouds, compute hardware, AI, semiconductors and fast and secure networks. The EuroStack strategy notes international cooperation as well, promoting alliances with Brazil, Chile, South Korea and Taiwan to address supply chain vulnerabilities. 

These reports also stress public-private partnerships as a way to optimize R&D funds, set open and interoperable governance standards and nurture cross-sector collaboration. In addition, the EU Council's 2025 Strategic Foresight Report sees such partnerships as a pathway to deploying AI as public goods, including digital platforms and automated public services. 

Surveying the emerging landscape

Based on these measures, it is clear the EU is charting a holistic approach to AI sovereignty. As the EuroStack initiative states, "isolationism and protectionism are counterproductive to innovation, and sustainable and inclusive growth." Instead, the bloc is pursuing openness, knowledge-sharing and international alliances to co-develop and co-govern AI technologies. The exact contours of this hybrid sovereignty have yet to be determined, but at the very least, it rejects fragmentation for integration, harnessing the efficiencies of a global supply chain while establishing the minimum capabilities and controls necessary to protect its domestic interests.

To map practical steps toward this digital future, the EU is considering a reinvigorated industrial policy that can support and scale greater autonomy. This entails a transition from a consumer to a producer economy led by public procurement of vertically integrated digital goods manufactured by EU-based companies. It means enhanced collaboration amongst public-private partnerships to ensure adequate investment and knowledge-sharing. It also means embracing sustainability and renewable resources as a means to power digital infrastructure. And it means leveraging distributed resources across the region for the collective good, strengthening bonds between the member states to enhance the well-being of the bloc.  

What this means for digital governance professionals

Of course, the proposed regulatory simplification endorsed by the Digital Omnibus is top of mind for many professionals in this space. While these proposals have a long and likely winding legislative road before becoming law, they represent a strong signal that European policymakers are interested in reducing regulatory burdens on developers and deployers of AI systems, particularly SMEs and small mid-cap companies.

At the same time, the simplification efforts aim to foster innovation by, for example, introducing AI regulatory sandboxes for the controlled validation of cutting-edge systems, including improved cooperation between member state sandboxes. It will be important to track which legislation is directly impacted by simplification amendments and which may be indirectly affected; for example, the DSA, while not a primary target of the simplification campaign, has significant interplay with the GDPR and may be impacted by changes to that law. What's more, governance professionals should alert teams throughout their organization that streamlined regulation may lead to new opportunities.

The EU's holistic approach to AI sovereignty emphasizes open-source software, specialized models and strong governance mechanisms. Open-source systems enable innovation, expand access to digital solutions, ensure user agency, deter proprietary control and can be customized for particular applications that meet cultural or sectoral needs. Specialized models are more resource-efficient, can be consolidated through federated learning, can compete with frontier models in specified domains, and promote privacy, security and reliability. Finally, strong governance leads to public trust and adoption, enables shared and interoperable standards across sectors and borders and reinforces the ability to adapt digital infrastructure to a shifting technological landscape.

This open and federated approach runs counter to the proprietary and general-purpose approach heralded by many of the U.S. frontier models. It will therefore require governance professionals to revise their development and deployment frameworks to account for the limits of open-source software and the nuances of decentralized systems responsible for narrow tasks.

Commentators are adamant about tangible progress, not just "discussing noble principles in detail over coffee." Governance professionals should be equally attuned to the realignment of EU policy, one that builds rather than buys. As European tech becomes increasingly available, it will increase competition in the marketplace, force others to innovate and alter considerations for governance professionals throughout the AI life cycle.  

Conclusion

The EU's unique sovereignty ambitions rest on the decentralized and cooperative development and diffusion of AI systems. This strategy represents a paradigm shift away from the dependency and vendor lock-in models exercised by technology firms in the U.S. and China. Moreover, there is a novel effort within the EU to balance the rights of individuals with the innovative capacity and competitive resilience of the EU marketplace. 

As such, digital governance professionals should remain aware of the EU's emerging frontier in AI sovereignty — one that is not isolationist but empowers stakeholders through shared governance, subsidiarity and solidarity, leveraging domestic strengths and fostering reciprocal alliances to spur innovative, sustainable and inclusive AI growth within its borders. Of course, this approach comes with its own risks; it may increase dependence on other nations with their own sovereignty ambitions and introduce new security vulnerabilities through federated infrastructure and open software. Still, as nations vie for digital autonomy, this hybrid and flexible approach to AI sovereignty is likely to serve as a blueprint for emerging actors and to be an important trend professionals should track. 

Will Simpson, AIGP, CIPP/US, is a Westin Fellow for the IAPP.