The Illusion of Technological Self-Sufficiency in the Age of Artificial Intelligence
Executive Summary
The Myth of AI Sovereignty: Even Superpowers Will Find It Impossible to Own the Entire Supply Chain
The pursuit of AI sovereignty — the ambition of nations to develop, own, and control the full technological stack underpinning artificial intelligence — has become the defining strategic obsession of the 21st century.
From Washington's $500 billion Stargate initiative to Beijing's trillion-yuan semiconductor self-sufficiency drive, from the European Union's landmark AI Act to India's homegrown large language models, every major power is investing extraordinary resources in the belief that technological independence can be engineered through political will and public capital.
FAF argues that this belief is, at its core, a myth.
Not because AI sovereignty is entirely unattainable, but because the structural realities of modern technological production — the depth of global supply chains, the uneven distribution of scientific talent, the concentration of rare earth minerals, and the cumulative nature of technological learning — make full sovereignty not merely costly but self-defeating.
The very openness and interdependence that made today's AI systems possible are precisely what sovereign ambitions now threaten to unravel.
FAF research traces the history of this ambition, analyzes current efforts across the major power centers, examines cause-and-effect dynamics, and outlines what a more realistic and durable framework for technological resilience might look like.
Introduction: The Strategic Seduction of Control
There is a recurring fantasy in international relations: that power, once identified, can be owned. For centuries, nations sought to control territory. In the 20th century, they sought to control energy.
Today, they seek to control artificial intelligence. The logic appears impeccable.
AI systems are reshaping military capabilities, economic productivity, public administration, and the architecture of global influence. Whoever controls the foundations of AI — the chips, the data, the models, the infrastructure — controls the commanding heights of 21st-century power.
The quest for AI sovereignty, therefore, feels less like an ambition than a necessity.
But controlling the foundations of AI is not like controlling a port or an oil well.
It requires mastery of an extraordinarily complex, globally distributed production system that took decades and trillions of dollars to build.
The semiconductor industry alone draws on supply chains spanning over 50 countries, incorporating materials mined in Africa, processed in China, designed in California, and fabricated in Taiwan.
The AI model ecosystem rests on data accumulated across global digital networks, computation concentrated in a handful of hyperscale facilities, and algorithmic innovations produced by researchers trained at institutions on every continent.
The idea that any single nation — however powerful — can replicate this entire architecture within its borders is not a strategic ambition but a geopolitical delusion.
Nearly $12 billion has been spent to replicate Taiwanese advanced chip production in Arizona, and the results are already instructive.
By the time all of its fabrication plants are fully operational, TSMC's Arizona facility will be producing chips at a 4-nanometer node, a generation behind its operations in Taiwan, which are already advancing toward 2-nanometer and beyond.
Even when you move the factories, you cannot move the learning curves.
The same logic — transposed to the vastly more complex challenge of full-stack AI sovereignty — applies with even greater force.
History and Context: How the AI Sovereignty Doctrine Was Born
The intellectual genealogy of AI sovereignty traces back not to the age of artificial intelligence but to an older anxiety: the vulnerability of strategic industries to foreign disruption.
The experience of the 1970s oil embargo hardwired into Western strategic thinking the lesson that dependence on foreign suppliers of critical resources was an existential liability.
The application of this logic to technology accelerated after 2010, when it became clear that the semiconductor and digital infrastructure industries were developing geographic concentrations as extreme as anything seen in petroleum.
The specific articulation of "AI sovereignty" as a policy doctrine emerged from three converging crises.
First, the 2018 U.S.-China trade war revealed that Chinese telecommunications firms like Huawei were deeply embedded in Western infrastructure and that Chinese technological advancement was proceeding faster than Western analysts had assumed.
Second, the COVID-19 pandemic exposed the fragility of globalized supply chains, including semiconductor shortages that paralyzed the automotive and electronics industries from 2020 to 2023.
Third, the explosive emergence of generative AI after the public release of ChatGPT in late 2022 demonstrated with sudden clarity that AI systems had crossed a threshold of capability sufficient to reshape entire sectors of the economy and, potentially, military strategy.
These three crises produced a convergence of political will across ideological lines.
In Washington, the CHIPS and Science Act of 2022 committed approximately $52 billion in direct subsidies to domestic semiconductor manufacturing, including the TSMC Arizona project.
In Brussels, the European Chips Act of 2023 pledged €43 billion to reduce Europe's dependence on Asian fabrication.
In Beijing, the state's Big Fund for semiconductor development had already channeled hundreds of billions of yuan into domestic chip production.
In New Delhi, the IndiaAI Mission, launched in March 2024 with a $1.1 billion budget, set out to build indigenous compute infrastructure and foundational AI models.
Each of these initiatives was framed, explicitly or implicitly, in the language of sovereignty — the insistence that no nation could afford to leave its most strategically vital technology in foreign hands.
Current Status: The Architecture of a Fragmented Landscape
The United States: Controlling Choke Points, Not Supply Chains
The world in early 2026 is one in which the doctrine of AI sovereignty has moved from political rhetoric to active industrial policy.
Yet the practical results reveal the doctrine's inherent contradictions in real time.
The United States' AI strategy has evolved significantly from early instincts toward full-stack independence.
Washington's current approach accepts that advanced manufacturing may occur partly abroad while ensuring that the most sophisticated AI systems are built on U.S. platforms, trained with U.S. tools, and deployed through U.S. companies.
This strategy recognizes that American dominance never derived from owning every component but from controlling key choke points: software frameworks, cloud infrastructure standards, chip architectures, and export regimes.
The Stargate initiative, announced by President Trump on January 21st, 2025, and backed by OpenAI, Oracle, SoftBank, and MGX with an initial $100 billion commitment toward a total of $500 billion over four years, exemplifies this approach.
Stargate is simultaneously a sovereignty project and an interdependence project: it concentrates AI infrastructure on U.S. soil while remaining structurally dependent on Japanese capital (SoftBank), Emirati investment (MGX), and supply chains that run through Taiwan, South Korea, and the Netherlands.
The sovereign label covers an architecture that is, beneath the surface, deeply international.
The Pax Silica initiative, launched by the U.S. State Department in December 2025, represents perhaps the most sophisticated expression of this choke-point strategy.
Rather than attempting to replicate the full silicon stack domestically, Pax Silica seeks to construct a trusted coalition of allied producers — spanning critical minerals, semiconductor fabrication, and AI deployment — that collectively reduces dependence on China-dominated segments of the supply chain.
India formally joined Pax Silica by signing the declaration at the India AI Impact Summit in New Delhi in February 2026, alongside a bilateral trade framework that reduced U.S. reciprocal tariffs on India from 25% to 18%.
The alliance envisions distributing the burden of technological sovereignty across a network of strategically aligned nations rather than concentrating it within a single country.
Yet even this more sophisticated approach faces structural limits. Export controls on advanced chips — most prominently restrictions on Nvidia's H100 and subsequent GPU lines — have accelerated Chinese domestic chip development rather than suppressing it.
Huawei's unveiling of a 3-year Ascend AI chip roadmap at the Huawei Connect 2025 conference in Shanghai in September 2025 demonstrated that U.S.-sanctioned entities could develop competitive AI hardware timelines, with the Ascend 950PR targeted for the first quarter of 2026 and the Ascend 960 and 970 series planned through 2027 and 2028.
The unintended consequence of the choke-point strategy is that it has motivated precisely the self-sufficiency effort it sought to prevent.
China: The Paradox of Forced Sovereignty
Beijing's Trillion-Yuan Push for Chip Independence Is Generating Capability While Exposing the Limits of State-Directed Technological Development
China's pursuit of semiconductor self-sufficiency has been the most dramatic and instructive experiment in AI sovereignty conducted by any nation.
Launched under the "Made in China 2025" initiative in 2015 and massively accelerated by U.S. export controls after 2019, Beijing's campaign has channeled vast state capital — through the Big Fund and associated investment vehicles — into domestic chip design and fabrication across the entire value chain.
The results are genuinely impressive by historical standards. In March 2026, it emerged that Hua Hong Group's contract chipmaking subsidiary, Huali Microelectronics, was preparing a 7-nanometer fabrication process at its Shanghai facility, making it the second Chinese chipmaker after SMIC to reach this node.
China's largest chipmaker, SMIC, had already demonstrated 7-nanometer production capability that appeared in Huawei's Kirin processor, prompting significant reassessment of Chinese industrial policy's effectiveness.
The Huawei Ascend AI chip roadmap, unveiled at the Shanghai Huawei Connect 2025 conference, signals that China intends to field a competitive multi-generation AI hardware offering, with the Ascend 950DT and subsequent Ascend 960 and 970 series targeting training and inference workloads through the end of the decade.
Yet the paradox of Beijing's sovereignty drive is visible in the very hardware it produces.
In late 2025, analysis of Huawei's Ascend AI processors found advanced components sourced from TSMC, Samsung, and SK Hynix, confirming that even China's most symbolically significant domestic AI chip products remain structurally dependent on the global supply chains they were explicitly designed to replace.
China commands approximately 60% of global rare earth extraction and approximately 85% of rare earth refining capacity — near-total dominance over the materials that underpin every semiconductor produced anywhere in the world. Yet it cannot independently produce the extreme ultraviolet (EUV) lithography machines, manufactured exclusively by ASML in the Netherlands, that sub-7-nanometer production requires.
The gap between China's 7-nanometer ceiling and the global frontier — now advancing toward 2-nanometer and beyond in Taiwan and South Korea — is therefore not merely a technical lag.
It is a structural constraint embedded in the geopolitics of equipment manufacturing, which no amount of domestic investment can rapidly overcome.
Washington and Beijing are both discovering that pursuing full sovereignty means potentially sacrificing the advantages that made them competitive in the first place.
For Beijing, that means sacrificing scale and rapid implementation. For Washington, it means sacrificing openness and innovation speed.
Europe: Regulatory Sovereignty as a Substitute for Industrial Sovereignty
The European Union's Hybrid Model Prioritizes Data Governance and Regulatory Standards Over Fabrication Independence
The European Union's approach to AI sovereignty is fundamentally distinct from those of the United States and China, and arguably more honest about its own constraints.
The EU does not possess indigenous hyperscale cloud providers, leading-edge chip fabricators, or first-tier frontier AI model developers.
It has, instead, constructed a sophisticated regulatory architecture that seeks to define the terms on which external AI systems operate within European jurisdiction — and, by establishing standards that global firms must meet to access the single market of 450 million consumers, to exercise regulatory sovereignty where industrial sovereignty is structurally unavailable.
The AI Act, which entered into force on August 1st, 2024, together with the earlier General Data Protection Regulation, constitutes the most comprehensive binding AI governance regime in the world, establishing obligations across risk categories, data handling, algorithmic transparency, and human oversight.
Yet by November 2025, internal EU pressure was already producing a significant revision through the Digital Omnibus package, which proposed simplifying the GDPR's consent framework for AI training data and delaying implementation of high-risk AI provisions by up to 16 months, with some rules potentially pushed to December 2027.
This internal revision is deeply revealing.
An Accenture survey conducted in late 2025 found that 65% of European organizations acknowledged they could not remain competitive without non-European technology providers, and that only 36% of their AI initiatives and data actually required sovereign treatment under regulatory or sensitivity criteria.
The data suggests that European sovereign ambitions, even when framed as regulatory rather than industrial, face a structural tension between protecting data sovereignty and maintaining access to the global innovation ecosystem that produces the most capable AI systems.
The European model may represent, as one analysis noted, a "pragmatic and integrated form of AI sovereignty" — but pragmatic sovereignty is, by definition, partial sovereignty, built on acknowledged interdependence rather than the fiction of independence.
India and the Global South: Sovereignty as Development, Not Domination
India's Sovereign AI Models and Pax Silica Membership Reveal a Third Path Between Great-Power Competition and Technological Dependency
India's emergence as a serious stakeholder in the AI sovereignty landscape represents perhaps the most significant geopolitical development in this space since the U.S.-China contest hardened after 2018.
India launched four sovereign AI models — developed by Sarvam AI, BharatGen, Gnani, and Socket — at the IndiaAI Impact Summit 2026 in New Delhi in February 2026, under the IndiaAI Mission and its $1.1 billion budget approved in March 2024.
The models are trained on Indian-language datasets, hosted on Indian servers, and designed explicitly to reduce dependence on global AI platforms while delivering AI applications tailored to India's linguistic and developmental diversity.
What distinguishes India's sovereign AI project from those of the major powers is its explicit framing as a development instrument rather than a dominance vehicle.
The India AI Impact Summit 2026, the first such global governance forum held in the Global South, convened 88 signatory nations, of which 75% were from the Global South, around frameworks for interoperable sovereign AI and cross-border digital public infrastructure.
This framing reflects India's dual position: as an emerging technological power with genuine engineering depth and a rapidly growing domestic AI market, and as a representative voice for nations that want the benefits of AI without the dependency relationships that currently underpin AI development.
India's formal accession to the Pax Silica initiative in February 2026 complicates this independent narrative.
By joining the U.S.-led coalition spanning critical minerals, semiconductor fabrication, and AI deployment, India has positioned itself as a strategic component of a U.S.-organized technological supply chain, not merely a sovereign technology developer.
U.S. officials describe India's participation as essential, citing its engineering talent, mineral processing capacity, and strategic position.
The practical implication is that India's sovereign AI is being built partly on a foundation of U.S.-aligned coalition strategy, illustrating the structural difficulty of achieving technological independence outside of great-power alliance structures.
Key Developments: The Material Constraints of Sovereignty
From Rare Earth Geopolitics to EUV Monopolies, the Physical Infrastructure of AI Defies Political Borders
The debate over AI sovereignty often proceeds as though the only constraint were political will and capital.
The material realities are more unforgiving. Semiconductor fabrication at the leading edge requires extreme ultraviolet lithography machines produced exclusively by ASML in the Netherlands, drawing on light sources developed by Cymer (a U.S. subsidiary), optics from Zeiss (Germany), and supply chains that span roughly 800 specialized suppliers across multiple continents.
No sovereignty initiative anywhere in the world — not even one backed by the full resources of China's state — has succeeded in replicating this system domestically. China's 7-nanometer ceiling is, in significant part, an EUV ceiling.
The rare earth dimension adds another layer of structural constraint.
China commands approximately 60% of global rare earth mining and approximately 85% of global refining capacity, giving Beijing a near-monopoly over the materials that underpin every semiconductor and advanced magnet system produced worldwide.
This dominance is not merely a function of geological luck — China possesses roughly 35% of known reserves — but of decades of deliberate industrial policy that developed the processing infrastructure that other nations lack.
U.S. and Indian efforts to diversify rare earth supply chains, including bilateral cooperation under Pax Silica, have produced meaningful progress but confront the reality that building processing infrastructure takes 10 to 15 years.
The energy and physical infrastructure demands of AI data centers introduce yet another sovereignty constraint that is reshaping geopolitical geography in real time.
IDC's 2026 FutureScape predictions forecast that by 2028, enterprises splitting AI infrastructure across sovereign zones will face costs that triple relative to a unified global architecture.
The "Sovereign AI Stack" is therefore not merely a technological choice but an economic penalty — one that smaller and middle-income economies will feel far more acutely than the great powers, yet one that all participants in the sovereign AI project are beginning to understand.
Cause-and-Effect Analysis: The Unintended Consequences of Sovereignty Doctrine
The More Nations Pursue Technological Independence, the More They Entrench the Interdependencies They Sought to Escape
The cause-and-effect dynamics of AI sovereignty policy reveal a set of ironies that deserve extended analysis.
The most significant is what might be called the acceleration paradox: U.S. export controls designed to suppress Chinese AI chip development have, by forcing Beijing to invest in domestic alternatives, catalyzed Chinese semiconductor advances that would not otherwise have occurred on this timeline.
Huawei's Ascend AI chip roadmap, SMIC's 7-nanometer production capability, and Hua Hong's emerging 7-nanometer process are all direct consequences of the U.S. technology restriction regime.
Sanctions designed to enforce dependency have produced investment in independence.
A second irony is the fragmentation cost. As sovereign AI architectures proliferate, the global AI ecosystem that produced today's most capable systems — characterized by open data flows, collaborative research, and shared computational infrastructure — is being partitioned into increasingly incompatible national silos.
IDC projects that multinational enterprises will face AI infrastructure costs that triple as they navigate sovereign zones by 2028.
The paradox is that each nation's investment in sovereignty contributes to an aggregate fragmentation that makes the entire global AI ecosystem less productive, less innovative, and ultimately less sovereign in the sense of delivering the national benefits that sovereignty was supposed to provide.
A third dynamic concerns the talent dimension, which is perhaps the most decisive and least discussed.
AI development at the frontier is a function of a relatively small number of extraordinarily specialized researchers, most of whom were trained at a handful of institutions and many of whom have migrated across national borders multiple times in their careers.
Sovereignty initiatives that restrict immigration, require security clearances for AI researchers, or mandate that work be conducted on classified domestic systems inevitably sacrifice access to the global talent pool on which frontier AI development depends.
The TSMC Arizona workforce shortage — which delayed the facility's production timeline — illustrates the material consequences of trying to localize a labor force that is by nature globally distributed.
The fourth dynamic concerns the relationship between sovereignty and standardization. A world fragmented into sovereign AI systems is a world of incompatible standards — for model architectures, data formats, inference APIs, and safety evaluation frameworks.
This incompatibility imposes coordination costs on every cross-border application of AI, from financial systems to medical research to climate modeling, that dwarf the security benefits that sovereignty advocates typically quantify.
The EU's effort to ensure interoperability between its AI Act framework, India's light-touch guidelines, and U.S. executive orders is a recognition of this problem, but it remains unresolved in practice.
Future Steps: Toward a Realistic Architecture of Resilience
The Alternative to Sovereignty Illusions Is a Principled Framework of Strategic Interdependence Built on Alliance, Resilience, and Managed Dependency
The analytical framework that emerges from examining the failures and partial successes of AI sovereignty doctrine suggests that the most effective national AI strategies will be those that consciously choose what to build, what to buy, and where partnerships generate more value than unilateral investment.
The Atlantic Council's 2026 analysis articulates this principle clearly: not every country can or should try to build every part of the AI stack on its own, and attempting to recreate everything from data centers to models is expensive, redundant, and impractical.
The Pax Silica model, for all its geopolitical complications, represents a more mature conception of technological security than full-stack sovereignty.
By constructing a trusted network of allied producers spanning the critical minerals, fabrication, and deployment layers of the AI stack, it converts bilateral dependencies into multilateral resilience — no single point of failure, but no fantasy of independence either.
The challenge for the initiative is to move beyond U.S.-centric framing and develop governance mechanisms that give smaller coalition members genuine voice in setting standards, allocating investment, and managing disputes.
For middle powers and Global South nations, the critical strategic question is not how to replicate the AI infrastructure of the great powers but how to build the specific sovereign capabilities — data governance, domain-specific models, compute access, and regulatory capacity — that address their most pressing developmental needs without foreclosing participation in the global AI ecosystem.
India's approach, combining domestic foundational model development with coalition membership and regulatory openness, comes closest to this ideal, though it remains constrained by compute infrastructure gaps and the continuing dominance of global platform providers.
The deeper transformation required is conceptual rather than industrial.
Nations that continue to frame AI sovereignty in terms of supply chain ownership will continue to invest heavily in projects that are simultaneously too expensive, too slow, and ultimately too fragile to deliver the security they promise.
The TSMC Arizona facility, where $12 billion has gone into producing chips a generation behind Taiwan's line, is not a story of insufficient ambition; it is a story of misplaced ambition.
Nations that reframe AI security in terms of resilience, alliance architecture, and strategic choke-point management will find themselves better positioned to navigate a technological landscape that no single sovereign power can control.
Conclusion: The Permanence of Interdependence
The myth of AI sovereignty is not a myth because nations are strategically confused or technologically naive.
It is a myth because the production of frontier artificial intelligence systems is, at its structural core, a globally distributed enterprise.
The learning curves, the material dependencies, the talent networks, the equipment supply chains, and the research ecosystems that collectively produce AI capability took the better part of six decades and the collaborative investment of every major industrial economy to build.
The idea that any nation — however powerful, however well-capitalized, however strategically determined — can replicate this architecture within its borders in a decade or two is not strategic ambition. It is a category error.
What remains genuinely achievable is something more modest and arguably more durable: strategic resilience built on alliance architecture, choke-point leverage, domestic capacity in critical segments, and the kind of open innovation ecosystems that produced AI superiority in the first place.
The nations and coalitions that understand this distinction — between owning AI and being positioned to benefit from it — will be the ones that actually shape the trajectory of this technology.
The ones that do not will spend extraordinary resources building factories that are a generation behind, models that cannot scale, and regulatory systems that protect sovereignty on paper while surrendering competitive advantage in practice.
In the age of artificial intelligence, as in every age before it, the greatest geopolitical error is to mistake control of the symbol for control of the substance.


