India’s AI Gamble: Sam Altman, Sovereign Compute, And Global Power - Part IV
Executive Summary
Sam Altman in New Delhi: India’s AI Summit, Power, and Peril
Sam Altman’s appearance at the India AI Impact Summit 2026 in New Delhi marked a turning point in the relationship between a fast‑rising AI power and the firm that has come to symbolize frontier models.
At Bharat Mandapam, in a summit inaugurated by the Indian prime minister and framed as the first truly global AI gathering hosted in the Global South, Altman praised India as “well positioned to lead” and “shape” the future of advanced AI. He argued that democratization of AI is the only fair and safe path, while warning that centralizing compute and models in a single country or company could end in ruin.
Around that speech, OpenAI unveiled “OpenAI for India,” a strategic package that includes a long‑term partnership with Tata Group and Tata Consultancy Services (TCS) to build domestic AI data‑center capacity starting at 100 megawatts and potentially scaling to one gigawatt, along with enterprise deployments of ChatGPT and Codex tools across hundreds of thousands of Tata employees.
In parallel, the Indian government showcased a sovereign AI roadmap: a shared compute facility with more than 38,000 GPUs, a pipeline of AI‑optimized data centers that could attract up to $200 billion in investment, and government‑backed language models spanning 22 Indian languages.
Altman’s remarks and OpenAI’s deals intersect with India’s own attempt to craft a “third way” in AI governance.
New Delhi is charting a course distinct from the deregulated, private‑sector‑led model of the United States, the state‑directed and tightly controlled Chinese model, and the compliance‑heavy regulatory approach emerging in Europe.
India’s answer combines digital public infrastructure, techno‑legal governance using existing laws, and an explicit appeal to the Global South through the MANAV framework, which emphasizes moral systems, accountable governance, national sovereignty over data, accessibility, and legal validity.
Yet this apparent alignment masks tensions. Altman openly spoke of being only “a couple of years” away from early superintelligence, insisting that humanity will need superintelligence itself to help design new governance mechanisms.
Indian leaders spoke of democratization and inclusion, but simultaneously invited a wave of hyperscale investments from U.S. platforms that risk reinforcing structural dependence on foreign compute, capital, and chips.
The cause‑and‑effect chain running from Altman’s Delhi talk is therefore double‑edged. In the near term it accelerates India’s attempt to become an AI hub for the Global South, anchors OpenAI inside India’s regulatory and infrastructure landscape, and deepens the U.S.–India technology corridor.
Over time, however, it could entrench a new hierarchy in which Indian data, talent, and market access are traded for foreign control of the most advanced compute and models.
Whether Delhi can convert this moment into durable strategic autonomy, rather than a refined digital dependency, will depend on how it sequences regulation, infrastructure, open models, and industrial policy over the next five to ten years.
Introduction
OpenAI Meets Bharat Mandapam: India’s Data, America’s Chips, Shared Destiny
The India AI Impact Summit 2026 in New Delhi was never intended as a routine tech conference.
Held at Bharat Mandapam from 16th to 20th February and billed as the most consequential AI summit yet hosted in the Global South, it was designed to signal that the geography of AI power is no longer confined to the North Atlantic and East Asia.
Heads of government and cabinet ministers shared the stage with the chief executives of OpenAI, Google, Nvidia, Anthropic, and Tata, while the final outcome document, though voluntary, aimed to set norms on access to compute, data governance, and sovereign AI for a generation.
Into this theatre walked Sam Altman. His New Delhi visit, his first to India in roughly one year, coincided with a decisive pivot by OpenAI toward global infrastructure build‑out and sovereign‑flavoured partnerships.
Even before the summit opened, reports revealed that OpenAI was arranging side‑events, closed‑door meetings with Indian officials and venture capitalists, and a stand‑alone OpenAI event in the capital on 19th February.
Altman’s speech fused technical optimism, strategic caution, and political messaging.
He stressed that since his last visit, frontier systems had moved from struggling with high‑school mathematics to conducting original research. He repeated his belief that democratization of AI—widespread, affordable access to powerful models—is the only fair path, warning that centralization of AI capacity “in one company or country” could be catastrophic. He argued that India, as the world’s largest democracy, is uniquely placed not only to build AI but to decide “what our future is going to look like.”
This FAF analysis delves deeper into Altman’s intervention, reading it as more than a corporate keynote.
It treats the Delhi speech and the associated OpenAI–Tata agreements as a moment in the restructuring of the global AI order: a test of India’s claim to lead a more inclusive governance model, a data point in the deepening U.S.–India technology partnership, and a revealing example of how private AI firms now operate as quasi‑geopolitical actors.
History And Current Status
India’s AI Ascent: OpenAI’s Bold Bet on the Billion-Strong Market
India’s current AI moment is the product of three intertwined evolutions: the rise of digital public infrastructure, the deliberate reframing of AI as a development and governance tool, and the gradual elevation of India within U.S. technology strategy.
First, over the past decade, India built a layered “stack” of digital public infrastructure—identity (Aadhaar), payments (UPI), and data‑sharing rails—that now supports everything from welfare transfers to credit, insurance, and telemedicine.
This architecture, recognized during India’s G20 presidency, has become a model for parts of the Global South that lack legacy financial or administrative systems but seek low‑cost, open, and interoperable digital rails.
The same stack now underpins India’s AI ambitions, particularly for language models and public‑sector AI applications.
Second, India has progressively framed AI as a lever of inclusive growth, not only as an industrial policy tool. Government documents and guidelines emphasize “safe and trusted AI for all,” a motto that blends welfare concerns, techno‑legal regulation, and geopolitical messaging.
Rather than rushing to a single omnibus AI statute, India’s 2025 AI Governance Guidelines advocate using existing sectoral laws to manage AI risks, with targeted amendments where necessary—offering a low‑cost, capacity‑sensitive template for other developing countries.
Third, India’s AI rise is anchored in its tightening technology relationship with the United States.
Frameworks such as the Initiative on Critical and Emerging Technology (iCET) and follow‑on roadmaps have elevated semiconductors, AI, quantum, and secure data flows to the centre of bilateral ties.
U.S. officials now describe India as a “highly strategic partner” in securing global AI and semiconductor supply chains, and recent initiatives like Pax Silica explicitly contemplate India as a core node in a democratic compute and chip ecosystem.
OpenAI, for its part, has watched India become one of its largest user bases. India reportedly hosts tens of millions of weekly ChatGPT users; Altman himself has pointed to India as one of OpenAI’s fastest‑growing markets, with heavy adoption by students, teachers, developers, and small firms. In 2025 OpenAI opened an office in New Delhi, and by early 2026 it had begun to formalize a broader “OpenAI for India” program, premised on infrastructure, skills, and local partnerships.
By the time of the Delhi summit, the Indian state and OpenAI were therefore already entangled in a shared project: to convert India’s data richness and talent base into AI infrastructure, and to turn India’s governance experiments into a Global South template. Altman’s talk and the deals around it must be read against this backdrop.
Key Developments
India AI Impact Summit 2026
The India AI Impact Summit compressed a decade’s worth of initiatives into five days, with major announcements landing both before and after Altman’s appearance.
On the governmental side, the prime minister used the keynote to unveil the MANAV framework, a five‑pillar vision for human‑centred AI: moral and ethical systems, accountable governance, national sovereignty over data, accessible and inclusive design, and valid and legitimate uses. He warned against a future in which humans become “mere data points,” insisting that AI must remain a multiplier for inclusion, particularly in the Global South.
India also announced concrete elements of a sovereign AI stack. Under the IndiaAI Mission, the government has committed roughly $1.24 billion to building a comprehensive AI ecosystem, including shared compute clusters, datasets, and model development.
Because that allocation is denominated in rupees, its dollar value shifts with the exchange rate: at roughly Rs 91.07 to the dollar, the rate as of 19th February 2026, the commitment is worth closer to $1.14 billion than the headline $1.24 billion.
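The gap between those two dollar figures is purely an exchange-rate effect. As a minimal back-of-envelope sketch (in Python), assuming a rupee-denominated outlay of roughly Rs 10,370 crore, an illustrative figure consistent with the dollar amounts above rather than one stated in the summit materials:

    # Back-of-envelope conversion of an assumed IndiaAI Mission outlay of
    # ~Rs 10,370 crore (1 crore = 10 million rupees) at two exchange rates.
    outlay_inr = 10_370 * 10_000_000           # ~Rs 103.7 billion (assumed figure)
    for inr_per_usd in (83.5, 91.07):          # assumed earlier rate vs. 19 Feb 2026 rate
        usd_billion = outlay_inr / inr_per_usd / 1e9
        print(f"At Rs {inr_per_usd} per dollar: ~${usd_billion:.2f} billion")
    # Prints roughly $1.24 billion and $1.14 billion respectively.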
Officials highlighted that a national GPU facility with more than 38,000 GPUs is already operational, giving startups and public institutions access to high‑end compute without prohibitive upfront capital outlays.
On the model layer, the summit showcased BharatGen Param2, a 17‑billion‑parameter Indian language model supporting 22 languages, alongside privately built Indic models from startups such as Sarvam AI and Krutrim.
These models do not yet match the sheer scale of frontier systems, but they are optimized for local contexts, lower costs, and data sovereignty.
The summit also served as a magnet for private capital. Tata Group announced a partnership with OpenAI to build a 100‑megawatt AI‑optimized data center, to be scaled to 1 gigawatt under TCS’s HyperVault unit. Industry estimates suggest that a one‑gigawatt data‑center campus can cost between $35 billion and $50 billion, depending on land, energy, and cooling.
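For scale, a rough sketch (in Python) of what those per-gigawatt estimates would imply for the announced 100-megawatt first phase, assuming, simplistically, that costs scale linearly with capacity:

    # Illustrative scaling of the quoted $35-50 billion per gigawatt estimates
    # to a 100 MW first phase; real costs depend on land, energy, and cooling.
    first_phase_mw = 100
    for cost_per_gw in (35e9, 50e9):               # dollars per gigawatt (1,000 MW)
        cost_per_mw = cost_per_gw / 1000           # dollars per megawatt
        phase_cost = cost_per_mw * first_phase_mw  # dollars for the 100 MW phase
        print(f"${cost_per_gw / 1e9:.0f}B per GW -> ~${phase_cost / 1e9:.1f}B for 100 MW")
    # Prints roughly $3.5 billion and $5.0 billion respectively.

Even the first phase, in other words, represents a multi-billion-dollar commitment before any scaling toward a full gigawatt.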
In parallel, Google reiterated plans to invest $15 billion in an AI hub in India, Microsoft pointed to a $17.5 billion data‑center program, and Amazon confirmed its own multi‑year $35 billion India commitment.
Collectively, Indian officials now speak of a $200 billion pipeline of data‑center and AI infrastructure investment over the coming years, with India marketing itself as a “trusted AI partner” offering low‑cost talent, improving renewable‑energy availability, and a governance model that foregrounds inclusion and development.
Altman’s message
Opportunities, Risks, And India’s Role
Within this accumulated momentum, Altman’s remarks served three functions: they validated India’s self‑image as an AI leader, they advanced OpenAI’s geopolitical narrative of democratized AI, and they quietly prepared the ground for a deeper infrastructure and enterprise lock‑in.
First, Altman’s speech framed India as a central protagonist in the AI story. He described returning after roughly one year and being “struck” by how fast India had moved, from sporadic pilot projects to real‑world deployments across sectors.
He repeatedly emphasized that India is “not just participating in the AI revolution, but leading it,” and that the country would exercise “a huge amount of influence” over how the technology evolves. Such language resonates with Delhi’s own desire to escape the binary of “rule maker” versus “rule taker” and to present itself as a “rule shaper” for the Global South.
Second, Altman laid out his now‑familiar doctrine on democratization and centralization. He argued that “democratization of AI is the only fair and safe path forward,” insisting that broad access is the best way to ensure human flourishing.
Conversely, he warned that centralizing frontier AI capabilities in a single company or country could lead to profound instability or even “ruin.”
For an Indian audience acutely aware of how foreign platforms came to dominate social media, cloud, and search, this was both reassurance and warning.
Third, Altman gestured toward a near‑term horizon of superintelligence.
On current trajectories, he said, humanity may be only “a couple of years away” from early superintelligent systems, and will likely need superintelligence itself to help design new governance mechanisms, resolve coordination problems, and avoid extreme imbalances in compute access.
That framing lends urgency to India’s own governance initiatives.
The MANAV vision and the AI Governance Guidelines are no longer merely domestic experiments; if Altman is correct, they are prototypes for guardrails that may need to scale globally in a compressed time frame.
His comments on jobs were more measured. Altman acknowledged that AI will “definitely impact the job market” but argued that societies have repeatedly found “new things to do,” and that this time would be no different.
For a country where both white‑collar services exports and informal employment are central to social stability, this reassurance is partial at best.
Latest Facts And Emerging Concerns
Superintelligence At The Gate: India, OpenAI, And The Global South
Beneath the celebratory tone of the summit and Altman’s speech, a set of hard constraints and unresolved questions looms.
India is data‑rich but infrastructure‑poor. It is responsible for close to 20% of global data generation but holds only about 3–5% of global data‑center capacity, depending on the metric used.
The current wave of investment aims to close that gap, but data centers are energy‑hungry and water‑intensive. Analysts estimate that India may need an additional 45–50 million square feet of data‑center real estate and 40–45 terawatt hours of incremental power by 2030 to meet AI‑driven demand, straining grids and complicating decarbonization targets.
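To put the electricity estimate in perspective, a hedged back-of-envelope conversion (in Python) of 40-45 terawatt hours per year into an equivalent continuous load:

    # Convert the projected 40-45 TWh/year of incremental demand into an
    # equivalent round-the-clock load; real facilities run below full
    # utilization, so installed capacity would need to be somewhat higher.
    HOURS_PER_YEAR = 8760
    for twh_per_year in (40, 45):
        avg_load_gw = twh_per_year * 1e12 / HOURS_PER_YEAR / 1e9   # TWh/yr -> average GW
        print(f"{twh_per_year} TWh/year is roughly {avg_load_gw:.1f} GW of continuous demand")
    # Prints roughly 4.6 GW and 5.1 GW respectively.

Even at the lower bound, that is several gigawatts of new round-the-clock demand layered on top of existing grid commitments.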
Water stress is already visible. Large data centers require heavy cooling loads; as hyperscale campuses expand in states like Andhra Pradesh, Maharashtra, and Gujarat, local officials and environmental experts warn that competition for water could intensify, especially during heatwaves.
This raises an awkward question: can India transform itself into an AI compute hub without undermining its own climate commitments and public‑health goals?
The concentration of AI power is another concern. While Altman extols democratization, the reality is that a handful of U.S. and Chinese firms still control the majority of frontier‑grade models, GPUs, and global cloud capacity. U.S. export controls seek to deny advanced chips to rivals, especially China, while channeling high‑end GPUs to “trusted” partners and emerging hubs, including the Middle East and potentially India.
Some analysts speak of a “compute cold war,” in which infrastructure, not only algorithms, becomes the main arena of geopolitical competition.
India’s current strategy aligns it more closely with this U.S.‑led compute bloc. Joint statements speak of “trusted AI corridors,” reciprocal compute access, and large‑scale U.S. hyperscaler investments in Indian infrastructure.
The OpenAI–Tata deal, Google’s $15 billion hub, Microsoft and Amazon’s multi‑billion commitments, and a broader $200 billion data‑center pipeline are all instances of this alignment.
That alignment promises capital and technology, but it also risks reproducing old dependencies in new form.
Domestically, India’s governance model is still in flux.
The decision not to pass a standalone AI law but instead rely on existing statutes plus guidelines might appeal to other developing countries, but it also leaves gaps around liability for model harms, cross‑border data flows, and systemic risk management.
The voluntary nature of the summit’s outcome document—focused on “nonbinding” principles for access to compute and sovereign AI—limits enforceability even as it sets normative baselines.
Cause-And-Effect Analysis
How The AI Summit And Altman’s Talk Reshape AI Geopolitics
Altman’s Delhi intervention, coupled with the summit’s announcements, creates a dense web of causal links across four levels: domestic political economy, regional leadership, U.S.–India strategic ties, and the global AI order.
At the domestic level, his endorsement of India as a global AI leader reinforces the government’s political narrative that technology is both a marker and a multiplier of national power.
The OpenAI–Tata partnership fits neatly into a story in which Indian conglomerates move from low‑margin IT services to infrastructure‑scale AI platforms, while Indian workers are “upgraded” through exposure to enterprise AI tools.
The expectation is that such partnerships will increase productivity across sectors—from automotive and retail to finance and healthcare—while spawning new startup ecosystems around localized agents and applications.
However, this reinforcement also narrows policy space. Once India is publicly framed as an indispensable AI hub for OpenAI and fellow U.S. firms, domestic regulators may hesitate to adopt stringent rules on model transparency, compute caps, or export of Indian training data that might be seen as hostile to investment. In effect, public praise can operate as a subtle constraint on regulatory ambition.
Regionally, the Delhi summit crystallizes India’s claim to leadership within the Global South on AI governance.
By hosting what officials describe as a “historic” summit for the Global South, unveiling the MANAV vision, and insisting on democratization of AI for inclusion and empowerment, India positions itself as a normative entrepreneur, not just a technology taker. Altman’s echoing of democratization language further legitimizes this role.
Yet this leadership is contested. Other regions, particularly the Gulf, are building their own claims to AI centrality by fusing energy abundance with vast data‑center projects and sovereign AI initiatives. In this environment, India’s model—rooted in digital public infrastructure and people‑centric narratives—must demonstrate not only ethical appeal but also capacity to mobilize capital and deliver industrial transformation.
Altman’s presence at Bharat Mandapam both strengthens and tests that proposition.
In U.S.–India relations, Altman’s talk and the OpenAI–Tata deal deepen an already dense technology corridor.
Agreements on AI safety benchmarks, reciprocal compute access, and joint semiconductor ventures now sit alongside private arrangements in which U.S. AI firms and cloud providers anchor infrastructure inside India.
U.S. policymakers increasingly see India as a “highly strategic partner” for safeguarding supply chains in AI and semiconductors, while Indian officials frame these ties as a route to climb the value chain without sacrificing strategic autonomy.
Altman’s emphasis on avoiding centralization in “one company or one country” is telling in this context. It implicitly endorses a multi‑node democratic AI network—where firms like OpenAI and hyperscalers like Microsoft or Google distribute compute and models across friendly jurisdictions that nonetheless rely on U.S.‑origin chips, software, and standards.
India, by hosting large OpenAI‑backed data centers and aligning with U.S. governance language, becomes one of those nodes, reinforcing U.S. strategic goals vis‑à‑vis China while also advancing its own ambitions.
At the global level, the summit and Altman’s talk feed into a wider reconfiguration of AI geopolitics. Analysts increasingly describe an AI race structured by three pillars: compute (data centers, chips, energy), models (frontier and open‑source systems), and governance (laws, norms, and standards).
The U.S. and China dominate compute and models, but India is emerging as a critical arena for governance experiments and infrastructure build‑out, precisely because it is data‑rich, talent‑rich, and infrastructure‑constrained.
By tying OpenAI’s Stargate infrastructure initiative to Tata’s HyperVault data‑center business, and by announcing “OpenAI for India” in Delhi rather than in a Western capital, Altman effectively acknowledges that the future geography of AI power will not be decided solely in San Francisco, Seattle, Washington, or Beijing.
The Delhi summit thus becomes both symbol and instrument: symbol of a more multipolar AI order, instrument for embedding that order in contracts, kilowatts, and GPUs.
Future Steps
Pathways For India, OpenAI, And The Global South
The choices made in the three to five years after the summit will determine whether this moment produces durable benefits or fragile dependencies. Several pathways suggest themselves.
For India, the central challenge is to convert foreign‑anchored infrastructure into domestic capability and strategic leverage.
That means insisting that large data‑center projects incorporate domestic ownership stakes, technology transfer, and commitments to host at least some sovereign models on equal footing with foreign systems. It also means investing heavily in open‑source and locally governed models trained on Indian languages and contexts, so that public services and sensitive sectors are not locked into proprietary stacks. Initiatives like BharatGen Param2 are a start but will require sustained funding, access to GPUs, and robust evaluation ecosystems.
On governance, India will need to move from broad frameworks to enforceable mechanisms. The MANAV principles and AI Governance Guidelines provide a coherent narrative, but issues such as liability for AI‑driven misinformation, cross‑border flows of training data, and oversight of foundation models with systemic risk implications demand more granular rules.
As Altman himself suggested, the advent of early superintelligence will stress‑test legal systems; India’s bet on using existing laws with targeted amendments will only succeed if regulators gain technical capacity and if enforcement is predictable rather than ad hoc.
For OpenAI, Delhi is both opportunity and constraint. The “OpenAI for India” initiative promises a vast user base, a testbed for low‑cost deployment, and a showcase partnership with a globally significant conglomerate. It also forces the company to confront demands for localization, data residency, and co‑governance of models more aggressively than in smaller markets.
Indian politicians and civil‑society actors are already debating whether core models that shape public discourse and state capacity can be left entirely under foreign corporate control.
The company will likely have to explore hybrid governance arrangements: joint steering committees with Indian partners, transparent red‑teaming processes that involve local researchers, and possibly differentiated products or guardrails for sensitive sectors.
Its rhetoric on democratization will be judged by how much access Indian startups and public institutions gain to powerful models and compute, at what price, and with what limits.
For the wider Global South, the Delhi summit and Altman’s talk illustrate both an opening and a warning.
The opening lies in the fact that powerful AI firms and advanced economies now view Global South markets not solely as data mines or consumer bases, but as potential co‑architects of infrastructure and governance. India’s ability to extract billions in AI‑related infrastructure commitments while retaining a development‑first rhetoric suggests that bargaining power is shifting, at least for large emerging economies.
The warning is that AI infrastructure, once laid down, is hard to repatriate or reconfigure. Data‑center campuses, long‑term power purchase agreements, GPU clusters, and software ecosystems create path dependencies.
If these infrastructures are dominated by foreign capital and intellectual property, domestic governments may find themselves constrained in future regulatory choices, much as earlier generations of developing states were constrained by debt and trade agreements.
Conclusion
Sam Altman In Delhi: Can India Democratize Superintelligent Artificial Intelligence?
Sam Altman’s talk at the India AI Impact Summit 2026 should not be read as a standalone corporate performance.
It is better understood as one node in a fast‑moving architecture of power: an architecture built from code and chips, but also from treaties, summits, and narratives about who will shape the next operating system of the global order.
By praising India as a leader, warning against centralization, and unveiling “OpenAI for India” in tandem with Tata’s massive data‑center push, Altman helped anchor a vision of India as both laboratory and partner for democratized AI.
The Indian state, for its part, used the same stage to project a human‑centred governance model under the MANAV umbrella, to position itself as the Global South’s voice on AI, and to lock in a flow of capital that could finally bring its data‑center capacity closer to its data and talent reality.
The benefits of this convergence are real. If managed well, it could give India unprecedented leverage in global AI standard‑setting, propel domestic industries up the value chain, and offer developing countries an alternative to both laissez‑faire digital capitalism and control‑heavy techno‑authoritarianism.
It could also, as Altman suggests, make AI a genuine tool of human flourishing rather than a catalyst for new hierarchies.
But it is precisely because the stakes are so high that caution is required. India’s leaders must internalize that data, compute, and models are now strategic assets, comparable to energy and sea lanes in earlier eras.
Governance frameworks like MANAV and the AI Governance Guidelines must be translated into hard law and infrastructure choices that preserve room for maneuver, rather than into slogans that decorate a new era of dependency.
OpenAI, in turn, must recognize that its ambitions to build superintelligence and global infrastructure cannot be pursued as a purely private project; they will be judged against whether they genuinely broaden access and respect emerging powers’ claims to sovereignty.
In the years ahead, Delhi will host many more summits and sign many more AI agreements.
The ultimate measure of the 2026 gathering, and of Altman’s role within it, will be simple: did it help India and the wider Global South secure a voice and a share of value in the AI age, or did it merely re‑inscribe the old pattern in which the periphery supplies data and labour while the core controls the machines that matter?
The answer will depend less on what was said at Bharat Mandapam than on what is built—and who owns it—once the cameras have gone.