Democracy, Sovereignty, Authenticity: Modi Outlines India’s Civilisational AI Roadmap: Keynote by Prime Minister Narendra Modi at the India AI Impact Summit 2026 - Part III
Executive Summary
From Deepfakes To Global Commons: Modi’s High‑Stakes AI Doctrine Unveiled
Prime Minister Narendra Modi’s keynote address at the India AI Impact Summit 2026 in New Delhi seeks to situate artificial intelligence within a wider struggle over power, trust and development rather than treating it as a purely technical phenomenon.
He presents AI as a “transformative chapter in human history”, comparable in its dual potential to nuclear power, and insists that the central question is no longer what AI might someday be able to do, but what humanity chooses to do with it now.
His speech advances three interlocking ideas: a normative doctrine for AI governance labelled the MANAV vision; a political‑economic project of “democratising” AI for the Global South; and a call to rebuild trust in the digital sphere through authenticity labels and child‑safe AI spaces.
The MANAV framework – Moral and Ethical Systems, Accountable Governance, National Sovereignty, Accessible and Inclusive, Valid and Legitimate – functions as both a domestic and diplomatic template.
Domestically, it is meant to anchor the integration of AI into India’s digital public infrastructure, welfare delivery and security apparatus.
Internationally, it underwrites India’s bid to become a voice for emerging economies in global AI governance debates, from safety standards to data sovereignty.
Modi couples this doctrine with a call for a “global common good” approach to AI, arguing that frontier capabilities and compute must not remain the preserve of a handful of countries and corporations.
At the same time, the address is acutely concerned with legitimacy.
Modi highlights the destabilising effect of deepfakes and synthetic content on democracies and open societies, proposing authenticity labels for online content akin to nutrition labels on food packages.
He calls for internationally accepted watermarking and provenance standards, and emphasises the need for a curated, family‑guided AI environment for children.
These proposals sit alongside a promotional narrative in which India’s diversity, demography and democracy make it a natural “testbed” and hub for AI, and in which global firms are invited to “design and develop in India, deliver to the world, deliver to humanity”.
Viewed analytically, the keynote attempts to align four agendas: the expansion of India’s digital‑development model, the attraction of large‑scale investment in AI infrastructure, the consolidation of political authority over contested information spaces and the projection of India as a normative power in debates over AI risk and opportunity.
The tensions between these goals – especially between democratisation and centralised control, openness and sovereignty, rapid deployment and caution – define the real content of Modi’s intervention.
Introduction
AI As Civilisational Crossroads And Geopolitical Instrument
Modi situates artificial intelligence within a long arc of technological change that includes the harnessing of atomic energy. He draws an explicit analogy: nuclear power gave humanity tools that could either illuminate cities or obliterate them; AI, he suggests, compresses comparable promise and peril into code and silicon.
This framing is not merely rhetorical. It locates AI squarely within the domain of high politics, where questions of war, peace, inequality and sovereignty are central.
By insisting that the “real question is not what AI can do in the future, but what humanity chooses to do with AI in the present”, Modi rejects a deterministic view of technological progress.
AI, in his telling, is an open field of political choice. If it is left “directionless”, it will amplify disruption and destruction; if steered properly, it will become a source of solutions to long‑standing problems in development, welfare and governance.
That dichotomy justifies both an activist state and intensified multilateral engagement on AI governance.
This civilisational framing is closely tied to geopolitics.
The India AI Impact Summit 2026, with over 20 heads of state, 60 ministers and 500 global AI leaders gathered in New Delhi, is presented as a moment when the Global South claims a seat at the table in shaping AI’s rules.
Modi emphasises that it is “a matter of pride for India and the Global South” that such a summit is hosted in New Delhi, and that India sees in AI not only a set of tools but “the blueprint of tomorrow”.
AI thus becomes an arena in which India can escape the periphery and act as a convening middle power between established technological hubs.
History and Current Status
From Digital Public Goods To The MANAV Doctrine
Modi’s keynote can only be understood against the backdrop of India’s digital transformation.
Over the past decade, the Indian state and allied private actors have constructed a layered digital infrastructure: Aadhaar for biometric identity, UPI for instant payments, DigiLocker for documents and Ayushman Bharat for health records.
This architecture, known as the India Stack, has underpinned an explosion in digital transactions, enabling more than 20 billion UPI payments per month and the distribution of welfare benefits directly to bank accounts.
This digital public goods model has been central to India’s development and foreign policy story. It allows the government to claim that it delivers services to hundreds of millions at scale and low cost, and it provides a template that India exports to partner countries in Africa and Asia.
By 2024, this story had been extended into the AI domain through the IndiaAI Mission, which aims to build national compute infrastructure, sovereign models, open datasets, skilling programmes and startup finance.
Modi’s MANAV doctrine emerges at this juncture as a governance overlay for a rapidly expanding AI layer atop the digital stack.
MANAV stands for Moral and Ethical Systems, Accountable Governance, National Sovereignty, Accessible and Inclusive, Valid and Legitimate.
Each element speaks to a particular pressure point. Moral and ethical systems respond to growing unease over opaque, biased or dehumanising algorithmic decisions in welfare, policing and finance.
Accountable governance seeks to preserve some line of responsibility when automated systems are introduced into public administration. National sovereignty addresses concerns that critical datasets and compute might be captured by foreign firms or subject to extraterritorial regulation.
Accessible and inclusive reflects the development imperative to extend AI benefits beyond urban elites. Valid and legitimate stresses scientific rigour and some degree of democratic consent for AI deployments.
In current practice, these principles intersect with a rapidly growing ecosystem.
India is building what Modi calls a “resilient ecosystem ranging from semiconductors and chip‑making to quantum computing”.
Secure data centres, a strong IT backbone and a dynamic startup scene, he argues, position India as a natural hub for “affordable, scalable and secure AI solutions”.
The state is adding more GPUs to support innovators, providing computing power for startups at subsidised rates and pooling thousands of datasets and hundreds of AI models as national resources under an AI fund.
The keynote also reflects India’s self‑conception as a testbed.
Modi’s repeated claim that any AI model that “succeeds in India can be deployed globally” rests on the idea that India’s diversity, demography and democracy form a stress‑test for AI. If systems can function across dozens of languages, uneven connectivity, complex bureaucracies and pluralistic politics, they are presumed robust elsewhere.
This is both an empirical bet and a diplomatic narrative designed to attract frontier firms and investment.
Key Developments Articulated In The Keynote
Modi’s address weaves together several concrete policy stances and proposals.
The most visible are the MANAV framework, the call for an “open sky” for AI under human command, the democratisation of AI as a global common good, the authenticity‑label proposal and the emphasis on child‑safe AI spaces.
The first development is the MANAV vision as an explicit doctrinal anchor.
Modi presents it as a governance blueprint, suggesting that the five elements will shape how India designs regulation, procurement and diplomatic positions on AI. It allows India to signal alignment with global concerns over ethics and safety while insisting on sovereignty and inclusion.
In diplomatic terms, MANAV functions as conceptual branding, positioning India as a country with its own AI governance vocabulary rather than merely adopting Western or Chinese framings.
The second is the “open sky, firm reins” metaphor. Modi insists that AI must be given an open sky – space for experimentation and innovation – but that “the reins must remain in our hands”.
This double image captures the tension between promoting rapid AI adoption and maintaining political control. It reassures innovators that India will not smother AI under premature bans while signalling to domestic and foreign audiences that the state will not relinquish ultimate authority.
The third development is the democratisation agenda. Modi calls for AI to become “a tool for inclusion and empowerment, particularly for the Global South” and describes India’s goal as the “democratisation of AI”.
He argues that AI will benefit the world only when it is shared, and emphasises open code and shared development so that “millions of young minds” can improve AI and make it safer.
This language dovetails with India’s existing advocacy of open, interoperable digital public goods and positions India against a world in which frontier models and compute are monopolised.
The fourth is the proposal for authenticity labels.
Confronted with a surge in deepfakes and synthetic media, Modi draws an analogy with the physical world: just as food carries nutrition labels, digital content should carry authenticity labels.
He calls for watermarking standards and “built‑in trust mechanisms from the beginning of AI development”, recognising that credibility will determine whether AI systems are socially sustainable.
In the keynote and subsequent coverage, this idea is tied to the fear that unmarked deepfakes could destabilise elections, incite violence or destroy reputations.
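The keynote does not specify a mechanism, but the general shape of such a label can be sketched. The snippet below is a minimal illustration in Python, using only standard‑library primitives, that hashes a piece of content, wraps the hash in a signed provenance manifest and lets a verifier confirm the content has not been altered since labelling. The field names and the shared signing key are hypothetical stand‑ins; a production standard (a C2PA‑style scheme, for instance) would rely on asymmetric signatures issued by accredited authorities rather than a shared secret.

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared signing key; a real scheme would use asymmetric
# signatures from an accredited issuing authority, not a shared secret.
SIGNING_KEY = b"demo-authority-key"

def make_authenticity_label(content: bytes, issuer: str, generator: str) -> dict:
    """Build a minimal provenance manifest for a piece of content."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "issuer": issuer,          # who attests to the content's origin
        "generator": generator,    # e.g. "camera" or "gen-ai-model-x"
        "issued_at": int(time.time()),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_authenticity_label(content: bytes, manifest: dict) -> bool:
    """Check that the content matches the manifest and the signature is valid."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and hashlib.sha256(content).hexdigest() == claimed["content_sha256"]
    )

if __name__ == "__main__":
    video = b"raw media bytes"
    label = make_authenticity_label(video, issuer="news-org.example", generator="camera")
    print(verify_authenticity_label(video, label))        # True: untouched content
    print(verify_authenticity_label(video + b"!", label))  # False: content altered
```

The point of the sketch is only that a label binds a specific piece of content to a named issuer in a tamper‑evident way; everything about who issues keys, which content must carry labels and how platforms display them remains a policy choice the speech leaves open.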
A fifth element is a focus on child safety. Modi argues that “just as the school syllabus is curated, the AI space must also be child‑safe and family‑guided”.
This positions AI alongside social‑media and content‑recommendation platforms as an environment that must be actively shaped to protect minors, rather than left to commercial logic.
It resonates with parallel initiatives in Europe and some Gulf states to shield children from certain kinds of digital exposure.
Finally, Modi uses the keynote to claim early successes and signal momentum. He notes that three Indian companies have launched AI models and apps at the summit itself, and that these demonstrate the talent of India’s youth and the depth of local solutions.
He frames India as the “centre of the world’s largest tech pool” and expresses confidence that “aspirational India” will play a major role in the global AI journey.
Latest Facts and Concerns
Highlighted or Implied In The Speech
The keynote is delivered against a background of rapid AI deployment and rising unease. Modi’s insistence on deepfake threats and authenticity labels reflects a reality in which generative AI tools can cheaply fabricate convincing video, audio and text, and in which such content has already begun to circulate in Indian political discourse.
The concern is not only about misinformation but about a more profound erosion of trust: if any image or recording can plausibly be dismissed as a fake, accountability itself becomes harder.
Modi’s emphasis on democratisation and open code comes at a time when frontier AI capabilities and compute are heavily concentrated in a small number of US‑based and Chinese firms, with European and some Gulf efforts trying to catch up.
Export controls on advanced chips, and the financial and energy demands of training large models, raise the risk that most countries will become dependent users rather than co‑creators.
By foregrounding shared development and AI as a global common good, the keynote gestures towards these structural inequalities.
At the same time, there are concerns that democratisation rhetoric may obscure domestic risks.
Civil‑society groups and some international observers warn that AI‑augmented surveillance, predictive policing and welfare‑eligibility systems can entrench existing biases and expand discretionary power in the hands of the state.
Modi’s strong endorsement of AI for governance and his celebration of India’s “resilient ecosystem” sit uneasily with these warnings. The MANAV principles and authenticity labels offer partial reassurance, but they do not in themselves resolve questions about transparency, redress and participation.
Another implicit concern is labour and distributional impact.
The keynote itself focuses more on opportunity than on disruption, but the summit’s broader context includes discussions on jobs, skills and economic transformation, and commitments to gather anonymised data on real‑world AI usage for evidence‑based policy.
The underlying issue is whether AI’s productivity gains will be widely shared or whether they will primarily accrue to those who own capital and infrastructure.
Cause‑and‑Effect Analysis
Open Sky, Firm Reins: Inside Modi’s India AI Summit Keynote
Aligning Development, Control And Status
Modi’s keynote can be read as an attempt to align several cause‑and‑effect chains that might otherwise diverge. At the core lies the economic logic of AI: high fixed costs of compute and data curation paired with low marginal costs of deployment.
This structure favours scale and tends toward concentration. Left alone, it would likely produce a world in which a few actors control the most powerful systems.
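A stylised way to see why this structure rewards scale: if F denotes an illustrative fixed cost of training and data curation and c the marginal cost of serving one query, the average cost per query for an operator handling N queries is

```latex
\frac{F}{N} + c \;\longrightarrow\; c \quad \text{as } N \to \infty,
```

so the operators with the largest user bases approach the marginal‑cost floor, a price at which smaller entrants cannot also recover their own fixed costs.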
India’s digital public goods history provides a counterexample in the payments and identity domains, where open, interoperable state‑anchored platforms have created competition and wide access.
Modi’s speech seeks to replicate this pattern in AI by framing it as a global common good, advocating open code and shared development, and building national infrastructure that others can plug into.
The cause is recognition of a structurally unequal AI landscape; the effect sought is a diffusion of AI capabilities through public‑spirited infrastructure and norms.
A second causal chain involves political authority and information integrity.
The spread of deepfakes and synthetic content threatens to fragment public spheres, making it harder for citizens to agree on basic facts. If unchecked, this could undermine trust in elections, institutions and media.
Modi’s authenticity‑label proposal and his call for global standards are responses to this threat.
The cause is AI‑driven manipulation and epistemic instability; the hoped‑for effect is a reconstitution of trust through technical markers and legal frameworks.
A third chain centres on sovereignty. The more essential AI becomes to welfare delivery, security and growth, the more dangerous dependence on foreign infrastructure and models appears.
Modi’s emphasis on national sovereignty within MANAV, his references to semiconductors, quantum computing and secure data centres, and his invitation to “design and develop in India” are all attempts to ensure that AI’s core levers remain at least partially under Indian jurisdiction.
The cause is fear of external chokepoints and extraterritorial control; the effect sought is strategic autonomy in AI.
A fourth set of linkages concerns international status. Hosting a summit that “brings together the who’s who of the AI world” allows India to claim convening power and to shape discourse. Modi’s keynote, with its MANAV branding and global common‑good rhetoric, aims to convert that convening power into normative influence.
The cause is India’s ambition to be recognised as a rule‑maker, not just a rule‑taker; the effect sought is a seat at the table whenever future AI rules, safety frameworks or trade arrangements are negotiated.
These chains are not automatically compatible. Democratisation can conflict with sovereignty when open models trained on global data become necessary for competitiveness. Authenticity labels and child‑safe curation can shade into censorship or paternalism if not carefully constrained.
Open skies for AI innovators can clash with precautionary pauses advocated by some safety researchers.
Modi’s keynote balances these tensions by asserting that human command will remain in charge, embedding openness within a sovereign frame and tying control measures to the protection of democracy and children.
Yet the underlying contradictions remain to be worked out in law, regulation, and practice.
Future Steps Implied by the Keynote
Modi’s MANAV Vision Recasts Global AI As Human‑Centric Strategic Project
Several future directions flow logically from Modi’s address and from the institutional decisions announced around it.
One is the consolidation of AI governance structures that operationalise MANAV. This will require translating each principle into legal standards, technical guidelines and oversight mechanisms: ethics reviews for public‑sector AI deployments, clear accountability for automated decisions, data‑localisation rules consistent with sovereignty, access‑targets and inclusion metrics, and validation protocols to ensure models meet context‑appropriate benchmarks.
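The keynote leaves all of this unspecified. Purely as an illustration of what operationalising MANAV might look like in administrative practice, the sketch below encodes a hypothetical pre‑deployment checklist for a public‑sector AI system; every field name and threshold is an invented placeholder rather than anything drawn from Indian regulation.

```python
from dataclasses import dataclass, field

@dataclass
class ManavAssessment:
    """Hypothetical pre-deployment checklist loosely mapped to the five MANAV pillars."""
    system_name: str
    ethics_review_passed: bool = False        # Moral and Ethical Systems
    accountable_owner: str = ""               # Accountable Governance
    data_stored_in_country: bool = False      # National Sovereignty
    supported_languages: list[str] = field(default_factory=list)      # Accessible and Inclusive
    benchmark_scores: dict[str, float] = field(default_factory=dict)  # Valid and Legitimate

    def unmet_conditions(self, min_languages: int = 2, min_score: float = 0.8) -> list[str]:
        """Return the list of unmet conditions; an empty list means cleared for deployment."""
        gaps = []
        if not self.ethics_review_passed:
            gaps.append("ethics review pending")
        if not self.accountable_owner:
            gaps.append("no named accountable officer")
        if not self.data_stored_in_country:
            gaps.append("data localisation requirement unmet")
        if len(self.supported_languages) < min_languages:
            gaps.append("insufficient language coverage")
        if not self.benchmark_scores or min(self.benchmark_scores.values()) < min_score:
            gaps.append("validation benchmarks below threshold")
        return gaps
```

Even a toy encoding like this makes the governance question concrete: someone must decide who signs off on each field, what evidence counts and what happens when a deployed system later fails one of the checks.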
A second direction is the development of technical and legal regimes for authenticity labels. Watermarking schemes, cryptographic provenance tools and platform‑level labelling standards will have to be designed, tested and coordinated internationally.
India is likely to push in multilateral forums for such standards, leveraging the summit’s momentum.
Domestically, it will need to decide how mandatory labels should be, which categories of content they cover and how to enforce them without chilling legitimate expression.
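Watermarking of machine‑generated text is one of the better‑studied building blocks here: the generator biases its word choices toward a pseudo‑random “green list” and a detector checks whether that bias is present. The sketch below is a deliberately simplified, word‑level caricature of that idea, not any scheme the keynote endorses; real systems operate on model token probabilities at generation time and use a formal statistical test rather than the fixed threshold assumed here.

```python
import hashlib

def green_fraction(words: list[str]) -> float:
    """Fraction of word transitions that land on the pseudo-random 'green list'
    seeded by the previous word; a watermarked generator would bias this upward."""
    if len(words) < 2:
        return 0.0
    green = 0
    for prev, word in zip(words, words[1:]):
        digest = hashlib.sha256(f"{prev}|{word}".encode()).digest()
        if digest[0] < 128:   # roughly half of all transitions are 'green' by chance
            green += 1
    return green / (len(words) - 1)

def looks_watermarked(text: str, threshold: float = 0.65) -> bool:
    """Crude detector: flag text whose green rate sits well above the ~0.5 chance level."""
    return green_fraction(text.lower().split()) > threshold
```

The policy questions the keynote raises sit on top of such mechanisms rather than inside them: whether detection results can be disclosed, who may run detectors at scale and what recourse exists when a label or detector is wrong.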
A third step is the scaling of AI infrastructure and open resources. Modi’s references to GPUs for innovators, national AI funds and shared datasets indicate an intention to build public‑anchored compute and data infrastructure.
In practice, this means long‑term investment in data centres, power grids and connectivity, coupled with governance mechanisms that keep such infrastructure from becoming monopolised. It also implies expanding programmes that provide start‑ups and researchers with affordable access to compute.
A fourth direction concerns social protection and skills. While Modi’s speech foregrounds opportunity, the summit’s broader commitments to evidence‑based policy on jobs and skills point toward the need for curriculum reforms, large‑scale reskilling and new safety nets.
Future steps will likely include integrating AI literacy into school and vocational curricula, creating modular training for workers in disrupted sectors and experimenting with new forms of income or employment support as automation spreads.
A fifth trajectory involves multilateral diplomacy. Modi’s invocation of AI as a global common good and his hosting of a Global South‑inclusive summit set the stage for India to champion initiatives at the UN, G20 and other forums: shared compute facilities for low‑income countries, open multilingual datasets, cooperative safety research and incident‑response mechanisms.
Whether such initiatives gain traction will depend on India’s ability to bridge between Western, Chinese, Gulf and Global South positions.
Finally, there is the domestic political dimension. Implementing MANAV and authenticity labels will demand new regulatory capacities, independent authorities and judicial engagement.
Tensions between security, efficiency and rights will intensify as AI is deployed in policing, welfare targeting and content moderation. How India balances these pressures will either confirm or undermine the keynote’s claim that AI in India will be human‑centric and democratic.
Conclusion
Prime Minister Narendra Modi’s keynote at the India AI Impact Summit 2026 is an ambitious attempt to place India at the centre of global debates on artificial intelligence while consolidating a domestic project of AI‑enabled development and state capacity.
By likening AI to nuclear power, he elevates it to the level of civilisational choice; by advancing the MANAV doctrine, he offers a governance vocabulary that promises ethics, accountability, sovereignty, inclusion and legitimacy.
His calls for democratised AI as a global common good, for authenticity labels to counter deepfakes and for child‑safe AI spaces respond to concrete anxieties over inequality, manipulation and social harm.
Yet the speech is also a balancing act. It seeks open skies for innovation while keeping the reins of control firmly in human – and in practice, governmental – hands. It advocates open code and shared development but embeds them within a sovereignty‑first frame.
It celebrates India as both a laboratory and a hub for frontier AI, even as frontier capabilities and compute remain concentrated elsewhere. The success of Modi’s vision will depend on whether India can institutionalise MANAV in ways that genuinely expand agency for citizens and other states, rather than simply legitimising more pervasive and opaque algorithmic power.
If India manages to turn this keynote into a sustained programme – with robust governance, equitable infrastructure, meaningful international cooperation and credible protections for rights – the 2026 summit may be remembered as a watershed in the construction of a more plural AI order.
If, however, the gap between rhetoric and practice widens, the speech will stand as a vivid illustration of how the language of democratisation and common goods can coexist with, and sometimes conceal, deepening concentrations of digital power.




