
Weaponizing Illusion: The Israel(US)-Iran War Deepfake Campaign and the Global Crisis of Information Integrity in Conflict Zones

Executive Summary

Synthetic Warfare: Deepfakes, Disinformation, and the New Information Landscape of Armed Conflict

The ongoing U.S.-Israeli military campaign against Iran, which began in early March 2026, has inaugurated a new and deeply consequential era in the history of conflict communication.

Alongside the physical exchange of missiles, drones, and airstrikes, a parallel war is being waged in the digital domain — one where artificial intelligence (AI) generates synthetic images, videos, and audio that blur the boundaries between fact and fabrication.

The research firm Cyabra (see the cautionary note below) has documented a pro-Iran disinformation campaign that generated over 145 million views and more than nine million interactions across social media platforms in a matter of days.

The campaign deployed tens of thousands of fake accounts synchronized to disseminate AI-generated deepfakes portraying Iran as victorious, its adversaries as weakened, and its cause as legitimate.

The consequences of this synthetic media offensive extend far beyond immediate military optics.

They strike at the epistemological foundations of democratic discourse, challenge the credibility of legitimate journalism, and expose the catastrophic failure of existing governance frameworks to contain the weaponization of AI.

This FAF article undertakes a comprehensive scholarly analysis of deepfake warfare, tracing its historical antecedents in war propaganda, examining its current operational form in the Iran conflict, analyzing its key mechanisms and consequences, and assessing the prospects for effective governmental, technological, and institutional response.

Origin of Cyabra

Cyabra (Cyabra Strategy Ltd. / Cyabra Tech) is an Israeli company, headquartered in Tel Aviv, Israel.

It was founded in 2018 (some sources say 2017) by veterans of Israeli intelligence units.

Its main operations and most of its roughly 67 employees are based in Israel, though it maintains a smaller office in New York, USA.

Cautionary Note on the Cyabra Report

Readers are strongly advised to approach the Cyabra report with caution.

Cyabra functions as a pro-Israeli propaganda apparatus that specializes in deepfake technology.

FAF has no affiliation whatsoever with Cyabra and does not endorse, support, or validate its reports or findings in any manner.

Introduction: The Digital Battlefield and the Crisis of Truth

When Pixels Become Propaganda: How Deepfakes Are Rewriting the Rules of Modern Warfare in Real Time

When the U.S. and Israeli forces launched coordinated strikes against Iran's nuclear infrastructure, military assets, and leadership in early March 2026, the kinetic war was accompanied almost immediately by an information war of unprecedented scale and technological sophistication.

Deepfakes — AI-generated videos, images, and audio engineered to appear authentic — flooded social media platforms including X (formerly Twitter), Facebook, TikTok, and Instagram within the first days of the conflict.

The synthetic content ranged from spectacular fabrications (massive explosions in Tel Aviv, precision missile strikes on U.S. naval vessels in the Persian Gulf, ecstatic Iranian crowds celebrating military victories) to subtler, emotionally manipulative pieces, such as fabricated footage of children playing moments before a real American airstrike on the Shajarah Tayyebeh elementary school that killed at least 175 people, most of them children.

The video was fake, but the tragedy it referenced was real — a sinister conjunction that illustrates the deepest danger of deepfake warfare: not merely inventing events, but distorting and colonizing real ones.

What distinguishes this conflict from prior information environments is not the novelty of war propaganda as a concept, but the industrial scale, technical realism, and near-zero production cost with which AI has endowed state and non-state disinformation actors.

BBC Verify correspondent Shayan Sardarizadeh described this conflict as "the first instance of a significant global confrontation where we observed more misinformation created through AI than through traditional methods," marking "a new epoch in the utilization of AI-generated content."

The implications are seismic. Cognition itself has become the landscape of contest.

Deepfakes are being used as precision weapons to mold perceptions, obscure verifiable facts, and produce what scholars call "epistemic ambiguity" — a condition in which audiences are rendered incapable of distinguishing reality from fabrication with any reliable confidence.

History and Current Status: From Ancient Deception to the AI Age

The Invisible Front: AI-Generated Disinformation and the Collapse of Epistemic Security in the Iran War

The use of deception and disinformation in warfare is as old as organized conflict itself.

From the Trojan Horse to the fabricated Zinoviev letter in 1924, from British naval deceptions in World War II to the KGB's Operation INFEKTION — which falsely planted the story that the United States had engineered the AIDS virus as a biological weapon — governments have always understood that controlling the narrative of war is as important as controlling territory.

The 20th century industrialized propaganda on an unprecedented scale. During World War I, the British government fed fabricated stories of German atrocities to international wire services, including the American press, with calculated intent.

Nazi Germany's Joseph Goebbels elevated propaganda to a science of mass psychological manipulation, while the U.S. Office of War Information worked in parallel to construct a heroic, morally legible narrative of the Allied cause for domestic and international consumption.

The Cold War transformed disinformation into a strategic tool of statecraft.

Soviet-sponsored influence campaigns — forerunners of today's computational propaganda — planted false stories in Third World newspapers, funded front organizations, and seeded ideological narratives designed to fracture Western alliances.

What has changed in the digital age, and most acutely in the AI era, is not the intent behind disinformation but the infrastructure enabling it.

The internet democratized content production; social media eliminated editorial gatekeepers; and now generative AI has removed even the technical skill barrier from the creation of highly convincing fabricated media.

In 2023, an estimated 500,000 deepfakes were shared online. By 2025, that figure had reached eight million, a 1,500% increase in two years.

The Iran-U.S.-Israel war of 2026 represents the moment these technological and geopolitical trajectories converged with maximal force. In earlier conflicts — Russia's war in Ukraine, the Israel-Hamas war of 2023-2024 — deepfakes and recycled footage had already begun to feature prominently.

The Institute for Strategic Dialogue found that in the earlier Israel-Hamas conflict, 34 deepfake posts alone garnered 37 million views.

But analysts widely agree that the 2026 Iran conflict has crossed a qualitative threshold.

AI-generated content now constitutes a larger share of the disinformation ecosystem than content produced through traditional manipulation methods.

The technological capability to produce photorealistic, cinematically convincing synthetic video has moved from the realm of Hollywood studios to the mobile devices of influence operation personnel — many of them, in the case of Iran, operating as part of a centralized, state-directed campaign.

Key Developments: The Architecture of Iran's Disinformation Operation

Truth as the First Casualty: Deepfakes, Influence Operations, and the Urgent Need for a Global Response Framework

The investigation conducted by Cyabra (see the cautionary note above), published and reported internationally on March 18, 2026, constitutes the most detailed forensic portrait of Iran's current disinformation apparatus.

The report documented that Iran deployed tens of thousands of fake social media accounts to disseminate AI-generated videos and images, achieving over 145 million views and over 9 million interactions within a compressed timeframe.

The fake profiles displayed unmistakable signs of coordinated inauthentic behavior, including centralized keyword distribution, synchronized posting schedules, and repetitive content sharing patterns.
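To make those forensic signals concrete: one standard way to surface coordinated posting is to look for accounts that repeatedly publish near-identical text within tight time windows. The following is a minimal sketch of that idea in Python, assuming a simple feed of (account, timestamp, text) records; the normalization, window size, and threshold are illustrative assumptions, not Cyabra's actual methodology.

```python
from collections import defaultdict
from itertools import combinations

def coordination_pairs(posts, window_secs=300, min_shared=3):
    """Flag account pairs that repeatedly share identical text within a
    short time window: one crude signal of coordinated posting.

    posts: iterable of (account_id, unix_timestamp, text) tuples.
    Returns {(account_a, account_b): shared_count} for pairs that
    co-posted the same normalized text at least `min_shared` times.
    """
    # Bucket posts by (normalized text, coarse time window).
    buckets = defaultdict(set)
    for account, ts, text in posts:
        key = (text.strip().lower(), int(ts // window_secs))
        buckets[key].add(account)

    # Count how often each pair of accounts lands in the same bucket.
    pair_counts = defaultdict(int)
    for accounts in buckets.values():
        for a, b in combinations(sorted(accounts), 2):
            pair_counts[(a, b)] += 1

    return {pair: n for pair, n in pair_counts.items() if n >= min_shared}
```

Production systems layer many more features on top of a signal like this, including keyword-distribution analysis, account-creation clustering, and network structure, but the underlying logic of co-occurrence at improbable frequency is the same.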

The campaign's singular strategic objective was to portray Iran as militarily dominant, its enemies as demoralized and defeated, and the U.S.-Israeli operation as an act of criminal aggression — rather than a targeted campaign against nuclear and military infrastructure.

The specific content of the deepfakes was calibrated for multiple audiences. For domestic Iranian consumption, the videos provided reassurance that the Islamic Republic was striking back effectively — critical for regime legitimacy at a moment of potential vulnerability.

For international audiences, particularly in the Global South and among Muslim-majority populations, the fabrications were designed to delegitimize the U.S. and Israeli operations and generate sympathy for Iran.

Some of the most sophisticated pieces involved AI-generated footage of supposed missile strikes on U.S. warships in the Persian Gulf, several carrying the watermarks of generative AI platforms — in at least one documented case, Google's Veo platform — because content creators had apparently forgotten to remove these embedded markers of inauthenticity.

Other synthetic videos showed protests in Tehran purportedly supporting Israel, falsely suggesting widespread popular dissent against the Iranian government — content circulated by pro-Israel actors as counter-propaganda in the same information landscape.

Among the most troubling developments in this information landscape has been the deployment of deepfakes targeting third-country political leaders.

Deepfake videos circulated on social media purported to show Indian Prime Minister Narendra Modi declaring Iran a "terrorist regime" and pledging solidarity with Israel, while threatening Pakistan from the podium of a fictitious summit.

The account responsible was subsequently withheld in India following legal action, but the content had already spread internationally.

The Israeli government's own daily intelligence updates from March 15, 2026, documented emerging false narratives, including claims that Prime Minister Netanyahu had been killed and was "appearing via AI deepfakes," allegations of a downed U.S. refueling aircraft, and fabricated casualty figures, each designed to generate confusion and undermine the perceived strategic credibility of the U.S.-Israeli coalition.

Latest Facts and Concerns: Scale, Realism, and the Epistemological Dimension

From the Battlefield to Your Feed: How Synthetic Media Is Now the Most Dangerous Weapon in Geopolitical Conflicts

What has most alarmed researchers, intelligence analysts, and communication scholars is not merely the quantity of deepfakes being produced but the quality.

AI researchers, commenting on the Iran war's information environment, stated unequivocally that "we have reached a level of realism in video, audio, and image deepfakes that for most people, it is not discernible from fact."

The release of platforms like OpenAI's Sora 2, with its dramatically enhanced video realism, has materially accelerated this trajectory.

This is not a marginal upgrade in deceptive capability; it represents a fundamental shift in the epistemological conditions under which audiences consume and evaluate media.

The concept of the "liar's dividend" — coined by legal scholars Bobby Chesney and Danielle Citron — is now being tested at operational scale.

The liar's dividend describes a condition in which the mere existence of deepfake technology enables the dismissal of authentic footage as fabricated.

In the current conflict, Iranian officials have already deployed this strategy, dismissing verified evidence of destroyed nuclear facilities and military installations as AI-generated Western propaganda.

This cognitive corruption operates in two directions simultaneously: fabrications are made to appear real, and real events are made to appear fabricated.

The cumulative result is an information environment so saturated with uncertainty that meaningful public discourse about the legitimacy, conduct, and consequences of military action becomes extraordinarily difficult to sustain.

Social media platforms have demonstrated a deeply inadequate response to this crisis.

Despite longstanding commitments to remove coordinated inauthentic behavior, X (formerly Twitter), TikTok, and Facebook have all struggled to contain the torrent of synthetic content.

The European Commission fined X €120 million in late 2025 for breaching Digital Services Act transparency rules — a signal that regulatory patience is exhausted but enforcement mechanisms remain insufficient for real-time crisis management.

TikTok's algorithmic architecture, which rewards emotionally engaging and visually spectacular content regardless of veracity, has made it particularly fertile ground for deepfake propagation.

The fundamental tension between engagement-maximizing platform design and the imperatives of epistemological health has never been more starkly exposed.

Cause-and-Effect Analysis: How Deepfakes Shape the Strategic Landscape

One Hundred Forty-Five Million Views of Nothing: The Anatomy of Iran's AI Disinformation Machine in 2026

The causal chain linking AI-generated disinformation to strategic outcomes in the Iran conflict operates across several interconnected dimensions.

At the most immediate level, the saturation of social media with deepfakes showing Iranian military successes has a measurable effect on popular opinion in Muslim-majority countries, amplifying anti-American and anti-Israeli sentiment, complicating diplomatic management of regional alliances, and emboldening pro-Iranian political actors from Lebanon to Pakistan.

The 145 million views generated by the documented Cyabra campaign represent an audience larger than the population of most medium-sized nations — and this represents only the portion of the campaign that Cyabra was able to forensically document.

The true scale of synthetic content circulating about the conflict almost certainly far exceeds these figures.

At the operational military level, deepfakes create what analysts describe as "escalation management hazards."

When political leaders and military commanders are operating in an environment in which false reports of sunk warships, destroyed bases, and senior leadership deaths circulate with near-instantaneous reach and cinematic plausibility, the probability of miscalculation increases substantially.

A military commander or political decision-maker who acts on fabricated intelligence — or who is pressured to respond to public outrage generated by a fabricated atrocity — can inadvertently escalate a conflict in ways that no rational security analysis would sanction.

The 2026 Iran conflict has illustrated this hazard in real time: fabricated claims of U.S. attacks on civilian infrastructure circulated on TikTok within hours of real airstrikes, generating street protests in at least three countries before fact-checkers could even begin to assess their authenticity.

At the deepest structural level, the cumulative effect of conflict-related deepfakes erodes the epistemic foundations of democratic accountability.

Democratic societies depend on the ability of their citizens to form accurate judgments about the actions of their governments in war.

If the information environment is systematically polluted by synthetic media — if citizens cannot distinguish real atrocities from fabricated ones, genuine military setbacks from invented victories — the public's ability to exercise meaningful oversight of military and foreign policy collapses.

This represents an existential threat not merely to the immediate management of the Iran conflict but to the long-term legitimacy of democratic institutions in societies that are increasingly shaped by algorithmically curated, synthetic information ecosystems.

The effects are not limited to the populations directly targeted by Iran's campaign.

Research on Russia's use of AI in disinformation operations — including deepfake campaigns aimed at undermining EU institutions, fabricating Ukrainian government corruption, and shaping electoral outcomes in Western democracies — demonstrates that the deepfake-as-geopolitical-weapon model is being adopted and refined by multiple authoritarian states simultaneously.

This is not an Iranian problem, nor a Middle Eastern problem; it is a structural feature of the contemporary information landscape that any state with AI tools, a social media presence, and a strategic grievance can exploit.

Future Steps: Building a Coordinated Response Architecture

The Liar's Dividend: How AI-Generated Fakes Are Reshaping Public Opinion and Undermining Democratic Accountability in the Age of War

The inadequacy of existing responses to the deepfake crisis is by now beyond serious dispute.

What is required is a fundamentally different architecture of governance, technology, and international coordination — one that operates at the speed of the threat rather than the pace of conventional regulatory processes.

At the technological level, the most promising immediate development is the emergence of provenance and watermarking standards.

The Coalition for Content Provenance and Authenticity (C2PA), backed by Adobe, Microsoft, Google, and other major technology companies, has developed technical standards for embedding cryptographic metadata into digital content at the point of creation — allowing platforms and end-users to verify whether a piece of media has been AI-generated or digitally manipulated.

For these standards to become effective at scale, their adoption must be mandated rather than voluntary.

Social media platforms must be legally required to display provenance data prominently, and AI content generation tools must be required to embed it automatically.
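To make this concrete at the file level: in JPEG images, C2PA manifests travel inside APP11 (JUMBF) segments. The sketch below, a simplified heuristic in Python, only detects whether such a segment appears to be present; actual verification of the cryptographic signatures and hash bindings requires a full C2PA implementation, which this is not.

```python
import struct

def find_c2pa_segments(path):
    """Heuristically scan a JPEG for APP11 segments mentioning 'c2pa'.

    The presence of the ASCII label 'c2pa' in an APP11 (JUMBF) segment
    hints that a provenance manifest is embedded, but it is NOT proof of
    authenticity: that requires validating the manifest's signatures.
    Simplified parsing: assumes well-formed segments before start-of-scan.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":          # SOI marker: not a JPEG otherwise
        raise ValueError("not a JPEG file")
    hits = []
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:              # lost sync: bail out of the sketch
            break
        marker = data[i + 1]
        if marker == 0xDA:               # SOS: compressed image data follows
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:   # APP11 JUMBF segment
            hits.append((i, length))
        i += 2 + length
    return hits
```

A mandated regime would push this kind of check, plus full signature validation, into the upload pipeline of every major platform, so that missing or invalid provenance could be surfaced to users by default.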

The UK government's February 2026 initiative to develop a world-first deepfake detection evaluation framework, in collaboration with Microsoft and other major technology companies, represents a significant institutional step.

The framework will test leading detection technologies against real-world threats and establish industry-wide detection standards.

India's amendment to its Information Technology Rules in February 2026, mandating the removal of deepfake content within three hours of a government or court order and requiring all AI-generated content to be clearly labeled and carry digital identifiers, represents a model for swift national-level regulatory action.
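Stripped to its core, an evaluation framework of the kind the UK initiative describes scores candidate detectors against labeled authentic and synthetic media and reports their error rates at a chosen operating point. A minimal sketch, with illustrative names and a placeholder threshold:

```python
def evaluate_detector(scores, labels, threshold=0.5):
    """Compute basic operating metrics for a deepfake detector.

    scores: detector outputs in [0, 1], higher = more likely synthetic.
    labels: ground truth, 1 = synthetic, 0 = authentic.
    """
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    return {
        "recall": tp / (tp + fn) if tp + fn else 0.0,               # fakes caught
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,  # real media wrongly flagged
        "precision": tp / (tp + fp) if tp + fp else 0.0,
    }
```

The hard part of such a framework is not the arithmetic but the benchmark itself: detectors that look strong against last year's generators routinely fail against this year's, which is why any standard must be continuously refreshed against real-world threats.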

In the United States, the TAKE IT DOWN Act of May 2025 — the first federal law directly restricting harmful deepfakes — established criminal penalties for the non-consensual distribution of synthetic intimate images, though its scope remains narrow relative to the national security dimensions of deepfake warfare.

Legislators are currently considering more expansive measures including the Protect Elections from Deceptive AI Act and the NO FAKES Act, each of which would extend federal jurisdiction over the malicious deployment of synthetic media.

At the international level, what is urgently needed but does not yet exist is a multilateral treaty framework governing the use of AI-generated disinformation as an instrument of state policy.

The deployment of deepfakes by a state to shape the information environment of another state's population during armed conflict constitutes, at minimum, an act of information aggression and, arguably, a violation of the principle of non-intervention in the internal affairs of sovereign states.

Existing international law — developed in an era of analog conflict and broadcast media — provides no adequate framework for adjudicating such acts, attributing state responsibility, or authorizing proportionate responses.

The development of such a framework, analogous to existing treaties governing the weaponization of chemical or biological materials, must become a diplomatic priority for the international community.

Platform companies bear their own substantial share of responsibility, and their obligations extend beyond compliance with national regulatory frameworks.

The algorithmic architectures that govern content ranking on TikTok, X, and other platforms are structured to maximize engagement — and engagement, as decades of research have demonstrated, is maximized by emotionally arousing, identity-confirming, and visually spectacular content.

AI-generated war deepfakes fulfill all three of these criteria with unprecedented efficiency.

Platforms must accept that their algorithmic designs are not neutral technical choices; they are political decisions with geopolitical consequences, and they must be restructured accordingly, through the mandatory deprioritization of unverified content during declared conflict periods, the development of real-time AI detection layers, and the deployment of rapid-response fact-checking partnerships with established journalistic organizations.
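What "mandatory deprioritization of unverified content" could mean mechanically is easy to illustrate. The following deliberately simplified re-ranking rule is hypothetical; the fields, multipliers, and policy trigger are placeholders, not any platform's actual ranking system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    engagement_score: float   # the platform's usual ranking signal
    has_provenance: bool      # e.g., a valid C2PA manifest is attached
    flagged_synthetic: bool   # tripped an AI-detection layer
    conflict_related: bool    # matches a declared-conflict topic list

def rank_score(post: Post, conflict_mode: bool) -> float:
    """Demote unverified conflict media during a declared conflict period,
    and demote content flagged as synthetic much further. The multipliers
    are arbitrary placeholders for values a policy regime would set."""
    score = post.engagement_score
    if conflict_mode and post.conflict_related:
        if not post.has_provenance:
            score *= 0.5      # unverified media loses half its reach
        if post.flagged_synthetic:
            score *= 0.1      # likely deepfakes are heavily suppressed
    return score
```

The point of the sketch is that the lever already exists: ranking is a scalar the platforms already compute for every post, and policy can attach conditions to it.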

Meta's Oversight Board has already called for an urgent review of the company's content moderation policies to address AI-generated deepfakes during armed conflicts. This review must yield concrete structural reforms, not further consultation.

The media literacy dimension of the response is equally critical and equally neglected.

A population equipped with the conceptual tools to critically evaluate digital media — to ask instinctively where a video originated, who benefits from its circulation, what technical markers might indicate synthetic generation — is fundamentally more resistant to deepfake manipulation than one that receives and processes digital content as a passive consumer.

Governments, educational institutions, and civil society organizations must invest at scale in digital media literacy programs, with particular attention to demographic groups — adolescents, populations with low media literacy baselines, communities with high social media dependence — that research consistently identifies as most vulnerable to synthetic media manipulation.

The Deeper Stakes: Epistemic Security as a National Security Imperative

The analytical literature on information security has traditionally focused on the confidentiality, integrity, and availability of data — the classic triad of cybersecurity.

The deepfake crisis requires the introduction of a fourth dimension: epistemic security, defined as the ability of individuals and societies to form accurate beliefs about the world on the basis of reliable information.

This is not merely a philosophical abstraction.

A society whose epistemic security has been degraded — whose members cannot agree on basic facts about ongoing military operations, government decisions, or international events — is a society whose capacity for democratic self-governance has been materially compromised.

In this sense, deepfake warfare is not merely an adjunct to physical warfare; it is an assault on the constitutional and political order of targeted societies, conducted through the manipulation of their shared informational commons.

The Iran conflict of 2026 has provided a sobering demonstration of how rapidly and effectively a state actor can exploit generative AI to degrade the epistemic security of adversaries, third-country populations, and its own citizens simultaneously.

The Cyabra report's finding that Iranian-linked deepfakes and disinformation generated 145 million views in a brief period should be read not as a discrete intelligence data point but as a signal of systemic vulnerability — a vulnerability that will only deepen as AI generation tools become more accessible, more realistic, and more deeply integrated into the everyday media consumption habits of global populations.

The strategic logic of deepfake warfare from Iran's perspective is entirely coherent.

A state facing the overwhelming conventional military superiority of the United States and Israel has every rational incentive to compete in the information domain, where the cost of production is negligible, the potential for strategic impact is enormous, and attribution and accountability remain, for now, practically and legally ambiguous.

This asymmetric logic — the use of information operations to compensate for conventional military disadvantage — is not unique to Iran; it has been the operating model of Russian, Chinese, and North Korean information operations for decades.

What is new, with generative AI, is the scale, speed, and plausibility with which it can now be executed by any state actor and, increasingly, by well-resourced non-state actors as well.

Conclusion: The Moment Demands Institutional Courage

Governing the Ungovernable: Why Governments and Tech Platforms Are Losing the War Against Deepfakes

The deepfake crisis surrounding the Iran war of 2026 is not primarily a technological problem. Technology is, of course, central to both the challenge and the response.

But the core failure is institutional: the failure of governments to establish enforceable international norms against the weaponization of synthetic media; the failure of platform companies to restructure algorithmic incentives that systematically reward the most engaging and therefore often the most dangerous content; the failure of educational systems to equip citizens with the critical tools required to navigate a synthetic information environment; and the failure of the global community to recognize epistemic security as a dimension of national security that demands the same level of institutional investment and political will that governments apply to physical and cybersecurity.

The deepfakes now circulating about the Iran war will shape the perceptions of tens of millions of people who will never read a fact-check, never encounter a credible correction, and never have reason to question the emotional certainty that a skillfully constructed synthetic video can produce.

That is the measure of the challenge.

Meeting it requires not incremental improvements to existing content moderation systems but a fundamental reimagining of the governance structures, technological standards, platform designs, and educational systems through which democratic societies manage their shared relationship with information.

The cost of failure is not merely a degraded information environment. It is the systematic dismantling, one synthetic pixel at a time, of the epistemic commons on which democratic self-governance depends.
