The escalation of direct hostilities between Iran, Israel, and the United States since Operation Epic Fury began has turned the Middle East into a dual battlefield of kinetic strikes and relentless information warfare. The conflict, which opened with joint U.S.-Israeli assaults on Iranian targets on February 28, has quickly become known as the first major AI-driven war, with disinformation and misinformation flooding social media at an unprecedented scale. All major players have weaponized false or misleading content to shape narratives, boost domestic morale, demoralize opponents, and sway global opinion. This article analyzes how the warring parties have used AI to shape those narratives.
The conflict intensified rapidly after the initial strikes, which targeted Iranian military sites, nuclear facilities, and leadership, including the reported killing of Supreme Leader Ali Khamenei. Iran responded with barrages of missiles and drones aimed at Israel and U.S. assets in the region. Yet alongside these physical exchanges, a sophisticated information campaign has unfolded online. Iranian state media and aligned networks released at least 18 documented false claims of battlefield victories in the early weeks, often supported by AI-generated deepfakes or recycled footage. These included fabricated videos showing missiles striking Tel Aviv, the USS Abraham Lincoln aircraft carrier ablaze or sinking, and U.S. troops captured and paraded. Such content amassed hundreds of millions of views across platforms like X, TikTok, and Instagram, with some pro-Iranian accounts generating over a billion impressions collectively.
Furthermore, Iran has leveraged cheap and accessible AI tools to produce realistic deepfakes and manipulated images. Examples range from cartoonish Lego-style animations mocking U.S. President Donald Trump and Israeli leaders to videos depicting widespread destruction in Israeli cities or celebrations in Tehran. Russia and China amplified these narratives through state media and bot networks, exploiting global anti-war sentiments and deflecting attention from Iran’s reported losses. Domestically, Iranian authorities have imposed near-total internet blackouts, reducing connectivity to minimal levels, and arrested individuals for sharing unapproved information, thereby filling the vacuum with regime-controlled propaganda. This asymmetrical approach aims to project strength despite conventional military disadvantages and to erode support for the U.S.-Israeli campaign among Western audiences.
In parallel, Israel has employed stricter information controls to manage the narrative and protect operational security. The IDF Military Censor has restricted live broadcasts of city skylines during attacks, prohibited detailed reporting on strike locations, and limited journalist access in sensitive areas. Pro-Israel accounts have occasionally recirculated older footage or shared content suggesting heightened Iranian internal dissent, sometimes blurring into selective framing. These measures have drawn criticism for reducing transparency, yet they have focused primarily on countering exaggerated Iranian claims and highlighting the defensive nature of operations rather than mass-producing fabrications.
Meanwhile, the United States has concentrated on rebutting adversary disinformation while facing its own domestic challenges. President Trump has publicly labeled Iranian AI-generated content a "disinformation weapon" and has criticized media outlets for amplifying unverified reports, at times threatening broadcasting licenses over critical coverage. In some cases, official channels have shared videos that mixed real strike footage with clips from video games or movies, prompting accusations that they were blurring the line between factual reporting and fabrication. U.S. officials have emphasized exposing Iranian propaganda and maintaining alliance cohesion, though domestic debates over press freedom have intensified amid the conflict.
Additionally, proxies and broader ecosystems have contributed to the deluge. Remnants of Iranian-aligned networks, along with Russian and Chinese amplification, recycled tropes from earlier phases of regional tensions, adapting them to the 2026 events. Fact-checkers have documented an “astonishing” volume of AI-generated material, including fake images of strikes on U.S. bases in the Gulf, burning embassies, and captured soldiers. Engagement-driven creators have monetized viral fakes, further complicating efforts to separate truth from fiction in real time.
Consequently, the impacts have been profound. Disinformation has fueled polarization, sparked protests, and complicated public understanding of events on the ground, such as civilian casualties from strikes, including a deadly incident at an elementary school in Minab. It has sustained regime legitimacy inside Iran while sowing doubt internationally about the justification and conduct of the campaign. At the same time, the sheer volume of fabricated content risks narrative fatigue, in which audiences grow skeptical of all sources and retreat into partisan echo chambers.
In conclusion, these incidents underscore how disinformation has evolved into a core strategic tool in modern warfare. Iran has pursued the most aggressive offensive use of AI and proxies to compensate for military asymmetries, while Israel and the U.S. have leaned toward defensive controls and rebuttals. As the conflict continues, the erosion of shared truth not only hinders de-escalation but also raises urgent questions about media literacy, platform responsibility, and independent verification in an era where fabricated realities can influence outcomes as powerfully as missiles themselves. Enhancing these safeguards will prove essential to mitigate the long-term damage from this shadow battlefield.