OPINION — Since U.S. and Israeli strikes began against Iranian military and nuclear infrastructure in late February, two wars have been running concurrently. One is kinetic. The other involves something the world has not fully reckoned with: the systematic use of artificial intelligence to fabricate reality, at scale, in real time, during active armed conflict.
Within days of the opening strikes, AI-generated video of missile impacts on the USS Abraham Lincoln was spreading across TikTok. Fabricated footage of downed U.S. fighter jets circulated on Facebook and Instagram. The Tehran Times published what appeared to be satellite imagery of a U.S. radar base in Qatar showing structural damage from the strikes. BBC Verify confirmed the image was AI-generated, built from real satellite data of a different location and manipulated using Google AI tools. None of it was real. All of it spread.
The social media intelligence firm Cyabra documented more than 145 million views of Iranian-linked disinformation content in under two weeks. The New York Times identified over 110 unique deepfakes promoting pro-Iran narratives in the same window. These are not the crude influence operations of a decade ago. They are the product of an adversary that has been building this capability methodically and has now deployed it at wartime scale.
Understanding why this matters requires a short detour through what Iranian propaganda actually used to look like.
During the Iran-Iraq War, Tehran's media strategy relied on radio broadcasts and print. Its efforts to persuade Iraqi Shia populations to shift allegiances were largely unsuccessful. Limited reach, poor targeting, no feedback loop. During the 1991 Gulf War, Iraq's disinformation was described by scholars as extreme exaggerations easily ridiculed in the Western press. Baghdad claimed it had shot down dozens of allied aircraft. The press verified it had not. That was the cycle.
The digital era brought sock puppets and recycled footage. These operations required significant human labor and were detectable with basic verification tools. An account posting video from the 2015 Syrian conflict while presenting it as current could be caught by reverse image search in minutes. The barrier to debunking was low.
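To make "basic verification tools" concrete: the workhorse behind reverse image search is perceptual hashing, which reduces a frame to a compact fingerprint that survives re-encoding and resizing, then compares fingerprints by Hamming distance. Here is a minimal sketch in Python, assuming the open-source Pillow and imagehash packages; the filenames and threshold are illustrative, not drawn from any real case.

```python
# Minimal sketch: flagging recycled footage via perceptual hashing.
# Assumes `pip install Pillow imagehash`; the frame files are
# hypothetical stand-ins for stills pulled from two videos.
from PIL import Image
import imagehash

archival = imagehash.phash(Image.open("archival_frame.jpg"))
viral = imagehash.phash(Image.open("viral_frame.jpg"))

# Subtracting two imagehash values gives the Hamming distance between
# the 64-bit fingerprints; a small distance means near-duplicate frames.
distance = archival - viral
if distance <= 8:  # illustrative threshold, not a standard
    print(f"Likely recycled footage (Hamming distance {distance})")
else:
    print(f"No near-duplicate match (Hamming distance {distance})")
```

That is the whole trick, and it is why the barrier to debunking was low: a 2015 clip re-encoded for a current feed still hashes to nearly the same fingerprint.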
December 2023 marked the first real break. Iran's IRGC-linked group Cotton Sandstorm hijacked streaming services in the UAE, UK, and Canada and broadcast a deepfake newscast. An AI-generated anchor delivered Tehran's narrative on the Gaza war to viewers who believed they were watching legitimate news. Microsoft, analyzing the operation afterward, called it the "first Iranian influence operation where AI played a key component" and a "rapid and significant expansion" of Iranian capabilities.
June 2025 accelerated the model. The European Digital Media Observatory documented the 12-day Israel-Iran war as "The First AI War," the first time in a major conflict that more misinformation was created with generative AI than by traditional methods. The three most-viewed fake videos collectively amassed over 100 million views.
March 2026 builds on that precedent, at significantly larger scale and with meaningful tactical innovations added.
The first is coordinated architecture. Cyabra's forensic analysis found tens of thousands of inauthentic accounts distributing identical AI-generated assets simultaneously across every major platform, with synchronized posting windows and coordinated hashtag clusters pointing to centralized production. A notable proportion of the accounts amplifying the campaign were inauthentic. The content was not organic. It was engineered.
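What "synchronized posting windows" means analytically is simple to sketch: bucket posts by asset fingerprint and coarse time window, then flag buckets in which many distinct accounts appear. A minimal illustration in Python follows; the account names, hashes, window size, and threshold are all hypothetical, not Cyabra's actual method.

```python
# Minimal sketch: surfacing coordinated posting from a hypothetical
# feed export of (account, asset_sha256, unix_timestamp) records.
from collections import defaultdict

posts = [
    ("acct_0917", "9f2c41d0", 1772822400),
    ("acct_2203", "9f2c41d0", 1772822404),
    ("acct_5141", "9f2c41d0", 1772822409),
    ("acct_0088", "77ab03ce", 1772825013),
]

WINDOW = 60  # seconds per bucket; deliberately coarse for a sketch

clusters = defaultdict(set)
for account, asset_hash, ts in posts:
    # Identical assets posted by many distinct accounts inside one
    # short window is the coordination signature described above.
    clusters[(asset_hash, ts // WINDOW)].add(account)

for (asset_hash, _), accounts in clusters.items():
    if len(accounts) >= 3:  # illustrative threshold
        print(f"asset {asset_hash}: {len(accounts)} accounts in one window")
```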
The second is what journalist Craig Silverman has called "forensic cosplay": the fabrication of technical-looking verification tools designed to discredit authentic evidence. In one documented case, fabricated heatmap visualizations were deployed to label photos taken by credentialed photojournalists at a strike site in eastern Tehran as AI-generated. AI forensics specialists who reviewed the heatmaps found them semantically incoherent. The thread still reached hundreds of thousands of views before corrections could catch up. In a second case, a fake "Empirical Research and Forecasting Institute" published fabricated Error Level Analysis of a New York Times photograph, conducting the analysis on a screenshot of an Instagram post rather than the original image. That methodological error renders the output meaningless. The false conclusion still attracted over 600,000 views on X.
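The screenshot detail is worth unpacking, because it is what makes the "institute's" analysis nonsense. Error Level Analysis re-saves an image at a known JPEG quality and maps where the file diverges from its own re-save; regions whose compression history differs from the rest of the image stand out. A minimal sketch, assuming the Pillow package and a hypothetical input file:

```python
# Minimal sketch of Error Level Analysis (ELA). "photo.jpg" is a
# hypothetical input; assumes `pip install Pillow`.
from PIL import Image, ImageChops

original = Image.open("photo.jpg").convert("RGB")

# Re-save at a fixed JPEG quality and diff against the original.
# Edited regions carry a different compression history and show
# stronger error levels than the untouched background.
original.save("resaved.jpg", "JPEG", quality=90)
ela = ImageChops.difference(original, Image.open("resaved.jpg"))

# Amplify the (usually faint) differences for visual inspection.
max_diff = max(hi for _, hi in ela.getextrema()) or 1
scale = 255 // max_diff
ela.point(lambda px: min(255, px * scale)).save("ela_map.png")
```

The method depends entirely on the input carrying the original file's compression history. A screenshot of an Instagram post has been re-rendered, rescaled, and re-encoded end to end, so every pixel shares one fresh history and the resulting map is uniform noise; any "finding" drawn from it is an artifact of the pipeline, not of the photograph.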
This is a different class of operation from making false things look real. It is making real things look false. The verification infrastructure itself becomes the target.
The third element is the amplification model. Iran does not operate alone. The Foundation for Defense of Democracies documented what it calls an "authoritarian media playbook" in which Russian bot networks launder Iranian content while Chinese state-aligned media echoes anti-U.S. narratives. No centralized coordination is required. Each actor pursues its own anti-Western goals, and the compounding effect across the global information environment far exceeds what any single actor could achieve independently. In June 2025, Cyabra documented an Iranian bot network in the UK that had been spreading pro-Scottish independence and anti-Brexit content. It went completely silent for sixteen days following the military strikes on Iran, then returned with explicitly pro-Iran messaging. State-directed, clearly. Deniable, carefully.
What is most consequential here is not the volume of Iranian deepfakes. It is the underlying strategic logic of what they are designed to accomplish.
Traditional propaganda is built to persuade audiences toward specific false beliefs. Iranian AI operations in this conflict appear calibrated to achieve something more durable: the destruction of the shared evidentiary foundation that makes accountability possible at all. When any image can plausibly be AI-generated, when forensic tools can be fabricated, and when platforms cannot distinguish authentic from synthetic at scale, the machinery of verification collapses. You do not need to win arguments about what happened. You only need audiences to conclude that nothing can be known.
Legal scholars Danielle Citron and Robert Chesney named this the "Liar's Dividend" in 2018: as deepfake awareness grows, actors gain the ability to dismiss genuine evidence as fabricated. Empirical research published in the American Political Science Review in 2025 confirmed the hypothesis: false claims of misinformation do generate statistically significant increases in public support for political actors facing accountability. That work focused largely on text-based scandals, and given the dramatic improvements in synthetic images and video since, a similar effect can plausibly be expected to play out today on our screens. Iran has operationalized the principle. By circulating enough clearly synthetic content to seed generalized skepticism, it creates cover for dismissing authentic documentation of what actually happened.
That logic runs in two directions at the same time. Abroad, Iran deploys deepfakes to project military capability and deny responsibility for strikes it conducts. At home, the same operation insulates the regime from documentation of its own conduct toward its citizens. Internet connectivity in Iran fell to roughly one percent of normal levels by early March, per NetBlocks. That near blackout creates an information vacuum. Deepfakes and fabricated forensic analysis fill that vacuum while simultaneously rendering authentic protest documentation dismissible as synthetic. The regime does not need to suppress every image from the January crackdown. It only needs to ensure that any image is plausibly deniable.
Meanwhile, detection has not kept pace. Danny Citrinowicz, a senior researcher at Tel Aviv University's Institute for National Security Studies, said this January: "There is no ability today to systematically identify AI-driven influence campaigns." Meta's Oversight Board formally ruled its deepfake detection "not robust or comprehensive enough" for the velocity of misinformation during armed conflicts. The EU AI Act's labeling requirements for AI-generated content do not become enforceable until August 2026. This war began months before that.
The U.S. is in the middle of restructuring how it organizes the counter-influence mission. The debate over the appropriate scope of that work (including concerns about whether some earlier approaches crossed into domestic speech territory) is a fair one, and it crosses political lines. That debate matters, because these are sensitive issues that test the boundaries of free speech. But the timing matters as well. A new institutional architecture for this critical mission is still being designed, and Iran's campaign is not pausing while the debates continue.
Wherever U.S. policy lands on the question of combating disinformation and deepfakes, three things will be true about this conflict when it is eventually analyzed in full.
First, the primary strategic objective of Iran's information campaign is epistemic disruption, the deliberate degradation of an audience's capacity to form reliable beliefs, not persuasion toward specific false conclusions. That is a materially different problem from countering traditional propaganda, and it requires different institutional responses.
Second, the Russia-China-Iran amplification model is a template, not an anomaly. Future conflicts involving any permutation of those actors, or their proxies, will employ variants of this architecture. Convergent anti-Western interests are sufficient to drive convergent behavior. Coordination is optional.
Third, detection tools are now themselves a weapons class. The fabrication of forensic verification tools to discredit authentic evidence represents a qualitative escalation. Provenance infrastructure, not detection algorithms alone, will be required to address it.
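The distinction between detection and provenance deserves one concrete illustration. Detection inspects pixels after the fact and, as the fabricated heatmaps showed, can itself be spoofed; provenance binds a cryptographic signature to content at the moment of capture, so authenticity becomes checkable math. Here is a minimal sketch of that model in Python, assuming the open-source cryptography package; the key handling and "manifest" are simplified illustrations, not the actual C2PA wire format.

```python
# Minimal sketch of capture-time provenance: sign a digest of the
# image bytes at capture, verify it at publication. Assumes
# `pip install cryptography`; the payload is a placeholder.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At capture: a key embedded in the camera signs the image digest.
camera_key = Ed25519PrivateKey.generate()
image_bytes = b"...raw sensor output..."  # placeholder
digest = hashlib.sha256(image_bytes).digest()
signature = camera_key.sign(digest)

# At publication: anyone with the camera's public key can check that
# the bytes are unmodified since capture; altering a single byte
# makes verification fail.
try:
    camera_key.public_key().verify(signature, digest)
    print("provenance intact")
except InvalidSignature:
    print("content altered after capture")
```

A fabricated heatmap can imitate the look of forensics; it cannot forge a signature it does not hold the key for. That is the sense in which provenance infrastructure, unlike detection, does not degrade as generators improve.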
The gap between adversary capability and institutional response is real and measurable. Deepfake incidents through the first quarter of 2025 had already exceeded all of 2024's total. Bot traffic surpassed human web activity, at 51 percent of the total. The information environment is, in a measurable sense, majority-synthetic. Building the cognitive security architecture to operate in that environment is not a platform moderation problem. It is a national security imperative, and it deserves to be treated as one.
Views expressed here are the author's alone and do not represent the positions or policies of the U.S. Government or the Central Intelligence Agency.
The Cipher Brief is committed to publishing a range of perspectives on national security issues submitted by deeply experienced national security professionals. Opinions expressed are those of the author and do not represent the views or opinions of The Cipher Brief.
