Iranian information operations reveal structural changes to information warfare

Executive Summary

  • The conflict in Iran has seen a surge in pro-Iranian mis- and disinformation activity on social media, with AI-enabled tools allowing regime-aligned actors to scale content production and amplification at unprecedented speed and volume.

  • Iranian information campaigns are increasingly defined by speed as much as by credibility, as the regime and aligned actors flood the information domain in an attempt to shape initial perceptions and establish control of the narrative.

  • The rollback of fact-checking mechanisms by major US platforms has complicated efforts to curb the spread of mis- and disinformation, a challenge likely to persist in the near term.




Social media has been saturated with pro-Iranian misinformation and disinformation campaigns since 28 February. This aligns with the trend observed during the June 2025 Iran-Israel conflict, when a study found that 72% of online misinformation circulated during the war favoured Iranian strategic narratives [LINK, LINK, LINK].

Mis- and disinformation campaigns are a key tool for states fighting asymmetrically in modern wars. Information campaigns allow the Iranian regime to project strength, shape perceptions of the conflict, and manufacture the appearance of battlefield impact, even when its military capabilities are outmatched. 

The current phase of the conflict has seen a notable surge in pro-Iranian mis- and disinformation. This has been facilitated in part by AI tools that enable the rapid production and dissemination of highly convincing synthetic images and videos, allowing the Iranian regime to manipulate the narrative faster, generate greater emotional impact, and achieve wider dissemination across platforms.

Discernible tactics

  • Four categories of Iranian mis- and disinformation are identifiable on social media during the current conflict. Each appears to serve a distinct intent and has a varying level of sophistication. 

  • The overarching objective across these activities appears to be narrative control: presenting Iran as the victim in the current conflict, while simultaneously projecting an image of Iranian military strength. 

  • This approach serves a dual purpose. Internationally, it seeks to generate fear, uncertainty, and doubt while resonating with existing anti-Israeli sentiment in the West over the Gaza conflict; a sentiment that Iran’s long-term information campaigns have likely helped cultivate. Domestically, it aims to reinforce loyalty by showcasing Iranian strength and competence.

Category: Video game footage
Description: Footage from popular modern military simulation video games, most commonly Arma 3 and War Thunder [LINK, LINK], is circulated on social media and falsely portrayed as Iranian battlefield success.
Effectiveness: Low to medium. Compared with other forms of mis- and disinformation, this material is easier to identify as fabricated.
Assessed intent: To rapidly and widely disseminate the impression of Iranian battlefield success, shaping early perceptions and attempting to establish initial control of the narrative.
Detection methods:

  • Low-resolution or pixelated visuals, often deliberately compressed to hide artifacts.

  • Exaggerated or unnatural explosions, smoke, or missile effects.

  • Lack of audio, or audio inconsistent with real-world combat sounds.

  • Presence of historical or obviously inaccurate weapon systems.

Category: AI-generated content
Description: Video and imagery created or modified with AI tools are circulated on social media, falsely portraying or exaggerating Iranian battlefield successes and amplifying the perceived brutality of Israeli attacks.
Effectiveness: Medium to high. The sophistication of AI-generated content varies. While some material contains clear visual artifacts that make it identifiable as synthetic, more advanced examples can be difficult to detect without closer analysis or technical verification tools.
Assessed intent: To reinforce existing narratives of Iranian battlefield success, project military strength domestically, signal deterrence internationally, and frame Iran simultaneously as capable and victimised.
Detection methods:

  • Technical verification tools are often needed.

  • Visual inconsistencies such as exaggerated explosions or destruction.

  • Distorted or disproportionate objects.

  • Unnatural or monotone AI-generated voices.

Category: Repurposed old or unrelated footage
Description: Old or unrelated videos from other conflicts are circulated on social media and misrepresented as Iranian battlefield successes or as evidence of Israeli or US defeats.
Effectiveness: Medium to high. Genuine, repurposed footage appears credible, though its relevance to specific March 2026 events may be limited.
Assessed intent: To support narratives of Iranian battlefield success, reinforce perceptions of Israeli and US losses, and flood the information environment to complicate verification.
Detection methods:

  • Reverse image and key-frame analysis.

  • Inconsistencies between the claimed context and the audio, language, signage, landmarks, uniforms, equipment, terrain, or other environmental factors.

Category: Claims of assassinated key Israeli officials
Description: Unverified claims that key Israeli military and governmental officials have been assassinated are circulated online as evidence of Iranian covert capability.
Effectiveness: Low to medium. While the claims may briefly attract attention, they are generally unsubstantiated and are often debunked by reputable media outlets.
Assessed intent: To project a narrative of Iranian strength and create confusion and disruption within the information environment. These claims are likely aimed at Iranian and Israeli audiences, but can also resonate internationally due to prevailing anti-Israeli sentiment in the West over the Gaza conflict.
Detection methods:

  • Cross-referencing claims against official government statements and credible journalistic reporting can reveal inaccuracies.
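The reverse image and key-frame analysis listed above typically rests on perceptual hashing: a fingerprint of a frame that survives re-encoding, compression, and small edits, so repurposed footage can be matched against archives of earlier conflicts. A minimal, dependency-free sketch of one such algorithm, "dHash", is below. Note the assumptions: real pipelines first extract key frames and downscale them with tools such as ffmpeg or OpenCV; the tiny synthetic pixel grids here merely stand in for that step.

```python
def dhash(pixels):
    """Difference hash of a grayscale image.

    `pixels` is a 2D list of brightness values, assumed already resized
    to N rows by N+1 columns (frame extraction and downscaling omitted).
    Each bit records whether brightness rises between adjacent pixels,
    which is robust to uniform brightness shifts and re-compression.
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes (lower = more similar)."""
    return sum(x != y for x, y in zip(a, b))

# Synthetic 8x9 "frames": frame_b simulates a re-uploaded, slightly
# brightened copy of frame_a; frame_c is unrelated imagery.
frame_a = [[(x * y) % 251 for x in range(9)] for y in range(8)]
frame_b = [[v + 2 for v in row] for row in frame_a]
frame_c = [[((x + y) % 2) * 200 for x in range(9)] for y in range(8)]

print(hamming(dhash(frame_a), dhash(frame_b)))  # 0: same source footage
print(hamming(dhash(frame_a), dhash(frame_c)))  # 32: different imagery
```

In practice an analyst would compute such hashes for the key frames of a suspect clip and look up near matches (small Hamming distance) in an index of footage from prior conflicts, which is essentially what reverse image search services do at scale.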

Analysis and implications

Improving capabilities are challenging detection

  • Rapid advances in the quality and believability of AI-generated content are making the information environment structurally more vulnerable to manipulation.

  • As synthetic realism improves faster than the verification and detection methods available on current social media platforms, high-quality AI-generated content is becoming increasingly difficult to discern.

  • This enables such material to spread further and faster, shaping perceptions before fact-checking can occur.

Flooding the information environment is an effective tactic

  • Although the use of low-quality mis- and disinformation, such as video game footage and unsophisticated AI content, may appear to be counterproductive, it has strategic value. 

  • Rapidly flooding the information environment allows narratives to spread before verification can occur and creates short-term confusion.

  • In this context, speed increasingly matters as much as credibility in shaping initial perceptions and seizing the initiative in the information domain. 

Low sophistication disinformation acts as an entry point for audiences

  • Low-credibility material may also serve as an entry point for audience engagement. Even when users recognise such content as dubious or misleading, interactions through views, comments, or shares can amplify its reach and signal engagement to platform algorithms. 

  • Platform algorithms may then recommend similar material both to wider audiences and to the user who originally engaged with it.

Low sophistication disinformation can manipulate algorithms 

  • Low sophistication disinformation creates a foundation that more sophisticated disinformation can exploit. 

  • Because lower-quality content primes algorithmic recommendation systems, higher-quality AI-generated and repurposed content is more likely to be directed to audiences who have already engaged with similar narratives.

  • As this form of content is more realistic and persuasive, it is more likely to influence and reinforce perceptions over time. 

Fact-checking systems have been degraded, complicating enforcement

  • This structural vulnerability has in part been facilitated by recent initiatives by several US social media platforms to reduce fact-checking mechanisms, aligning with a broader push for freedom of speech associated with the administration of US President Donald Trump [LINK, LINK].

  • The consequences of this drive already appear to be material: Meta’s Oversight Board identified systemic failures in identifying AI-generated misinformation and disinformation at scale during the current phase of the Iran conflict [LINK, LINK].

  • Given the deregulation trajectory under the second Trump administration, it is difficult to see how platform moderation and verification processes can be improved to limit the susceptibility of social media ecosystems to this form of manipulation.

Future trends

The impact of monetisation on disinformation remains unclear

  • Cutting across Iranian mis- and disinformation is the role of monetisation incentives on platforms such as X (Twitter), where revenue-sharing mechanisms reward high engagement regardless of the accuracy or quality of the material.

  • Financial incentives can amplify the spread of mis- and disinformation, encouraging posts that maximise interaction rather than accuracy. For state-linked operations, this is a double-edged sword: narratives may spread further, but messaging can become diluted or misappropriated.

  • On 4 March, X updated its creator policies to withdraw monetisation from users who fail to label AI-generated war videos [LINK]. While this may curb opportunistic actors, it is unlikely to prevent state-linked operations at the source.

There is the potential for widespread de-sensitisation to conflict imagery

  • Repeated exposure to conflict imagery online, including graphic battlefield footage from Ukraine and other contemporary conflicts, may over time reduce the emotional impact of conflict mis- and disinformation.

  • While convincing AI-generated content can replicate military displays or battlefield scenes, its effectiveness may diminish over time as social media users grow habituated to such footage. 

  • As audiences become less responsive, states may need increasingly extreme or arresting material to achieve deterrence or domestic mobilisation goals.

Assassination disinformation is growing in importance

  • Given the growing importance of decapitation strikes against military and government officials in modern US and Israeli operations, an increase in mis- and disinformation around assassinations is likely.

  • Iranian information operations are likely to include more unsubstantiated assassination claims. By mirroring genuine US and Israeli tactics, such claims gain plausibility, making narrative mirroring a powerful disinformation template.

  • Disinformation focusing on assassinated Israeli military officials is likely to gain traction with Western audiences as these officials are associated with Israeli operations in Gaza.


Contact us

Secured is a UK-based organisation that provides strategic advisory services to organisations concerned about threats to the security of research, innovation, and investment.

Our security practitioners help entities secure their intellectual property, build operational and financial resilience, and cultivate a positive organisational security culture. 

We provide research on the national security implications of emerging technologies as part of our scientific and technical intelligence assessment capability.

Secured is part of Tyburn St Raphael Ltd, a boutique security consultancy.

info@tyburn-str.com

hello@secured-research.com
