The Latest Weapon in the Iran War Is AI-Generated Misinformation
A photograph of a massive explosion at an Iraqi airport; satellite images depicting damage to a U.S. naval base in Qatar; video of Iranian ballistic missiles striking the center of Tel Aviv. These are all images that have circulated in the past week since the Trump administration attacked Iran. And none of them are real.
These images — along with many more — were created or manipulated by AI, spreading misinformation about what is actually happening in and around Iran, and they are increasingly becoming a problem for those trying to distinguish truth and reality from lies and propaganda.
The spread of misinformation has always been part of warfare, as opposing sides battle for the public’s support while launching their bombs. But generative AI has made faking images and videos easier than ever before. Gone are the days when creating a false narrative required Photoshop skills. And with social media, these manipulated images can travel across countries in seconds. While bad actors may be intentionally attempting to sow discord, exponentially more people are unknowingly sharing their content. This, combined with a White House intent on spreading propaganda, makes for an information ecosystem that can feel overwhelming and confusing.
“We have reached a level of realism in video, audio, and image deepfakes that for most people, it is not discernible from fact,” says Rumman Chowdhury, a prominent AI researcher and former head of ethics at X (when it was still known as Twitter). “While AI companies have agreed to watermarking and other methods of verification, they are not built with the consideration of how users interact with social media.”
“This is particularly dangerous when considering situations like the war in Iran,” adds Chowdhury. “Most Americans are likely entering with low information and probably biased and prejudiced information. Fake media will only confuse and compound these biases.”
‘Shallowfakes’
On Feb. 28, Iran’s state-aligned newspaper Tehran Times shared satellite images that supposedly showed the destruction left behind after an Iranian drone struck American radar equipment at a U.S. naval base in Qatar.
BBC Verify, a team of journalists dedicated to fact-checking images, has been tracking and mapping attacks related to the U.S. and Israel’s war on Iran, and labeled this satellite imagery as an AI-generated fake. The team explained that the altered image was based on real satellite imagery of a U.S. base, but it was edited with Google AI to falsely depict damage.
These days, it’s not as simple as people getting fed deepfakes and being fooled. Political scientist Steven Feldstein says that as people have become savvier about AI, the disinformation content creators have also become more sophisticated in how they present things, resulting in a “shallowfake,” which is a more subtle manipulation.
“Rather than present something that would look completely false, [they] present shades of the truth, manipulate what’s there,” says Feldstein. In other words, content creators provide enough detail and nuance to get past people’s bullshit detectors while still misconstruing reality to push a specific viewpoint. This can mean only slightly manipulating an image, like a real airport photo depicting smoke over a U.S. military base in Iraq, which on March 1 was altered with AI to show a giant fireball explosion. Or it can mean sharing an image out of context, like passing off an old photo as if the event had just occurred.
“You’re seeing that happen in increasing levels,” says Feldstein, author of The Rise of Digital Repression: How Technology is Reshaping Power, Politics, and Resistance. “It’s become very sophisticated and also a critical part of geopolitics.”
Feldstein says that the 12-Day War, when Israel and the U.S. attacked Iran in June 2025, showed how quickly AI-generated content can spread. BBC Verify’s Shayan Sardarizadeh told the Global Investigative Journalism Network that the conflict was the “first example of a major global conflict where we were seeing more misinformation being produced using AI than in traditional ways.” It marked a “new era in the way AI-generated content is being used in the wake of a major breaking news story,” said Sardarizadeh, whose team saw several AI-generated videos and images misleading people with “millions and millions of views.”
Chowdhury, a former U.S. science envoy for AI, says we are already seeing “agents of disinformation all over social media pushing particular agendas.” She points to X’s release of a location feature this past November, which uses IP addresses to show where accounts are based. “It turned out a lot of American right-wing influencers are in Africa, Bangladesh, Russia, and Ukraine.”
Both Chowdhury and Feldstein say that this blurring of fiction and reality means that when people see a real video, they may claim it is fake. If you can’t trust your own eyes, it becomes harder to challenge your own strongly held beliefs.
“It’s now to a point where nothing that comes in beyond your own pre-existing narrative is accepted as something that is truthful,” Feldstein says, “and that’s just as harmful, as well.”
‘War isn’t a video game’
Misinformation does not always rely solely on AI. Sometimes, for example, a screenshot from a video game can be circulated and shared as if it were a real photograph of destruction. And then there’s just outright propaganda, which has grown exponentially under Trump. During the ICE raids in Minnesota, the Trump administration relied on cruel memes and AI-generated images to try to sway public opinion. They even produced their own version of a shallowfake, digitally altering a photograph of a woman being arrested to make it appear as if she was crying, when she was not.
Then came Iran. On March 4, the White House released a video on its official X account merging real clips of Iran missile strikes with footage from the Call of Duty video game. Halfway through, a choppy voiceover says, “We’re winning this fight.”
On March 5, the White House released another video, this time celebrating “justice the American way” with clips from movies and TV series like Braveheart, Breaking Bad, Iron Man, and Gladiator.
The war has already left more than 1,000 people dead, including more than 100 Iranian schoolgirls, according to Iran state media, and at least six American service members.
“War isn’t a video game,” tweeted military veteran and Barstool Sports podcaster Connor Crehan. “The consequences of war are final. I wish we didn’t treat it with such a cavalier approach.”
Feldstein says he is increasingly seeing information and images used to mobilize action, and that social media lets rhetoric spread extremely quickly, before anyone can stop and trace the original source of the content. If you don’t know who is making these claims, or whether they come from a credible news source, it’s hard to tell whether the narrative being pushed is one-sided, advancing a particular viewpoint that could be contested. He adds that the president of the United States and Israel’s prime minister have used images and motifs to try to call on Iranian citizens to rise up against their government. “The U.S. is not [currently putting] troops on the ground, but it is relying on information transmission as a means to mobilize change on the ground in terms of Iran’s government,” says Feldstein. “You can see how high the stakes are when it comes to how quickly that information is digested and it [spurring] action.”
And of course, he points out, there’s an enormous humanitarian risk to bad information, beyond political manipulation. People living in areas of armed conflict need to know where it’s safe to seek shelter, where drones are attacking, if they need to evacuate. It’s important to have information you can trust.