AI made the Iran strikes faster than any in history, but 165 people in a school just paid the price for that speed

Photo by Matheus Bertelli on Pexels.
The trade-off can be lethal.
AI-powered military strikes have entered a new era with the recent attacks on Iran, but the speed has come at a devastating cost: 165 people, many of them children, were killed when a missile hit a school in southern Iran.
The US military reportedly used Anthropic’s AI model, Claude, during the strikes. According to The Guardian, the technology “shortens the kill chain,” collapsing the entire process from identifying a target to launching a strike into minutes or seconds, something academics call “decision compression.”
Craig Jones, a kill chain expert from Newcastle University, said, “The AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought.” This allowed the US and Israel to launch nearly 900 strikes on Iranian targets in the first 12 hours alone. Israeli missiles also killed Iran’s supreme leader, Ayatollah Ali Khamenei, during this period.
The speed of AI warfare brings serious ethical risks that human oversight cannot keep up with
The AI systems used in these strikes are capable of analysing large amounts of data, from drone footage and phone intercepts to human intelligence. Palantir, a war-tech company, built a system with the Pentagon that uses machine learning to identify targets, recommend weapons, and even assess the legal basis for a strike.
Palantir integrated Claude into this system to “dramatically improve intelligence analysis and enable officials in their decision-making processes.” This speed, however, raises serious ethical concerns. David Leslie, a professor of ethics, technology and society at Queen Mary University of London, warns of “cognitive off-loading”: the idea that human decision-makers may feel removed from the consequences because the machine has done all the thinking.
This could lead to experts simply “rubber-stamping” automated strike plans with very little time to review them properly. The school strike in southern Iran, which state media said took place near a military barracks, has brought these concerns into focus. The United Nations called it “a grave violation of humanitarian law,” and the US military said it is looking into the reports.
Senator Chris Murphy has also warned about the dangerous consequences of Khamenei’s death, saying the fallout from the conflict is far from over. There has also been friction between the US government and the tech companies involved.
The US administration had previously said it would remove Anthropic from its systems because the company refused to allow Claude to be used for fully autonomous weapons or for surveillance of US citizens. Despite this, Claude stayed in use until it could be phased out. Anthropic’s rival, OpenAI, then quickly signed its own deal with the Pentagon.
Iran has claimed since 2025 to use AI in its missile systems, but its AI programme is reportedly far behind those of the US and China, largely because of international sanctions. Beyond the human cost, analysts have also raised concerns that the Iran war is driving a new oil crisis with potentially serious financial consequences worldwide.
Prerana Joshi, a research fellow at the Royal United Services Institute, notes that AI is being rolled out rapidly across defence systems worldwide, in logistics, training, and decision-making, allowing officials to “improve the productivity and efficiency of what they do” by “synthesizing data at a much faster pace.”
Published: Mar 3, 2026 03:15 pm