Posted on: 3 March 2026
Yesterday, 2 March 2026, anyone running digital operations in the Persian Gulf woke up to a problem that was not in any playbook. Two Amazon Web Services availability zones in the United Arab Emirates down. The Bahrain data centre experiencing connectivity issues. Abu Dhabi Commercial Bank reporting its platforms and mobile app inaccessible. Financial institutions across the region flagging service disruptions. Core cloud services, the ones running databases, storage and virtual machines, showing error rates that AWS itself described as "significant". Recovery would take, in the company's own words, "many hours". The official cause was surgical in its understatement: "objects struck the data centre, creating sparks and fire". AWS did not specify what those objects were. It did not need to.
Those "objects" were almost certainly debris from Iran's retaliatory strikes, launched after the United States and Israel carried out Operation Epic Fury on Saturday 28 February: the bombing campaign that killed Supreme Leader Ali Khamenei and dozens of senior regime officials. Iran responded with waves of missiles and drones targeting American and allied bases across the region: the UAE, Qatar, Kuwait, Saudi Arabia, Bahrain. Airports, ports and residential areas were hit. And for the first time in history, so was a commercial data centre hosting cloud infrastructure for half the region.
This is where the story becomes interesting for anyone who can read the connections between systems that are supposed to be separate but no longer are.
The operation against Iran was planned, according to the Wall Street Journal citing government sources, with the assistance of Claude, the artificial intelligence model built by Anthropic. US Central Command used Claude for intelligence assessments, target identification and battlefield simulations. None of this is particularly surprising: AI is now an integral part of American military planning, and Claude was the only model integrated into the Pentagon's classified networks, through partnerships with Palantir and Amazon Web Services.
The detail that transforms this from military reporting into a case study in systemic complexity is the timing. Claude was used to plan and support the strike on Iran just hours after President Trump had ordered every federal agency to immediately cease using Anthropic's technology, calling the company a national security risk. Defence Secretary Hegseth went further, designating Anthropic a "supply chain risk", a classification normally reserved for companies connected to foreign adversaries. No American company had ever received this designation before.
Why the paradox? Because you cannot remove an AI system embedded in an army's classified networks the way you uninstall an app from a phone. Defence officials themselves acknowledged that a complete technical withdrawal from Claude was "operationally infeasible" at short notice. Separating the Pentagon from Claude, as one analyst observed, amounts to open-heart surgery. This is why Trump granted a six-month transition period, even as the public rhetoric spoke of "immediate" cessation.
Pull the thread a little further back and the causal chain extends in revealing ways.
The confrontation between Anthropic and the Pentagon erupted in February, but its roots lie in the discovery that Claude had been used during the operation to capture Venezuelan President Nicolas Maduro in January 2026. Eighty-three people killed, including forty-seven Venezuelan soldiers. When the news surfaced, an Anthropic executive contacted Palantir to ask whether the technology had been deployed in the operation. The answer triggered a chain reaction.
Anthropic did not refuse to work with the Pentagon. It refused to remove two specific clauses from the contract: no mass surveillance of American citizens, and no fully autonomous weapons systems without a human in the decision chain. Two limitations the company described as consistent with democratic values and with the current limits of technological reliability. The Pentagon wanted access "for all lawful purposes" without exceptions. The Defence Department's final contract offer, according to Anthropic, contained legal language that would have allowed those safeguards to be overridden at will.
Dario Amodei, Anthropic's CEO, wrote that the Pentagon's threats "do not change our position: we cannot in good conscience accede to their request". Emil Michael, Undersecretary of Defence for Research and Engineering, responded publicly by calling Amodei "a liar with a God complex". In a detail that captures the disorder of those final hours, while Hegseth was posting the supply chain risk designation on X, Michael was reportedly still on the phone to Anthropic offering a back-channel deal.
And here the thread takes us back to the origin. In July 2025, the Pentagon had signed contracts worth up to two hundred million dollars each with several AI companies, including Anthropic, OpenAI, Google and xAI. But Claude had ended up in a unique position: the only model approved to operate in classified environments. The integration ran deep, through Palantir's systems and AWS's top-secret cloud. A deal that at the time looked like routine technology procurement had created, without anyone planning it, a structural dependency that within eight months would become impossible to unwind at the very moment it mattered most.
Viewed from above, the sequence has the classic structure of a black swan: an event that seems obvious in retrospect but that none of the actors could have predicted because each was operating inside their own decision silo.
The Pentagon signs a contract with an AI company in 2025 and integrates it into classified networks: a rational decision to modernise operations. Anthropic maintains ethical clauses in the contract: a decision consistent with its corporate mission. Trump uses the dispute as political leverage and bans Anthropic: a decision consistent with his anti-progressive-tech rhetoric. The Pentagon cannot remove Claude in time before the Iran operation: an inevitable consequence of integration depth. Iran retaliates by striking bases and infrastructure across the region: a predictable military response. A missile or drone hits an AWS data centre in the Emirates: a consequence of the physical proximity between American military infrastructure and civilian cloud infrastructure in the same region.
Every decision is perfectly rational in its own context. The aggregate outcome is one that nobody designed: the artificial intelligence that a government banned is used to plan an attack that triggers retaliation that hits the cloud infrastructure on which that artificial intelligence was running, knocking out digital services across half the region including the banks.
There is a deeper layer to this story that deserves attention.
Iran was already in a state of unprecedented digital isolation before the first American missile fell. Since January 2026, the regime had imposed the most severe internet blackout in its history to suppress popular protests that erupted in late December 2025. Ninety-two million citizens cut off from the network. Mobile networks, messaging services and landlines disabled; even Starlink blocked through a massive GPS jamming operation. The estimated economic cost according to NetBlocks exceeded thirty-seven million dollars per day. Online sales collapsed by eighty per cent. The Tehran Stock Exchange lost 450,000 points in four days. The government was building what analysts call a "two-tier internet": global access only for those with security clearance, a surveilled domestic intranet for everyone else. Foreign telecommunications companies operating in the country had begun quietly withdrawing. An experiment in total digital isolation that not even China has attempted in this form or at this speed.
Iran disconnected itself to control its own population. Then it was bombed with operations planned in part by AI. Then its retaliation disconnected pieces of other people's digital infrastructure. The circle closes with a symmetry that has something grotesque about it.
But the structural point is not about Iran. The point is that 28 February 2026 established a precedent that redefines the very concept of critical infrastructure in wartime. When a commercial data centre serving banks, businesses and public services for an entire region is struck by an act of war, the boundary between military and civilian infrastructure dissolves. Not in theory: in practice, in balance sheets, in banking apps that stop working, in cloud services returning errors.
Meanwhile, OpenAI signed its own Pentagon deal within hours of the Anthropic ban. Sam Altman stated that the contract contains the same two limitations Anthropic had been demanding, on mass surveillance and on autonomous weapons, but with different wording. The Pentagon can use the AI "for all lawful purposes", and the safeguards are built into the technical architecture rather than the contract text. It is a subtle distinction that could mean everything or nothing, but it allowed both parties to declare victory. The cynicism with which the entire system simply carried on, swapping supplier without resolving any of the structural questions, is perhaps the most revealing detail of all.
The question that none of the decision-makers in this chain thought to ask is simple to formulate and dizzying in its implications: how fragile is a system in which the same cloud hosts military artificial intelligence and civilian banking services? In which the same geographical region that serves as an operational base for military strikes hosts the data centres on which half the Middle East's digital economy depends? In which a technology contract signed in July becomes an irremovable dependency by February?
These are questions that would require a map of the interdependencies between military systems, cloud infrastructure and regional digital economies. That map does not exist. Nobody has ever drawn it because generals do not think about data centres, cloud engineers do not think about missile trajectories, and tech company CEOs do not think about geopolitical retaliation chains. Everyone optimises their own silo. The overall system accumulates fragilities that only become visible when they break.
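At a toy scale, the interdependency map described above is just a directed graph: each system is a node, each "depends on" relationship an edge, and a single physical failure propagates along the edges. The sketch below illustrates the idea only; every node name and edge is a hypothetical placeholder, not an inventory of real infrastructure.

```python
# A toy "fragility map": model shared dependencies as a directed graph
# and propagate a single physical failure through it.
# All node names and edges are hypothetical illustrations.
from collections import deque

# An edge A -> B means "B depends on A": if A fails, B is at risk.
dependencies = {
    "uae_datacentre": ["aws_me_region"],
    "aws_me_region": ["regional_banking_apps", "classified_ai_hosting"],
    "classified_ai_hosting": ["military_planning"],
    "regional_banking_apps": [],
    "military_planning": [],
}

def cascade(graph: dict, initial_failure: str) -> set:
    """Breadth-first propagation: every node reachable from the failed
    node along dependency edges is assumed degraded."""
    failed = {initial_failure}
    queue = deque([initial_failure])
    while queue:
        node = queue.popleft()
        for dependent in graph.get(node, []):
            if dependent not in failed:
                failed.add(dependent)
                queue.append(dependent)
    return failed

print(sorted(cascade(dependencies, "uae_datacentre")))
```

Even in this trivial form, the exercise makes the essay's point visible: a single physical node ("uae_datacentre") sits upstream of both civilian banking and military planning, so one strike degrades both. Drawing the real map would mean populating this graph across organisations that today do not even share a vocabulary.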
The lesson from this week is not about Anthropic or the Pentagon or Iran. It is about the kind of world we have built without realising it: a world in which systems are so tightly coupled that a contract dispute between a San Francisco company and a Washington department can propagate, through a chain of decisions each rational in its own domain, all the way to knocking Abu Dhabi's banking services offline.
Anyone managing strategic decisions in this kind of complexity does not need predictions. They need a fragility map. And the first step towards building one is to stop assuming that the systems we operate in are separate just because we drew them on different org charts.