The late-February 2026 US–Israeli strikes on Iran have pushed the military use of artificial intelligence back to the centre of global debate. Anthropic, in court filings, said the US military used its tools in striking Iran shortly before the company was cut off from federal use. Reporting on the 2026 Iran war indicates that AI-enabled tools were used as part of a broader operational architecture for intelligence fusion, target development, battle-damage assessment, and strike planning.
What makes this shift strategically significant is not merely the automation of military operations, but decision compression: the reduction of time between detection, analysis, recommendation, authorisation, and strike. The Financial Times reported that US forces struck more than 2,000 targets in four days in Iran, contrasting sharply with the much slower tempo of earlier campaigns.
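To make decision compression concrete, the short Python sketch below models a targeting cycle as a sequence of stages, each with a notional manual duration and a notional AI-assisted duration. All stage names and timings are illustrative assumptions, not figures drawn from any reported system; the point is only to show how the overall cycle contracts while the human authorisation step becomes the dominant share of the time that remains.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    human_minutes: float      # notional duration when performed manually
    assisted_minutes: float   # notional duration with AI-enabled support

# Hypothetical stages of a targeting cycle; all durations are illustrative assumptions.
CYCLE = [
    Stage("detection", 120, 5),
    Stage("analysis", 240, 15),
    Stage("recommendation", 90, 5),
    Stage("authorisation", 60, 30),   # human review is assumed to compress least
    Stage("strike execution", 30, 30),
]

def cycle_time(stages, assisted: bool) -> float:
    """Total minutes from detection to strike under one of the two modes."""
    return sum(s.assisted_minutes if assisted else s.human_minutes for s in stages)

manual = cycle_time(CYCLE, assisted=False)
assisted = cycle_time(CYCLE, assisted=True)
print(f"Manual cycle:   {manual:.0f} min")
print(f"Assisted cycle: {assisted:.0f} min")
print(f"Compression factor: {manual / assisted:.1f}x")
```

Under these toy numbers the cycle contracts by a factor of roughly six, and the fixed human authorisation step is left absorbing most of the remaining time, which is precisely the pressure point the governance debate centres on.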
Two weeks before the Iran war, the 62nd Munich Security Conference, held on 13–15 February 2026, had already placed AI at the centre of its technology discussions, including the 2026 Innovation Night on ‘Winning the AI Arms Race’.
As AI moves deeper into intelligence and targeting workflows, governance debates are no longer mainly about transparency, bias, or consumer safety. They have extended to battlefield accountability, civil-military oversight, industrial dependence on frontier AI firms, and the absence of effective international guardrails for AI-enabled warfare.
AI Militarisation as a Core Security Issue
The official agenda for this year’s Munich Security Conference shows that technology—especially AI—has become a first-order security issue, not a secondary innovation topic. The conference’s technology track explicitly described AI as a ‘decisive factor’ in systemic competition, while the 2026 Munich Security Report, Under Destruction, situated emerging technologies within a broader context of geopolitical fragmentation and power rivalry. Questions of model safety and standards remain relevant, but they become inseparable from force posture, defence procurement, intelligence integration, and alliance politics.
This shift in policy discourse was grounded in operational realities already emerging across multiple conflict theatres. Beyond the United States and Israel, the use of AI in military operations is already widespread. In Ukraine, AI-enabled drones and decision-support systems are used for target identification, battlefield analytics, and even semi-autonomous strikes. Russia has integrated AI into command-and-control systems to support predictive decision-making, while India has employed AI for intelligence fusion, artillery targeting, and operational planning in its conflict with Pakistan. Together, these cases indicate that AI is not an experimental capability but an increasingly normalised component of contemporary warfare.
Taken together, these operational developments are increasingly reflected in policy and analytical assessments beyond individual conflict cases. The Council on Foreign Relations argued in February 2026 that military AI adoption is now outpacing international cooperation on its governance, regulation, and management, with states already experimenting with scaling and deploying AI-enabled capabilities while multilateral governance efforts continue to move slowly. CSIS’s Futures Lab has similarly framed AI as reshaping conflict dynamics across military operations, strategic decision-making, and diplomacy. In this context, the central policy challenge becomes whether governance of AI in warfare can keep pace with the systems it seeks to constrain.
AI as War Infrastructure
AI is becoming embedded in warfighting infrastructure as a cross-cutting layer spanning the collection of battlefield data, the processing of that data into actionable intelligence, the prioritisation of targets and threats, and ultimately command and decision support. The point is not that AI fully replaces commanders; rather, it can profoundly reshape what commanders see, how fast they see it, and how much time they have to contest machine-generated recommendations.
The US’s Project Maven is a case in point. The Department of Defense established Project Maven in 2017 to accelerate the integration of big data, AI, and machine learning into military workflows. While initiated as a Pentagon program, Maven has from the outset depended on a wider ecosystem of private contractors, and has evolved into a close state–industry collaboration involving firms such as Palantir, which now plays a central role in developing and scaling its operational capabilities. The National Geospatial-Intelligence Agency (NGA) now describes it as the Pentagon’s flagship AI project for integrating computer vision into military analysis. NGA states that Maven is already used to ‘automatically detect, identify, characterize, extract, and attribute features and objects in imagery and video’, producing ‘trusted Geospatial Intelligence (GEOINT) at speed and scale’. NGA also says that Maven is already generating large volumes of detections for warfighter requirements across multiple operational locations and is lowering latency while feeding data into other military platforms. In other words, AI is no longer just helping analysts work faster; it is becoming a connective link between sensors, analysts, and commanders. That is precisely what gives rise to decision compression.
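As a purely schematic illustration of that connective role, the sketch below shows how model-generated detections might be triaged before reaching human analysts. It is not Project Maven’s implementation; every class name, label, and threshold here is a hypothetical assumption introduced for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Detection:
    object_class: str        # e.g. "vehicle", "radar_site" -- illustrative labels only
    confidence: float        # model confidence in [0, 1]
    location: Tuple[float, float]  # (lat, lon), invented coordinates

@dataclass
class AnalystQueue:
    """Detections held for human review before any nomination downstream."""
    pending: List[Detection] = field(default_factory=list)

    def triage(self, detections: List[Detection], threshold: float = 0.7) -> None:
        # Only detections above a confidence threshold are surfaced to analysts;
        # the threshold itself becomes a consequential, largely invisible policy choice.
        self.pending.extend(d for d in detections if d.confidence >= threshold)

def nominate_for_review(queue: AnalystQueue) -> List[Detection]:
    """Hand triaged detections to human analysts, highest confidence first."""
    return sorted(queue.pending, key=lambda d: d.confidence, reverse=True)

# Example flow: raw model output -> algorithmic triage -> analyst-facing list.
raw = [
    Detection("vehicle", 0.91, (35.70, 51.40)),
    Detection("vehicle", 0.55, (35.71, 51.42)),
    Detection("radar_site", 0.83, (35.68, 51.37)),
]
queue = AnalystQueue()
queue.triage(raw)
for d in nominate_for_review(queue):
    print(d)
```

Even in this toy form, the confidence threshold shows how a filtering choice made upstream of any human review shapes which objects analysts, and ultimately commanders, ever see.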
The Iran war represented a new phase in this trajectory. The US military reportedly used AI systems, including Palantir’s Maven software and Anthropic’s Claude, to process intelligence from satellites, drones, and other sensors, helping to plan operations and assess battle damage in near real time. Beyond the Iran war, Anthropic’s Claude was also reportedly used, via Palantir’s systems, in the US operation to capture former Venezuelan president Nicolás Maduro. Similarly, a 2024 RUSI article noted that the Israeli AI system often referred to as ‘Habsora’ or ‘The Gospel’ is reported to process multiple intelligence streams to generate targets at far greater speed than human analysts can achieve on their own.
The New Defence-Industrial Stack
The Munich conference also highlighted a second major transformation: AI militarisation is creating a new defence-industrial stack that includes governments, armed forces, cloud providers, data companies, and frontier model developers. This differs from the traditional defence ecosystem, which was dominated by large prime contractors building physical platforms. In the AI era, military advantage increasingly depends on software integration, data pipelines, model access, cloud accreditation, and the speed of iteration.
The emerging dispute between Anthropic and the Pentagon provides a particularly revealing case study. According to Reuters reporting, in February 2026 the Pentagon was pushing major AI companies to make their tools available on classified networks with fewer restrictions, and battlefield use cases under discussion spanned autonomous drone swarms, robots, and cyber operations. After Anthropic refused to permit use cases such as autonomous lethal warfare and mass surveillance, the US government cut off its tools and struck a deal with Anthropic’s rival OpenAI to deploy models on Defense Department networks. That agreement was later reported to include layered protections and explicit red lines against autonomous weapons use. Anthropic has sued the US government over being cut off.
The centre of gravity of national security is shifting toward a hybrid ecosystem in which AI model developers and data-platform firms are not peripheral suppliers but key actors in the production of military capability. This raises hard questions about accountability, contractual control, procurement leverage, and whether private terms-of-service can meaningfully constrain sovereign military use once a model is embedded in classified environments.
Governance Shifts in the Age of AI Militarisation
The rapid militarisation of artificial intelligence is creating structural challenges for global security governance. As AI technologies move from experimental deployment into operational military systems, existing governance frameworks—largely designed for civilian digital technologies—are struggling to keep pace.
At least four major shifts are reshaping the global AI governance landscape.
From Ethics to Destabilisation Dynamics
First, AI militarisation is transforming the policy agenda of global AI governance. For much of the past decade, international discussions on AI governance centred on ethical principles, algorithmic transparency, privacy protections, and consumer safety. As AI technologies enter military command systems and intelligence operations, however, the focus of governance debates is shifting toward strategic stability, escalation management, battlefield accountability, and command responsibility. The Council on Foreign Relations (CFR) has noted that multilateral governance frameworks continue to evolve at the pace of traditional diplomacy, while states are already integrating AI systems into operational military planning and intelligence analysis.
At the same time, the opacity surrounding military AI capabilities further undermines confidence-building efforts: governments rarely disclose meaningful information about the scope, testing, or deployment of such systems, making it difficult for states to assess each other’s capabilities or intentions. This lack of transparency increases the risk of misperception and exacerbates classic security dilemma dynamics, where uncertainty about an adversary’s capabilities and intentions leads states to adopt worst-case assumptions, accelerate their own military AI development, and engage in competitive buildup that ultimately undermines overall security.
This growing mismatch between diplomatic governance processes and real-world military deployment risks widening the gap between governance discourse and strategic practice, reinforcing these destabilisation dynamics.
Institutional Expansion of AI Governance
Second, the institutional architecture of AI governance is expanding beyond traditional technology regulators. AI governance was previously driven largely by digital ministries, regulatory agencies, and international standards organisations. As AI becomes embedded in national security capabilities, however, a wider set of actors—including defence ministries, intelligence agencies, export-control authorities, procurement institutions, and military alliances—are becoming crucial participants in shaping governance rules. In practice, this means that future norms governing military AI may emerge not only through international AI summits and technical standards bodies, but also through defence procurement frameworks, interoperability standards among allied militaries, classified technology partnerships, and military doctrine development.
Decision Compression
Third, AI militarisation is altering the risk model of modern warfare. The International Committee of the Red Cross (ICRC) has identified three particularly sensitive domains of AI military application: autonomous weapon systems, AI-assisted military decision-making, and AI-enabled information and communication operations. Of these, AI-assisted decision-making systems may represent the most underappreciated risk. Although such systems typically maintain a ‘human in the loop’, their ability to process vast volumes of data and generate recommendations at machine speed can significantly narrow the space for meaningful human judgment. This phenomenon, often described as decision compression, has the potential to accelerate crisis escalation and reduce opportunities for political or military de-escalation.
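A rough back-of-the-envelope sketch can illustrate why this matters even with a human formally in the loop: holding review capacity constant, the time available to scrutinise each machine-generated recommendation falls in direct proportion to the volume produced. The figures below are invented for illustration only.

```python
def review_seconds_per_item(recommendations_per_hour: float,
                            analysts: int,
                            seconds_per_analyst_hour: float = 3600.0) -> float:
    """Average review time available per recommendation, assuming all are reviewed."""
    items_per_analyst = recommendations_per_hour / analysts
    return seconds_per_analyst_hour / items_per_analyst

# Hypothetical figures: as machine-generated output grows, per-item human scrutiny shrinks.
for rate in (10, 100, 1000):
    t = review_seconds_per_item(recommendations_per_hour=rate, analysts=5)
    print(f"{rate:>5} recommendations/hour -> {t:,.0f} s of review per item")
```

At a thousand recommendations an hour shared among five analysts, each item receives under twenty seconds of attention in this toy model, which is the kind of narrowing of meaningful human judgment the ICRC’s concern points to.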
Importantly, existing international security mechanisms have not adapted to this acceleration. Many confidence-building measures developed during the Cold War—such as military hotlines, transparency arrangements, and arms control verification regimes—were designed to manage escalation dynamics associated with relatively slow-moving, observable, and platform-based weapons systems. These mechanisms relied on the material visibility of military capabilities and the availability of time for human deliberation and crisis communication. However, emerging research suggests that AI-enabled military systems may significantly compress decision-making timelines and accelerate the pace of conflict, thereby undermining the assumptions on which these legacy mechanisms were built.
The Emerging Legal Accountability Gap
Fourth, AI militarisation is exposing a growing legal accountability gap in international law. Existing international legal frameworks—including the Rome Statute of the International Criminal Court (ICC)—assign responsibility for war crimes to human actors such as commanders, political leaders, and operational personnel. However, international law currently contains no standalone legal category addressing AI-assisted war planning, battlefield decision-making, or algorithmic targeting systems. As AI systems become embedded in intelligence analysis, target prioritisation, and operational planning, the chain of responsibility becomes more diffuse. Both the ICRC and the United Nations Office for Disarmament Affairs (UNODA) have warned that autonomous and AI-enabled military technologies raise significant challenges for legal review, traceability, and the preservation of meaningful human control over the use of force.
Existing legal frameworks, particularly International Humanitarian Law (IHL), were developed around conventional weapon systems. While core principles such as distinction, proportionality, and military necessity formally apply to AI-enabled systems, there remains little operational clarity on how these principles should be implemented in contexts such as algorithmic target identification, automated decision-support systems, or semi-autonomous weapons.
Even though discussions on lethal autonomous weapon systems (LAWS) are ongoing, they do not resolve situations in which AI-generated recommendations merely influence lethal decisions. In such cases, determining legal responsibility may become significantly more complex, particularly when decision-making is shaped by opaque models, degraded data inputs, or privately developed systems embedded deep within the military ‘kill chain’. This emerging attribution problem represents one of the most significant governance gaps in the age of AI-enabled warfare.
Conclusion
Taken together, these shifts suggest that the militarisation of AI is not merely introducing new technologies into warfare but transforming how military action is justified and executed. As AI systems increasingly mediate the flow from perception to decision to strike, responsibility risks becoming diffused across data pipelines, models, and institutional processes. This creates a dangerous form of ‘normalised’ decision-making, in which speed, optimisation, and procedural compliance can obscure ethical judgment and political accountability.
Addressing this requires more than extending existing rules: it demands new forms of transparency, accountability, and human control that are capable of operating at the tempo and opacity of AI-enabled warfare. If existing governance frameworks fail to adapt, the risk is not only faster escalation, but a gradual erosion of meaningful human responsibility in the use of force.