[AI Defense] AI Enters the Battlefield, Reshaping the Axis of Military Competition
AI analysis and command systems breach Iran’s air defenses
Expansion of defense markets draws more civilian AI firms
Technology choices influence intelligence analysis and operational efficiency

As indications emerge that the United States used artificial intelligence (AI) in military operations against Iran, competition surrounding AI technology has also entered a new phase. The use of Anthropic’s AI model Claude, the company’s conflict with the government, and discussions about shifting to OpenAI technology have highlighted the choice of AI model used in warfare as a major variable. At the same time, the number of cases in which civilian AI companies participate in defense projects is increasing, intensifying both technological competition and policy disputes.
Used in Critical Stages Such as Target Selection
According to Bloomberg, the U.S. Central Command (CENTCOM) recently used Anthropic’s generative AI model Claude during airstrike operations against Iran for intelligence assessment, target identification, and combat scenario simulation. Anthropic had previously integrated Claude into the reasoning engine of a military decision-making platform in November last year through cooperation with the data analytics company Palantir. In January this year, the company also proposed a contract to build a system capable of autonomously controlling swarms of hundreds of drones by converting commanders’ tactical orders into digital signals. These moves were widely viewed as a clear example of how rapidly AI technology has penetrated the conduct of warfare.
In the latest airstrike operation, AI’s role was assessed as extending well beyond simple analytical support. In battlefield environments where the U.S. military must simultaneously process massive volumes of information—including satellite imagery, drone reconnaissance footage, and intercepted communications—AI-driven analysis systems were used to identify high-value targets and design the sequence of attacks accordingly. During this process, AI analyzed vulnerabilities in enemy air-defense networks to determine focal points for concentrated strikes, significantly reducing the time required to formulate operational plans. While human commanders retained final decision-making authority, a large portion of the data analysis and operational design was handled by algorithms, marking the practical emergence of an algorithm-centered analytical framework.
The scope of AI’s use on the battlefield is expanding rapidly. During the first 24 hours after the launch of the air campaign, U.S. forces reportedly struck more than 1,000 Iranian targets, with AI-based data analysis systems playing a central role in the operation. AI integrated and analyzed satellite imagery, drone reconnaissance data, and electronic intelligence gathered by the U.S. military to establish target priorities and assess attack feasibility. This approach reflects the characteristics of “data-centric warfare,” in which targets are selected and attack plans formulated based on large-scale data analysis. Analysts say this shift illustrates how the decisive factors in warfare are moving away from traditional firepower-focused strategies toward conflicts where information analysis and algorithmic processing capabilities determine victory or defeat.
The United States has previously used AI technology in other military operations. In January, an analysis system based on Claude was used during an operation to capture Venezuelan President Nicolás Maduro, tracking the target’s movement patterns and constructing operational scenarios. This trend, however, has also generated conflict between technology companies and the government. Anthropic repeatedly expressed opposition, arguing that the U.S. Department of Defense used its AI in military operations without consent. In response, the Trump administration on the 4th designated Anthropic a “supply-chain threat company” and banned the use of Claude by federal agencies and government institutions.
Competition for Military Project Contracts Intensifies
Despite such disputes, the integration of the defense industry and the AI sector is advancing rapidly as generative AI technology is deployed in real military operations. In particular, the expansion of AI-based intelligence analysis and operational support systems led by the U.S. Department of Defense and intelligence agencies has produced a growing number of cases in which private technology companies participate in military projects. One industry official said, “As AI emerges as a core industry directly tied to national security rather than a matter of mere technological competition, cooperation between global AI companies and governments around the world is expanding rapidly.”
A representative example is the partnership between OpenAI and the U.S. Department of Defense. According to The Wall Street Journal, OpenAI recently signed a classified contract with the Pentagon and launched a project to introduce its AI models into military networks. Under the agreement, OpenAI’s models will be used in systems designed for intelligence analysis and decision-support functions. In a meeting with employees, OpenAI Chief Executive Officer Sam Altman said, “Individuals may hold differing opinions about the airstrikes on Iran, but the company is not in a position to judge such matters,” emphasizing that “our focus is on providing technical advice and building safety frameworks.”
OpenAI is also pursuing plans to expand the scope of its defense cooperation further. Altman told employees that the company is reviewing a contract to deploy its AI models on NATO’s classified networks. This plan triggered internal debate within OpenAI. Some employees openly criticized the Department of Defense contract, and AI safety researchers reportedly expressed concerns about the potential military use of the technology. Nevertheless, efforts to build systems capable of comprehensively processing vast amounts of data—including satellite imagery, intercepted communications, and drone reconnaissance information—are proceeding without disruption. Such developments are expected to accelerate the spread of AI technology across the broader Western military alliance.

Rising Importance of Data-Processing Capability
Anthropic had also maintained relatively cooperative relations with the government and participated in military projects until shortly before the recent conflict. The dispute emerged during negotiations over the scope of Claude’s military use. In a statement, Anthropic CEO Dario Amodei said the Pentagon refused to accept two key exception clauses proposed during negotiations. The company had requested exemptions prohibiting the use of its AI in “large-scale domestic surveillance” and “fully autonomous lethal weapons.” Amodei explained that large-scale domestic surveillance is incompatible with democratic values and that AI models should not be deployed in autonomous weapons systems capable of identifying and attacking targets without human involvement because the technology has not yet achieved sufficient reliability.
The U.S. Department of Defense rejected those demands, arguing that they would impose unnecessary constraints on military operations. Pentagon spokesperson Sean Parnell dismissed the claims, stating that “the U.S. military has no interest in illegal domestic surveillance or the development of autonomous weapons without human involvement,” while strongly criticizing Anthropic for attempting to limit the military’s operational authority. Emil Michael, the undersecretary of defense, also condemned the company’s position, saying, “We cannot ask for the CEO’s permission when trying to shoot down swarms of enemy drones attempting to kill Americans.” Military officials maintain that they cannot accept a situation in which technology suppliers impose policy conditions on operational decision-making in combat environments.
The dispute ultimately escalated into administrative action, and Anthropic’s role was quickly replaced by competitor OpenAI. U.S. Secretary of Defense Pete Hegseth said that the responsibilities previously carried out by Anthropic would be transferred over a transition period of approximately six months to what he described as a “more patriotic provider.” Some members of Congress expressed concern that “a government forcing the deployment of AI weapons without safety safeguards is a truly frightening prospect,” but President Donald Trump dismissed those criticisms, stating that “Anthropic made the destructive mistake of trying to impose its terms of service instead of the Constitution.”
This sequence of developments suggests that the central arena of competition in modern warfare is shifting from traditional weapons systems to algorithms and data-processing capability. The trend can already be observed in the war in Ukraine, where autonomous drones have demonstrated significant destructive potential. Ukrainian drones are equipped with “Seeker Scout” technology that allows AI to detect and attack targets autonomously when communication signals are lost during electronic-warfare conditions. Russia’s Lancet-3 drone is also known to possess an AI-based target-tracking function during its terminal phase.
The difficulty lies in predicting how destructive AI-driven warfare could become. A recent study by a research team led by Professor Kenneth Payne at King’s College London conducted war simulations using AI models including ChatGPT-5.2, Claude Sonnet 4, and Gemini 3 Flash. The results indicated that nuclear weapons were chosen in 95 percent of simulated scenarios. Payne said, “The use of nuclear weapons occurred in almost every scenario,” adding that while AI can assist with decisions such as evaluating risks under different circumstances, “the stage where nuclear launch codes are entrusted to AI has not yet arrived.”