U.S. Defense Department Clashes With Anthropic Over ‘AI First’ Doctrine, Ethical Dispute Intensifies Over Autonomous Weaponization
Stalemate in Negotiations Over $200 Million AI Contract With Anthropic
Prolonged Dispute Over Application of Claude Safeguards
China’s Advanced Unmanned Capabilities Emerge as a Key Security Variable

The U.S. Department of Defense, which has pursued unmanned and automated military capabilities under its declared “AI First” doctrine, is now facing open conflict with artificial intelligence developers calling for ethical oversight. As negotiations have stalled over disagreements regarding the scope and manner of AI model deployment, the possibility of contract termination has surfaced. Experts view the clash as a critical test case for defining standards of use and accountability in AI-enabled military power. Coupled with advances in autonomous unmanned forces by rivals such as Russia and China, the dispute is expected to carry significant implications for the global security environment.
Anthropic Opposes Lethal Weapon Use of Claude
According to Reuters and The Wall Street Journal on the 2nd, negotiations between the U.S. Department of Defense and AI developer Anthropic over a $200 million contract have effectively reached an impasse. The WSJ reported that “while Anthropic insists on strict adherence to safeguards embedded in its Claude AI model, defense authorities argue that operational flexibility is essential in wartime and security environments,” adding that “as disagreements over the scope and conditions of Claude’s use have dragged on, a major contract signed amid the government’s push to expand AI adoption has been put at risk.” Sources familiar with the matter said the possibility of cancellation cannot be ruled out.
The contract was signed last July as part of the U.S. government’s broader AI First strategy, aimed at integrating the Claude model into defense operations. Shortly thereafter, Anthropic’s terms of service emerged as a focal point of contention. In particular, provisions prohibiting use for “domestic surveillance” raised concerns that they could restrict deployment by law enforcement agencies such as Immigration and Customs Enforcement and the Federal Bureau of Investigation. Anthropic’s longstanding opposition to the use of its technology in autonomous lethal operations further exacerbated tensions. Some administration officials have reportedly complained that Anthropic is imposing excessive constraints even on legally sanctioned applications.
The dispute has spilled into the public arena. Dario Amodei, Anthropic’s chief executive officer, who has emphasized “responsible AI” since the company’s inception, recently published an essay warning against large-scale surveillance and fully autonomous weaponization enabled by AI. He cautioned that deploying AI beyond human control in military and security contexts could undermine democratic oversight and accountability, drawing a clear line against a technology-first approach devoid of ethical considerations. By contrast, U.S. Secretary of Defense Pete Hegseth struck a hardline tone when announcing cooperation with Elon Musk’s AI startup xAI, stating, “We will not use AI models that refuse to support warfighting,” a remark widely interpreted as directed at Anthropic.
U.S. Defense Department Reshapes Wartime Structure Around Advanced AI
The conflict reflects the Trump administration’s broader push for radical defense reform. Last month, the Department of Defense unveiled a sweeping innovation plan designating AI as the primary engine of military power, pursuant to Executive Order 14179 issued by President Donald Trump. The initiative seeks to overhaul a bureaucracy-bound defense system into one optimized for an AI-era wartime posture. Speed and efficiency form its core philosophy. Under the plan, when private companies such as OpenAI, Google, or Anthropic release new AI models, the Pentagon will review and deploy them on its internal networks within 30 days, upending a weapons acquisition process that previously took months or years. The strategy explicitly aims to eliminate reliance on outdated models.
To this end, the Pentagon has launched seven flagship projects. In combat operations, AI will be used to design and simulate new tactics. In intelligence, plans are underway to compress the timeline from data collection to weaponization from months to mere hours. Additionally, access to AI models will be granted to roughly three million service members to elevate AI utilization across the organization. In the process, the role of ethical standards has shifted. The Department of Defense has reframed the concept of “responsible AI” through the lens of “cold realism,” broadening permissible use cases as long as they serve lawful objectives. Critics argue this effectively leaves the door open to the deployment of AI-enabled autonomous lethal weapons with minimal human intervention.
This trajectory did not emerge overnight. Since the start of its second term, the Trump administration has repeatedly signaled through strategic documents and public statements that national security and strategic advantage take precedence over ethical constraints in AI deployment. The Pentagon, in line with this policy direction, regards AI as a core instrument for maintaining security dominance and widening the military gap with competitors such as Russia and China. This stance is also evident in Washington’s distance from international ethical AI initiatives. Notably, the United States declined to join the “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet” led by France and India at the Paris AI Action Summit, which dozens of other countries signed.

AI Takes Command of the Battlefield in the Russia–Ukraine War
Analysts say the standoff between Anthropic and the U.S. Department of Defense represents more than a contractual dispute, marking a pivotal moment in defining the trajectory and limits of military AI. AI has already moved beyond a supporting role on the battlefield to one that directly shapes the speed and direction of combat. As its effectiveness and risks escalate in tandem, concerns are growing that a malfunction in a system operating without human oversight could cause immediate, large-scale casualties. As a result, the central question for the industry is no longer efficiency alone, but who controls AI systems and to what extent.
These debates have played out in real time on the Russia–Ukraine battlefield. Ukrainian forces have used U.S.-supplied AI surveillance and analysis systems to track Russian troop movements and artillery positions in real time, rapidly determining target priorities and strike methods. Another notable example is the remotely operated Shablya machine-gun turret, a Ukrainian-developed system deployed against Russian positions. Capable of automatically tracking, aiming, and firing at targets, it operates day and night using thermal sensors and wide-angle cameras. In effect, AI-integrated weapons increasingly dominate the battlefield, relegating human soldiers to a supporting role.
Against this backdrop, China’s rapid advancement in AI-driven military capabilities has emerged as a primary concern for Washington. According to the WSJ, China is developing military power centered on drone swarms, robotic forces, and autonomous unmanned systems designed to control the battlefield with minimal human intervention. As machines take on decision-making, targeting, and maneuvering, traditional human-centric command structures face inevitable transformation. Within China, however, there is also growing recognition of the inherent risks. Critics warn that autonomous weapons deployed without sufficient human control could misjudge situations, while opaque AI decision-making processes could become black boxes that obscure responsibility.