The U.S. military continued using a powerful AI named Claude in a major military operation, just hours after the President officially banned it from all government use.
This surprising event stems from a fundamental conflict between government policy and battlefield reality. The Trump administration designated the AI's creator, Anthropic, a 'supply-chain risk' and ordered a halt to its use. The core issue was control: the government demanded that AI tools be available for 'all lawful uses' without the safety restrictions, or 'guardrails,' that Anthropic had built in to prevent misuse, such as deployment in autonomous weapons systems. When Anthropic held its ground, the government effectively blacklisted the company.
However, policy changes in Washington don't instantly translate into changes in a combat zone. For months, Claude had been the only advanced AI model approved for, and deeply integrated into, the Pentagon's most sensitive classified networks. It handled critical tasks such as analyzing intelligence, prioritizing targets, and simulating scenarios, often through platforms like Palantir. During 'Operation Epic Fury,' a large-scale, time-sensitive strike against Iran, commanders couldn't simply unplug a tool so vital to their workflow. The ban itself included a six-month phase-out period for the Pentagon, tacitly acknowledging this very dependency.
This situation didn't develop overnight. Its roots trace back to late 2024, when a partnership first brought Claude into classified environments. Over the following year, a series of high-value government contracts and wider federal adoption made military systems increasingly reliant on it. This created a strong path dependency, where the cost and complexity of switching to a new system became immense, especially in a crisis.
Ultimately, the use of Claude after the ban highlights a crucial dilemma in modern warfare: governments seek to enforce rules and maintain control over powerful technologies, while commanders on the ground are driven by the necessity of using the best-integrated and most effective tools at their disposal. This event serves as a stark example of the friction between policy ideals and operational imperatives.
- Glossary -
- Supply-chain risk: A formal government designation deeming a company a security threat, which effectively blacklists it from federal contracts.
- Guardrails: Safety features and ethical restrictions built into an AI by its developers to prevent it from being used for harmful or unintended purposes.
- Path dependency: A situation where past decisions and technological integrations make it difficult or costly to switch to alternatives, even if better options become available.