The U.S. Department of the Treasury has officially stopped using all AI products from Anthropic, including its flagship model, Claude.
The decision wasn't made in a vacuum; it was the final step in a rapid chain of events driven by a fundamental conflict between a tech company's ethical principles and the U.S. government's national security demands. It also signals a major shift in how the government oversees the AI tools it uses.
The story begins with a direct clash. First, on February 26, 2026, Anthropic publicly refused a Pentagon demand to remove its safety guardrails, the ethical restrictions that prevent its AI from being used for applications such as fully autonomous weapons or mass domestic surveillance. The company stood by its acceptable-use policy.
Second, the government's reaction was swift and decisive. The very next day, the Department of Defense formally designated Anthropic a "supply-chain risk," effectively treating the company as a national security liability, a powerful label that set the stage for a broader government response.
Third, following the Pentagon's lead, the White House issued an executive order compelling all federal agencies to cease using Anthropic's technology. To enforce the order, the General Services Administration (GSA) removed Claude from federal procurement platforms, cutting off the official channels through which agencies like the Treasury acquire and manage software.
It's important to understand that the government was prepared for this. Existing policies, such as the OMB's M-25-21 memo and Treasury's own new AI Risk Management Framework, provided the legal and procedural playbook for terminating a non-compliant AI service. The issue was never technical security; Claude had already earned a high-level FedRAMP security authorization. The conflict was purely about who dictates the terms of use.
The episode shows that when a private company's terms of service collide with perceived national security needs, the government is prepared to assert its authority. It sets a significant precedent for the entire AI industry, especially for companies that want to work with public-sector clients.
- Guardrails: Safety restrictions built into AI models to prevent them from being used for harmful purposes, such as generating violent content or assisting in creating weapons.
- Supply-chain risk: A designation applied to a supplier or vendor considered a potential threat to national security or government operations.
- FedRAMP: A U.S. government-wide program that provides a standardized approach to security assessment and authorization for cloud products and services.