AWS CEO Matt Garman recently confirmed that all of Anthropic's latest AI models are trained on AWS's custom Trainium chips.
The statement marks a major milestone in AWS's long-term strategy to challenge Nvidia's dominance in the AI chip market. By showcasing that a leading AI company like Anthropic relies on its custom silicon for cutting-edge model training, AWS is sending a powerful message: there is a credible, high-performance alternative to Nvidia's GPUs.
The context behind this announcement is crucial; the relationship did not develop overnight. First, the groundwork was laid by Amazon's multi-billion dollar investment in Anthropic, which established AWS as its primary cloud and training partner. This strategic alignment made Trainium the natural choice for developing new Claude models.
Second, competitive pressure played a significant role. When Anthropic announced a major deal to use Google's TPUs in late 2025, it created uncertainty. This motivated AWS to clarify its position and assert that the most advanced, "frontier" model training for Claude was still happening on its hardware. Today's statement directly addresses that narrative.
Finally, this claim is backed by immense investment and scale. AWS has been public about building massive AI infrastructure like 'Project Rainier,' a super-cluster with over a million Trainium chips dedicated to Anthropic. Coupled with reports that its custom silicon business has surpassed a $10 billion annual run-rate, the CEO's words serve as a public validation of a well-established operational reality.
From a financial perspective, this solidifies a potentially massive revenue stream. Analysts estimate that securing Anthropic's training workloads could add up to $6 billion in incremental revenue for AWS in 2026 alone. Strategically, it also shows Anthropic is skillfully diversifying its hardware suppliers (with Google) while maintaining a primary relationship with AWS, a common de-risking tactic in the tech world.
- Trainium: AWS's custom-designed chip specifically for training large language models, created as an alternative to Nvidia's GPUs.
- TPU (Tensor Processing Unit): Google's custom-designed chip optimized for AI and machine learning workloads, competing with both Nvidia's GPUs and AWS's Trainium.
- Run-rate: A projection of future financial performance based on current data. An annual run-rate extrapolates current revenue (e.g., from one quarter) to estimate what it would be for a full year.
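The run-rate arithmetic above is simple extrapolation; as a quick illustration (a sketch, where `annual_run_rate` is a hypothetical helper and the quarterly figure is chosen only to match the $10 billion threshold mentioned in the article):

```python
def annual_run_rate(period_revenue: float, periods_per_year: int = 4) -> float:
    """Extrapolate one period's revenue to a full-year estimate."""
    return period_revenue * periods_per_year

# A quarter with $2.5B in revenue implies a $10B annual run-rate,
# the threshold AWS's custom silicon business has reportedly crossed.
print(annual_run_rate(2.5e9))  # 10000000000.0
```

Note that a run-rate assumes current performance holds steady for the rest of the year, so it can overstate (or understate) actual annual revenue for a fast-changing business.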
