Anthropic, the company behind the AI assistant Claude, has announced a major partnership with Google and Broadcom to secure a massive amount of next-generation computing power.
This deal is about securing future capacity. Training and running powerful AI models like Claude requires immense computational resources, often called 'compute'. Anthropic is locking in a multi-gigawatt supply of Google's specialized AI chips, known as Tensor Processing Units (TPUs), set to come online starting in 2027. The move matters for two main reasons.
First, it's about cost and efficiency. While many AI models run on general-purpose GPUs from companies like NVIDIA, TPUs are custom-built by Google specifically for AI tasks. This specialization can make them significantly more cost-effective. Some analyses suggest that at a large scale, TPUs could reduce computing costs by as much as 30-40% compared to equivalent GPUs. For a company operating at Anthropic's scale, this translates into enormous savings.
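To make the scale of such savings concrete, here is a minimal sketch of the arithmetic behind the 30-40% claim. Every figure below (the $2.00 per chip-hour rate, the 1 billion chip-hours of annual usage) is a hypothetical assumption for illustration, not a number from the announcement:

```python
# Hypothetical illustration of the "30-40% lower cost" claim.
# All dollar figures and usage volumes are assumptions for this example.

def annual_savings(cost_per_chip_hour: float, chip_hours: float, savings_rate: float) -> float:
    """Dollars saved per year if specialized chips cut the per-hour cost by savings_rate."""
    baseline = cost_per_chip_hour * chip_hours
    return baseline * savings_rate

# Assume $2.00 per GPU chip-hour and 1 billion chip-hours consumed per year.
for rate in (0.30, 0.40):
    saved = annual_savings(2.00, 1_000_000_000, rate)
    print(f"{rate:.0%} savings -> ${saved / 1e9:.1f}B per year")
```

Even under these modest assumed numbers, the savings land in the hundreds of millions of dollars per year, which is why chip economics dominate decisions at this scale.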
Second, this investment is a direct response to soaring demand. Anthropic revealed its annual run-rate revenue has skyrocketed past $30 billion, a dramatic jump from around $9 billion at the end of 2025. With over 1,000 enterprise customers each spending more than $1 million annually, the company needs to ensure it has the capacity to serve them. This isn't a speculative bet on future growth; it's a necessary step to keep up with existing demand.
Furthermore, Anthropic is hedging strategically rather than putting all its eggs in one basket. It follows a 'multi-cloud' and multi-hardware strategy, using not only Google's TPUs but also AWS's Trainium chips and NVIDIA's GPUs, and its services are available on all three major cloud platforms: AWS, Google Cloud, and Microsoft Azure. This diversification minimizes dependence on any single provider, which helps manage supply risk and navigate potential regulatory scrutiny.
Finally, the plan to build most of this new capacity in the U.S. aligns with Anthropic's previous commitment to invest $50 billion in American AI infrastructure. This move connects with national industrial policy but also faces practical hurdles like securing power grid connections, which explains the company's recent steps to increase its political and policy engagement.
Key terms:

- TPU (Tensor Processing Unit): A custom-designed computer chip created by Google specifically for accelerating artificial intelligence and machine learning tasks.
- Run-Rate Revenue: A projection of a company's future annual revenue based on its current earnings. It's calculated by taking recent revenue (like a month or quarter) and extrapolating it over a full year.
- Multi-Cloud: A strategy where a company uses services from multiple cloud computing providers (like AWS, Google Cloud, and Microsoft Azure) to avoid depending on a single vendor and to optimize for cost and performance.
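The run-rate definition above can be shown with a quick calculation. The revenue figures here are made-up examples, not Anthropic's actual numbers:

```python
def run_rate(revenue: float, period_months: int) -> float:
    """Annualize revenue observed over a given number of months."""
    return revenue * (12 / period_months)

# e.g. $500M of revenue in one quarter (3 months) annualizes to $2B.
print(run_rate(500e6, 3))   # -> 2000000000.0
# e.g. $100M in a single month annualizes to $1.2B.
print(run_rate(100e6, 1))   # -> 1200000000.0
```

Because it simply extrapolates the most recent period, a run-rate overstates annual revenue for a fast-decelerating business and understates it for a fast-growing one.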
