Bank of America recently raised its 2026 shipment forecast for Google's Tensor Processing Units (TPUs), signaling a major acceleration in the AI hardware race.
The core driver behind this optimism is the staggering increase in capital spending planned by cloud giants for 2026. Google's parent company, Alphabet, has guided for a nearly 97% year-over-year jump in capital expenditures, to around $180 billion, while Amazon plans to invest about $200 billion. These massive budgets are aimed squarely at building out AI infrastructure, creating a powerful demand signal for custom silicon.
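As a back-of-envelope check (a sketch based only on the figures quoted above, not on the underlying report), the ~97% growth rate and the ~$180 billion 2026 guidance together imply a 2025 spending base of roughly $91 billion:

```python
# Back-of-envelope check of the capex figures quoted above.
# Assumes the ~97% YoY growth and ~$180B guidance are exact round numbers;
# the implied prior-year base is then 180 / (1 + 0.97).
capex_2026_bn = 180.0   # Alphabet's guided 2026 capex, in $ billions (from the article)
yoy_growth = 0.97       # ~97% year-over-year increase (from the article)

implied_2025_bn = capex_2026_bn / (1 + yoy_growth)
print(f"Implied 2025 capex base: ~${implied_2025_bn:.0f}B")  # ~$91B
```

The point of the calculation is scale: the guidance implies Alphabet alone adding close to $90 billion of incremental spend in a single year, most of it flowing toward AI infrastructure.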
This leads to a clear causal chain. First, the 'capex shock' translates directly into massive orders for custom AI chips such as Google's TPU and Amazon's competing Trainium. Second, market dynamics are shifting in favor of the chipmakers: Meta's recent multiyear deal to buy millions of Nvidia chips, after setbacks in its own chip program, underscores the industry's reliance on proven external accelerators and intensifies demand pressure across the entire high-end semiconductor ecosystem. Third, this concentrated demand creates a critical bottleneck: the AI chip boom hinges on Taiwan's advanced semiconductor supply chain, particularly TSMC's advanced packaging technology known as 'CoWoS' and the specialized testing capacity these complex chips require.
Because of this bottleneck, companies that control these scarce resources stand to benefit significantly. This is precisely why BofA's report highlights seven specific Taiwanese firms. The list includes not only foundry giant TSMC, but also specialists in outsourced semiconductor assembly and test (OSAT) like ASE Tech and KYEC, and test-equipment makers like Chroma ATE, all of which are essential to bringing these AI chips to life.
In essence, BofA's forecast is more than just a number. It reflects a powerful narrative in which the immense ambition of hyperscalers collides with the physical limits of the supply chain, placing Taiwan squarely at the center of this multibillion-dollar AI buildout.
- TPU (Tensor Processing Unit): Google's custom-designed AI accelerator chip, optimized for machine learning workloads.
- Hyperscaler: A massive cloud services provider like Google (GCP), Amazon (AWS), or Microsoft (Azure) that operates data centers at a global scale.
- CoWoS (Chip-on-Wafer-on-Substrate): An advanced 2.5D packaging technology by TSMC, essential for integrating multiple chips into a single powerful processor for AI and high-performance computing.