Broadcom has confidently projected its AI chip revenue will surpass $100 billion in 2027, signaling a massive shift in the semiconductor landscape.
This isn't just wishful thinking; it's a forecast built on a solid foundation of large-scale partnerships. The key drivers are major AI players moving towards custom silicon. First, AI research firm Anthropic is expected to drive about 3 gigawatts (GW) of AI compute demand by 2027. To put that in perspective, 3 GW is enough to power roughly 30 massive data centers. Second, OpenAI is set to deploy its first custom AI chips, co-developed with Broadcom, at scale in 2027. Finally, Meta's own custom chip program, MTIA, remains a key priority, complementing its large-scale GPU purchases from other vendors.
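As a rough sanity check on that 3 GW figure, here is a back-of-the-envelope sketch in Python; the ~100 MW-per-facility number is an illustrative assumption, not part of Broadcom's or Anthropic's projection:

```python
# Back-of-the-envelope check of the "3 GW is roughly 30 massive data centers" claim.
# The ~100 MW-per-facility figure is an illustrative assumption, not from the forecast.

ANTHROPIC_DEMAND_GW = 3.0       # projected AI compute demand by 2027
ASSUMED_FACILITY_MW = 100.0     # assumed power draw of one large data center

demand_mw = ANTHROPIC_DEMAND_GW * 1_000       # 1 GW = 1,000 MW
facilities = demand_mw / ASSUMED_FACILITY_MW  # 3,000 MW / 100 MW = 30

print(f"{ANTHROPIC_DEMAND_GW:.0f} GW ≈ {facilities:.0f} facilities at {ASSUMED_FACILITY_MW:.0f} MW each")
```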
This trend towards custom chips is made possible by crucial developments in the supply chain. For a long time, the biggest hurdles were the availability of high-bandwidth memory (HBM) and advanced packaging services like TSMC's CoWoS. However, major memory makers like Samsung and SK hynix are ramping up HBM4 production, and TSMC is nearly tripling its advanced packaging capacity. These expansions are critical, as they unlock the ability to produce the powerful, large-scale custom chips that companies like OpenAI and Google need.
Broadcom's strategy is strengthened by its diversified customer base. By working with Google on its TPUs, as well as with Anthropic, OpenAI, and Meta, the company isn't reliant on any single customer's success. This multi-customer pipeline, combined with its leadership in high-speed Ethernet networking silicon, creates a powerful narrative: the AI hardware story is no longer just about NVIDIA's GPUs; it's about a broader ecosystem in which custom silicon and high-performance networking play central roles. The focus now shifts from possibility to execution.
- Custom Accelerator: A specialized chip (also called an XPU) designed to perform a specific task, like AI model training, much more efficiently than a general-purpose processor (CPU) or even a GPU.
- HBM (High-Bandwidth Memory): A type of high-performance RAM that stacks memory chips vertically to provide much faster data transfer speeds, which is essential for powerful AI accelerators.
- PUE (Power Usage Effectiveness): A metric used to determine the energy efficiency of a data center. It's calculated by dividing the total power entering the data center by the power used by the IT equipment (see the worked example after this list).
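To make the PUE formula concrete, here is a minimal sketch; the 120 MW and 100 MW figures are purely illustrative and not drawn from the article:

```python
def pue(total_facility_power_mw: float, it_equipment_power_mw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_power_mw / it_equipment_power_mw

# Illustrative figures only: a facility drawing 120 MW in total to run 100 MW of IT load.
print(pue(120.0, 100.0))  # 1.2 -- values closer to 1.0 indicate a more efficient data center
```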