OpenAI's COO Brad Lightcap recently made a crucial statement: the biggest obstacle holding back AI right now isn't power, but memory. This changes how we should think about the AI industry's immediate future.
So, what's behind this shift? It's a classic case of demand massively outstripping supply.

- First, the demand for High-Bandwidth Memory (HBM)—a specialized, super-fast memory essential for AI processors—has exploded. Companies are racing to build more powerful AI models, and these models are incredibly hungry for memory.
- Second, the supply side can't keep up. Leading memory makers like Micron and SK hynix have already sold out their entire HBM production for 2026. SK Group's chairman even warned that memory shortages could last until 2030 due to fundamental constraints in manufacturing capacity.
- Third, there's the advanced packaging bottleneck. HBM chips need to be intricately connected to the AI processor using a technology like CoWoS. While capacity for this is expanding, it's still a chokepoint that limits how many finished AI accelerators can be produced.

The "HBM + packaging" combination is the real ceiling on AI hardware output today.
You might be wondering what happened to the power shortage narrative. It hasn't disappeared, but it has evolved. The massive electricity demand from data centers is real, and utilities are seeing the fastest growth in decades. However, the problem is shifting from an absolute scarcity of power to a challenge of logistics and management. Companies like Dominion Energy are creating new queueing systems to manage new data center connections. Furthermore, recent trials backed by NVIDIA have shown that AI data centers can adjust their power consumption in near-real-time to avoid overloading the grid. This makes the power issue a more manageable, long-term infrastructure project rather than an immediate, show-stopping crisis.
In essence, the AI industry's most urgent problem has moved from securing enough electricity to securing enough advanced memory chips. This pivot, highlighted by OpenAI's leadership, directs our attention to the semiconductor supply chain—specifically HBM production and packaging capacity—as the key factor determining the pace of AI development in the near term.
- HBM (High-Bandwidth Memory): A type of high-performance RAM used alongside GPUs and other AI accelerators. It stacks memory chips vertically to achieve much higher bandwidth than conventional memory, which is crucial for training large AI models.
- Advanced Packaging (CoWoS): CoWoS stands for Chip-on-Wafer-on-Substrate, TSMC's packaging technology that allows multiple chips, like processors and HBM, to be integrated into a single package, enabling faster communication between them.
- Bottleneck: A point of congestion in a system that limits its overall performance or capacity. In this context, it's the specific component (memory) that is in the shortest supply and thus restricts the growth of the entire AI industry.
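The bottleneck logic described above can be sketched as a simple min() calculation: finished accelerator output is capped by whichever input runs out first. This is a toy model, not real industry data; every figure and name below is hypothetical, chosen only to illustrate the "HBM + packaging" ceiling.

```python
# Toy model of the "HBM + packaging" ceiling.
# All quantities are hypothetical, for illustration only.

def accelerator_output(hbm_stacks: int, stacks_per_gpu: int,
                       cowos_wafers: int, gpus_per_wafer: int) -> int:
    """Finished accelerators are capped by the scarcest input."""
    limited_by_memory = hbm_stacks // stacks_per_gpu
    limited_by_packaging = cowos_wafers * gpus_per_wafer
    return min(limited_by_memory, limited_by_packaging)

# Hypothetical monthly figures: 600,000 HBM stacks at 8 stacks per GPU
# allows 75,000 units; 30,000 CoWoS wafers at 3 GPUs per wafer allows
# 90,000 units. Memory is the binding constraint.
print(accelerator_output(600_000, 8, 30_000, 3))  # prints 75000
```

Loosening the non-binding input changes nothing: doubling packaging capacity in this example still yields 75,000 units, which is exactly why the article's attention stays on HBM supply.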
