Dell Technologies CEO Michael Dell recently made a striking projection: total AI memory demand is set to grow roughly 625-fold.
This isn't just a random large number; it's based on a clear calculation. Dell explained that the memory capacity per AI accelerator is expected to increase from 80GB (like in NVIDIA's H100) to 1-2TB in the near future—a roughly 25-fold increase. He then multiplied this by another 25-fold increase in the scale of accelerators deployed in data centers. The result, 25 times 25, is 625. This reframes the entire AI infrastructure conversation away from the familiar 'GPU shortage' and toward a new reality: the 'memory-intensive' era.
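The arithmetic behind the 625x figure can be sketched in a few lines. The per-accelerator capacities (80GB today, 1-2TB projected) and the 25x fleet-scale multiplier come from Dell's statement as described above; the decimal conversion (2TB = 2,000GB) is an assumption made here to match his round numbers.

```python
# Reproduce the back-of-the-envelope math behind Michael Dell's 625x claim.
current_gb = 80        # memory per accelerator today (e.g. NVIDIA H100)
projected_gb = 2000    # upper end of the projected 1-2 TB per accelerator

capacity_multiplier = projected_gb / current_gb   # ~25x more memory per chip
fleet_multiplier = 25                             # ~25x more accelerators deployed

total_demand_multiplier = capacity_multiplier * fleet_multiplier
print(f"{capacity_multiplier:.0f}x per chip * {fleet_multiplier}x fleet "
      f"= {total_demand_multiplier:.0f}x total memory demand")
# → 25x per chip * 25x fleet = 625x total memory demand
```

Note that the lower end of the range (1TB per accelerator) would halve the per-chip multiplier, so the 625x figure reflects the optimistic end of the capacity projection.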
So, what makes this bold prediction credible? A look back at market signals over the past eight months reveals a clear and logical causal chain.
First, we've seen unprecedented price signals. In early 2026, TrendForce reported a historic 90-95% quarterly price hike for conventional DRAM, followed by projections of another 58-75% jump in the second quarter. This scarcity shifted customers' thinking from whether to buy to when they could buy, driving long-term contracts and advance purchases to lock in supply.
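Compounding the two quarterly hikes cited above shows just how sharp the cumulative move is; the percentages are from the TrendForce figures in this section, and the compounding itself is simple multiplication.

```python
# Compound the two consecutive quarterly DRAM price hikes cited in the text:
# +90-95% in the first quarter, a projected +58-75% in the second.
q1_low, q1_high = 1.90, 1.95   # Q1 multipliers (90% and 95% increases)
q2_low, q2_high = 1.58, 1.75   # Q2 multipliers (58% and 75% increases)

cumulative_low = q1_low * q2_low     # ≈ 3.00x
cumulative_high = q1_high * q2_high  # ≈ 3.41x

print(f"Cumulative price multiple over two quarters: "
      f"{cumulative_low:.2f}x to {cumulative_high:.2f}x")
```

In other words, conventional DRAM prices roughly triple in half a year under these figures, which helps explain why buyers rushed into long-term contracts.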
Second, this intense demand prompted massive supplier investments. SK Hynix announced a $7.9 billion purchase of advanced EUV lithography equipment to boost HBM and DRAM production. Meanwhile, Micron officially started mass production of HBM4 for NVIDIA's next-gen chips, stating that demand would exceed supply for a significant period. These moves signal that while capacity is expanding, easing the bottleneck will take time.
Third, the demand side of the equation has exploded. Hyperscalers like Alphabet and Amazon announced record-breaking capital expenditure plans for 2026, nearly doubling their previous year's spending. On top of this, a new demand driver has emerged: 'Sovereign AI,' where entire nations are building their own large-scale AI infrastructure, further solidifying the demand floor.
Ultimately, these events have created a powerful feedback loop. Price hikes confirmed the supply shortage, which drove customers to secure long-term deals. This, in turn, fueled massive investments from both suppliers and buyers, cementing memory's new role. It is no longer just another component; analysts now estimate memory will account for nearly 30% of a hyperscaler's AI data center costs. Michael Dell's 625x figure is simply the numerical conclusion to this unfolding industry-wide narrative.
- HBM (High Bandwidth Memory): A type of high-performance memory used alongside GPUs to quickly process large amounts of data, essential for AI computations.
- Hyperscaler: A large-scale cloud service provider that operates massive data centers, such as Amazon Web Services (AWS), Google Cloud, and Microsoft Azure.
- Sovereign AI: A national strategy to build and control a country's own AI infrastructure, including supercomputers and large language models, to ensure digital autonomy and security.
