Reports suggest NVIDIA has chosen Samsung and SK Hynix as key HBM4 suppliers for its next-generation 'Vera Rubin' platform, potentially sidelining Micron in the initial rollout.
This is about more than memory chips; HBM4 is a critical component powering the next wave of AI. The Vera Rubin platform aims to drastically cut AI training and inference costs by pairing its GPUs with massive amounts of ultra-fast HBM4 memory. To achieve this, NVIDIA needs partners who can produce these advanced chips reliably, at high volume and quality, making the supplier choice a pivotal decision for the entire industry.
This story gained momentum due to a chain of recent events. First, NVIDIA itself confirmed it began shipping Rubin samples to customers and set a firm reveal date at its GTC 2026 conference. This signaled that key technical specifications, including the memory, were being locked in. Second, just before this, reports emerged that SK Hynix had already secured about two-thirds of NVIDIA's initial HBM4 orders, strongly hinting at the supply structure.
This outcome was built over several months. Samsung regained NVIDIA's confidence by passing crucial quality tests for its 12-Hi HBM3E memory. Meanwhile, SK Hynix demonstrated its technical lead by announcing the completion of its HBM4 development, meeting NVIDIA's demanding performance targets. A favorable policy decision by the U.S. government, which granted licenses for the Korean firms' fabs in China, also helped by reducing supply chain uncertainty.
So, where does this leave Micron? While Micron successfully entered NVIDIA's supply chain with HBM3E, the market consensus is that it may be a step behind on the cutting-edge HBM4 needed for the top-tier Rubin GPUs. None of this is official, however. Micron could still secure a role in later product waves or in different Rubin SKUs, such as mid-range versions.
Ultimately, this situation highlights that the 2026-2027 AI race will be won by those who master HBM4 technology. The ability to deliver on performance, yield, and schedule has become the ticket to the top tier, reinforcing the market dominance of Korea's memory giants.
Glossary
- HBM4 (High Bandwidth Memory 4): The fourth generation of high-performance stacked memory, offering significantly increased bandwidth and capacity, crucial for AI accelerators.
- Vera Rubin Platform: NVIDIA's next-generation AI accelerator platform, expected to succeed the 'Blackwell' architecture, combining a new GPU ('Rubin') and CPU ('Vera').
- SKU (Stock Keeping Unit): In this context, it refers to a specific version or variant of a product, such as a high-end, mid-range, or entry-level GPU.
