SK Group Chairman Chey Tae-won's meeting with Nvidia CEO Jensen Huang is far more than a simple handshake; it is a crucial negotiation to secure the memory supply for the next generation of AI.
The primary driver is Nvidia's relentless product cycle. Its next-generation AI platform, codenamed 'Rubin,' is slated for a 2026 launch. These powerful chips are incredibly hungry for data and require HBM4, the fourth generation of ultra-fast High Bandwidth Memory. As the current leader in HBM technology, SK hynix is positioning itself to be the main supplier for this massive demand.
However, SK hynix isn't alone. Competitors, especially Samsung, are closing the gap. Samsung recently passed Nvidia’s tough quality tests for its latest memory, signaling that Nvidia has other strong options. This competitive pressure turns the meeting from a friendly chat into a high-stakes bid for SK hynix to lock in its market share for the crucial HBM4 generation.
A major complication is the manufacturing process. It's not enough to produce HBM chips; they must be integrated with Nvidia's GPUs using an advanced packaging technology called CoWoS. The world's leading provider of this service, TSMC, has extremely limited capacity, and Nvidia has already booked most of it. This means any memory supply deal is worthless without a guaranteed packaging slot, forcing suppliers like SK hynix to align their plans with Nvidia's far in advance.
Beyond the components, there's a larger strategic play. SK hynix is building a new, multi-billion-dollar advanced packaging facility in Indiana, USA, supported by the U.S. CHIPS Act. This move helps Nvidia de-risk its supply chain by having a key partner on American soil. It also opens the door for discussions to expand beyond memory into broader AI infrastructure, including data centers and energy solutions.
So, this meeting is a pivotal moment. It's about securing a multi-billion-dollar deal that will define the supply chain for the next era of AI, one that must navigate fierce competition, technological bottlenecks, and global strategy.
- HBM (High Bandwidth Memory): An advanced, high-speed computer memory used for high-performance computing and AI chips. It stacks memory chips vertically to increase speed and reduce power consumption.
- CoWoS (Chip on Wafer on Substrate): An advanced packaging technology that allows multiple chips, like a GPU and HBM, to be integrated closely together on a single base, boosting performance.
- Rubin: The codename for Nvidia's next-generation AI GPU platform, expected to be released in 2026 and requiring HBM4 memory.
