AMD CEO Lisa Su's visit to Korea is a pivotal strategic move to secure the supply chain for the company's next-generation AI accelerators.
The explosive growth of AI has turned High-Bandwidth Memory (HBM) into the most critical and scarce component for building powerful AI systems. Korea, home to Samsung and SK hynix, has become the undisputed global hub for HBM manufacturing, especially for the next-generation HBM4. This concentration of production means that any company serious about AI—including AMD—must forge strong ties in Korea to secure its supply.
Several recent developments have made this visit particularly urgent. First, AMD's primary competitor, NVIDIA, is reportedly securing HBM4 capacity from both Samsung and SK hynix for its upcoming 'Rubin' platform. Reports in early 2026 that Samsung had begun mass production and shipments to NVIDIA sent a clear signal: the race for next-generation memory is on, and AMD cannot afford to be left behind.
Second, AMD has its own massive commitments to fulfill. In late 2025, AMD announced a landmark partnership to supply 6 gigawatts of its Instinct GPUs to OpenAI. This enormous deal, which begins delivery in late 2026, is entirely dependent on a guaranteed, large-scale supply of HBM. Without it, AMD's biggest AI win to date would be at risk.
Finally, Korea's national strategy creates a unique opportunity. The government's push for 'Sovereign AI'—building domestic AI infrastructure to reduce reliance on single foreign vendors—incentivizes companies like NAVER to diversify their suppliers. This opens the door for AMD to position itself as a viable and powerful alternative to NVIDIA, offering both supply resilience and competitive leverage.
Therefore, Dr. Su's meetings with both Samsung, the component supplier, and NAVER, a potential major customer, represent a comprehensive strategy. By securing the essential memory and courting a key end-user, AMD aims to solidify its entire value chain. This trip is about ensuring AMD can turn its ambitious AI roadmap and major deals into real-world deployments, cementing its position as the clear number two in the AI hardware race.
- HBM (High-Bandwidth Memory): A type of high-performance memory built from vertically stacked DRAM dies and placed close to a GPU, allowing it to feed the processor the vast amounts of data required for AI applications.
- Sovereign AI: A national initiative to develop and control a country's own AI infrastructure and capabilities, ensuring technological independence and security.
- AI Accelerator: Specialized hardware, such as a GPU, designed to significantly speed up AI and machine learning computations.
