Samsung Electro-Mechanics has secured a pivotal role in NVIDIA's next-generation AI ecosystem.
The company has been named the primary supplier of FC-BGA substrates for the 'Groq 3 LPU', with mass production set to begin in the second quarter of 2026. The deal is significant beyond its size: the LPU is a core component of NVIDIA's new 'Vera Rubin' platform, making this a major step forward for the company.
There's a clear causal chain behind this achievement. First, NVIDIA announced at its GTC 2026 conference that it would integrate Groq's LPU, an accelerator specializing in AI inference, into its Rubin architecture. This officially created the demand for the component.
Second, reports followed that Samsung Foundry would manufacture the Groq 3 LPU chip on its advanced 4nm process. This solidified the chip's production roadmap and made the demand for its components tangible.
Finally, with the chip's production secured, the need for a high-performance substrate became critical, and Samsung Electro-Mechanics stepped in to win the primary supplier spot. This not only creates powerful synergy within the Samsung group but also cements its relationship with NVIDIA, a key player in the AI industry.
This represents a major strategic shift in the market. For years, the supply of high-end substrates for NVIDIA products was dominated by Japanese firms such as Ibiden. Samsung Electro-Mechanics' win, following its earlier entry supplying substrates for the NVSwitch, signals a crack in that long-standing dominance and the start of a serious expansion of its market share.
The market has reacted with enthusiasm. The company's stock price has surged over 90% since the start of the year, pushing its valuation to a record high. This suggests investors are already pricing in a future where Samsung Electro-Mechanics benefits from higher profitability and stronger negotiating power amid a structural shortage of high-end AI server substrates.
Key terms:

- FC-BGA (Flip Chip Ball Grid Array): A type of high-density substrate used to connect a semiconductor chip to a circuit board, essential for high-performance processors like GPUs and LPUs.
- LPU (Language Processing Unit): A specialized processor developed by Groq, designed to accelerate AI inference tasks, particularly for large language models, with very low latency.
- AI Inference: The process of using a trained AI model to make predictions or decisions based on new, real-world data. It's the 'live' operational phase after the initial 'training' phase.
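The training/inference split described above can be illustrated with a toy sketch: parameters are learned once from historical data, then the frozen model is applied to new inputs. This is a hypothetical, framework-free example in plain Python; it does not model how an LPU or any real inference stack works.

```python
# Toy "training" phase: learn the slope w of y = w * x from
# historical data via closed-form least squares.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# "Inference" phase: the parameter w is now frozen; the model
# only applies it to new, unseen inputs to produce predictions.
def infer(x_new: float) -> float:
    return w * x_new

print(infer(5.0))  # prediction for a new data point: 10.0
```

Accelerators like the LPU target exactly this second phase: the model is fixed, and the workload is a high volume of low-latency prediction requests.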
