NVIDIA's upcoming GTC 2026 conference is poised to be a major event where the company will preview a chip for 2028, codenamed 'Feynman'.
This announcement is about more than just a future product; it's a strategic move to set a long-term narrative. By giving a glimpse of its 2028 vision, NVIDIA reassures investors, customers, and suppliers that its leadership in AI hardware is not just for today but is architected for years to come. This helps lock in commitment from hyperscalers and sovereign AI initiatives that need to plan their infrastructure well in advance.
This long-term vision is built on a clear causal chain. First, the technological foundation is TSMC's next-generation 'A16' 1.6-nanometer process, which features innovative backside power delivery. TSMC plans to have this ready for production in 2026, making a 2028 chip like Feynman a credible possibility. This shifts the focus from the near-term Rubin platform to the next leap in chip density and power efficiency.
However, before Feynman comes Rubin in 2026, and its success is the critical link. Rubin's performance hinges heavily on HBM4, the next generation of high-bandwidth memory, and NVIDIA is pushing suppliers for higher speeds to stay ahead of competitors like AMD. The entire timeline therefore depends on SK hynix and Samsung delivering these advanced memory chips on schedule. Recent updates suggest they are on track, with paid samples already delivered, a positive sign for the Rubin launch and, by extension, the Feynman vision.
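To see why the memory generation matters so much, the bandwidth jump from HBM3E to HBM4 can be sketched with simple arithmetic. The bus widths and per-pin data rates below are illustrative figures from publicly discussed JEDEC-class specifications, not NVIDIA-confirmed Rubin numbers:

```python
def stack_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of a single HBM stack in GB/s (divide by 8: bits -> bytes)."""
    return bus_width_bits * pin_rate_gbps / 8

# Assumed generational parameters (illustrative, not vendor-confirmed):
# HBM3E: 1024-bit interface at roughly 9.6 Gb/s per pin
# HBM4:  2048-bit interface at roughly 8.0 Gb/s per pin
hbm3e = stack_bandwidth_gbps(bus_width_bits=1024, pin_rate_gbps=9.6)  # 1228.8 GB/s
hbm4 = stack_bandwidth_gbps(bus_width_bits=2048, pin_rate_gbps=8.0)   # 2048.0 GB/s

print(f"HBM3E per stack: {hbm3e:.1f} GB/s")
print(f"HBM4  per stack: {hbm4:.1f} GB/s")
```

Even at a lower per-pin speed, the doubled interface width gives HBM4 a large per-stack advantage; an accelerator carrying eight such stacks would see an aggregate on the order of 16 TB/s, which is why supplier delivery schedules dominate the Rubin timeline.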
To further secure its future, NVIDIA is also diversifying its partnerships. The recent collaboration with Intel, including a significant equity stake, provides NVIDIA with more options for system integration and advanced packaging. While NVIDIA has reaffirmed its commitment to TSMC for its most advanced chips, this dual-track approach reduces supply chain risks, ensuring it can deliver on its ambitious roadmap for both Rubin and Feynman.
- HBM4 (High Bandwidth Memory 4): The fourth generation of a high-performance RAM interface for 3D-stacked memory, designed for use in high-performance graphics accelerators and network devices.
- TSMC A16 Node: An advanced semiconductor manufacturing process from TSMC, referring to a 1.6-nanometer class technology. Smaller nodes generally allow for more powerful and energy-efficient chips.
- Backside Power Delivery: A new chip manufacturing technique where power lines are moved to the back of the silicon wafer, freeing up space on the front for more and better signal connections, improving performance and efficiency.