SanDisk has officially started building a pilot production line for its new memory solution, High Bandwidth Flash (HBF).
This move addresses a growing challenge in artificial intelligence. Running a trained AI model on new data, a process known as inference, requires the model's parameters (its 'weights') to be stored close to the processor. The current go-to solution, HBM (High Bandwidth Memory), is extremely fast but also expensive and limited in capacity, typically offering hundreds of gigabytes per accelerator. SSDs, on the other hand, offer huge capacity but are far too slow, leaving a significant gap in the memory hierarchy.
HBF is designed to fill that gap, creating a new 'near-memory' tier. It stacks NAND flash dies vertically, much as HBM stacks DRAM dies, to achieve bandwidth approaching 1.6 terabytes per second. Because it is built on flash, it also offers massive capacity, potentially reaching 4 terabytes in a single package, more than 20 times the capacity of a typical HBM stack. And because flash is non-volatile, HBF retains data even when the power is off.
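The capacity claim above can be sanity-checked with a quick back-of-the-envelope sketch. Only the HBF figures (roughly 1.6 TB/s, up to 4 TB per package) come from the announcement; the HBM and SSD numbers are illustrative assumptions chosen to represent typical current hardware, not official specifications.

```python
# Rough comparison of the three memory tiers discussed in the article.
# HBF figures follow the article's claims; HBM and SSD figures are
# illustrative assumptions, not vendor specifications.

TIERS = {
    # name: (bandwidth in GB/s, capacity in GB)
    "HBM": (1200, 192),   # assumed: ~24 GB x 8-high stack, HBM3E-class speed
    "HBF": (1600, 4096),  # article: ~1.6 TB/s, up to 4 TB per package
    "SSD": (14, 8192),    # assumed: PCIe 5.0 NVMe-class drive
}

def capacity_ratio(a: str, b: str) -> float:
    """Capacity of tier `a` relative to tier `b`."""
    return TIERS[a][1] / TIERS[b][1]

if __name__ == "__main__":
    # 4096 GB / 192 GB ~= 21, matching the "over 20x" claim.
    print(f"HBF vs. one HBM stack: ~{capacity_ratio('HBF', 'HBM'):.0f}x capacity")
```

Under these assumptions, HBF delivers HBM-class bandwidth (within the same order of magnitude) while sitting far closer to SSD-class capacity, which is exactly the gap the article describes.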
The development of HBF didn't happen in a vacuum; it is the result of several converging trends. First, persistent supply shortages and the high cost of HBM created strong economic demand for a more capacity-rich alternative. Second, the manufacturing technologies such a complex product requires, such as advanced 3D stacking and hybrid bonding, have matured thanks to investments made for the next generation of HBM.
Building on this foundation, SanDisk and SK hynix began formally collaborating. They signed an initial agreement in August 2025, which led to the launch of a global standardization effort under the Open Compute Project (OCP) in February 2026. The decision to build a pilot line now is the logical execution of this clear and public roadmap. This positions HBF not as an experiment, but as a strategic product aimed at reshaping the economics of AI hardware.
- HBM (High Bandwidth Memory): A type of high-performance RAM that stacks memory chips vertically to achieve very high data transfer speeds, commonly used in high-end GPUs for AI.
- AI Inference: The process of using a trained AI model to make predictions or decisions on new, unseen data.
- Non-volatile memory: A type of computer memory that retains stored data even after power is removed, unlike DRAM, which loses its contents when powered off.
