SK hynix and SanDisk have officially launched a consortium to standardize a new memory technology called High Bandwidth Flash, or HBF.
This move addresses a critical challenge in the world of artificial intelligence. As AI shifts from the 'training' phase to the 'inference' phase—where models are actively used—the demand for memory is changing. Inference requires not just speed, but also massive capacity and extreme power efficiency. Current top-tier memory, HBM, is incredibly fast but limited in capacity and expensive. This created a need for a new memory tier that sits between HBM and slower, high-capacity SSDs.
Enter HBF. It promises to deliver HBM-like bandwidth with 8 to 16 times the capacity, all while keeping costs and power consumption down. This is precisely what companies running large-scale AI services are looking for.
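To put that capacity claim in perspective, here is a minimal back-of-envelope sketch. The per-stack HBM capacity below is an illustrative assumption, not a published HBF specification; only the 8–16× multiplier comes from the claim above.

```python
# Back-of-envelope comparison of memory tiers for AI inference.
# HBM_STACK_GB is an assumed figure for illustration; the 8x-16x
# multiplier is the capacity range quoted for HBF relative to HBM.

HBM_STACK_GB = 24          # assumed capacity of one HBM stack (illustrative)
HBF_MULTIPLIER = (8, 16)   # HBF's claimed capacity range vs. HBM

def hbf_capacity_range(hbm_gb: int) -> tuple[int, int]:
    """Capacity range (GB) an HBF stack of equivalent footprint would offer."""
    low, high = HBF_MULTIPLIER
    return (hbm_gb * low, hbm_gb * high)

low, high = hbf_capacity_range(HBM_STACK_GB)
print(f"One {HBM_STACK_GB} GB HBM stack -> {low}-{high} GB as HBF")
```

Under these assumptions, a single stack's worth of footprint jumps from tens of gigabytes to hundreds, which is why HBF is pitched at inference workloads that must hold very large models in memory.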
The timing of this initiative is no coincidence; several factors have paved the way for HBF's arrival. First, the market has sent strong signals over the past year: HBM supplies have been tight, and NAND flash prices rose sharply in late 2025, making a cost-effective, high-capacity alternative very attractive. Soaring stock prices for both SanDisk and SK hynix reinforced the signal, reflecting investor confidence in new AI memory solutions.
Second, the groundwork was laid through strategic collaboration. The journey began with a formal agreement between the two companies in August 2025. This partnership gained credibility and momentum, leading to the decision to pursue an open standard instead of a proprietary one.
Finally, choosing the Open Compute Project (OCP) as the home for this standard is a crucial step. The OCP is a community where major tech companies, including hyperscalers, collaborate on data center hardware. Standardizing HBF within the OCP prevents a single company from controlling the technology, reduces risks for adopters, and encourages the development of a broad ecosystem of compatible hardware. It turns a promising idea into a credible industry-wide program.
Key terms:

- HBM (High Bandwidth Memory): A type of high-performance RAM used in high-end GPUs and AI accelerators, known for its very high speed but smaller capacity and high cost.
- AI Inference: The process of using a trained AI model to make predictions or decisions on new, real-world data.
- OCP (Open Compute Project): A community organization in which member companies share and standardize data center hardware designs to accelerate innovation.