Kioxia SSDs to boost the performance of NVIDIA’s AI GPUs?

Clearly, NVIDIA is not entirely satisfied with the pace of HBM memory development. It has to be said that AI consumes enormous amounts of memory capacity, and the volume of data to be handled during training is just as colossal. To that end, the Greens have teamed up with Kioxia to develop SSDs fast enough to stand in for HBM memory, which the Chameleon apparently considers too restrictive in terms of capacity! AI GPUs soon to be equipped with SSDs as VRAM?

Kioxia: SSDs for NVIDIA’s AI GPUs!

Kioxia’s aim is to offer SSDs that connect directly to the GPU, bypassing the CPU, whose involvement would be too detrimental to performance. But to avoid giving anything away on that front, the SSD side also has to hold its own, with random performance up to 100 times better than that of consumer SSDs. Concretely, NVIDIA has set a target of 200 million IOPS for the SSDs connected to the GPU, a figure the Japanese firm intends to reach with two drives.
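A quick sanity check on those figures. The consumer-SSD baseline below (around 2 million random-read IOPS for a fast drive today) is our own assumption, not a number from the report:

```python
# Hypothetical baseline: ~2M random-read IOPS for a fast consumer NVMe SSD (assumed).
CONSUMER_SSD_IOPS = 2_000_000
TARGET_IOPS = 200_000_000  # NVIDIA's reported target for the GPU-attached storage
NUM_SSDS = 2               # Kioxia reportedly aims to hit the target with two drives

print(TARGET_IOPS // CONSUMER_SSD_IOPS)  # 100 -> the "100 times better" figure
print(TARGET_IOPS // NUM_SSDS)           # 100000000 -> IOPS each drive must deliver
```

In other words, each of the two drives would need to sustain roughly 100 million IOPS on its own, some 50 times what a fast consumer SSD manages today under our assumption.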

In addition, a high-performance interface is needed, one that is not too restrictive in terms of bandwidth. For this, the plan is to take advantage of PCIe 7.0, whose bandwidth should be four times that of the current PCIe 5.0 standard. Unfortunately, that won’t materialize before 2027, at least according to the launch window currently being mooted.
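The "four times" figure follows from PCIe bandwidth doubling with each generation. A minimal sketch, assuming roughly 64 GB/s of unidirectional bandwidth for a PCIe 5.0 x16 link as the baseline:

```python
# Assumption: PCIe 5.0 x16 ~= 64 GB/s per direction; bandwidth doubles each generation.
PCIE5_X16_GBPS = 64

def pcie_x16_gbps(gen: int) -> int:
    """Approximate unidirectional x16 bandwidth (GB/s) for PCIe generation `gen`."""
    return PCIE5_X16_GBPS * 2 ** (gen - 5)

print(pcie_x16_gbps(7))                      # 256 GB/s for a PCIe 7.0 x16 link
print(pcie_x16_gbps(7) // pcie_x16_gbps(5))  # 4 -> "four times" PCIe 5.0
```

Two doublings (5.0 to 6.0, then 6.0 to 7.0) yield the 4x factor cited above.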

Now, Kioxia’s targets are ambitious, and it remains to be seen whether the manufacturer can meet the challenge. Still, just imagine graphics cards backed by SSDs offering memory capacities of 2 TB, 4 TB or even more. Unlike HBM, NAND makes it far easier to offer large capacities. What’s more, we can also imagine hybrid solutions combining HBM and NAND to get the best of both worlds: the bandwidth of HBM and the capacity of NAND.