192 GB of HBM3e for the NVIDIA B100!


NVIDIA’s next-generation architecture, Blackwell, is still on the agenda. This time, however, we’re talking servers and HPC (High Performance Computing) with the B100. While the official announcement of the architecture is expected tomorrow, a rumor about the board’s memory configuration has just surfaced!

B100: 192 GB HBM3e memory?

NVIDIA B100 performance index

With this new product, we’re expecting something truly monstrous in the field of AI. In fact, a leaked slide shows the performance index pointing “to the moon”. Clearly, the H100 and H200, the best-performing cards in the industry at the moment, are going to be a thing of the past.

Another scary point is the price. An H100 currently costs at least $20,000; with the B100, we can reasonably expect an even higher price tag.

In addition, the board will pair two dies using TSMC’s CoWoS-L (Chip-on-Wafer-on-Substrate) packaging. Fabrication has likewise been entrusted to the Taiwanese foundry, which has also seen its electricity bills rise.

Speaking of electricity, data centers are likely to see theirs increase too. Indeed, this model can be expected to consume an inordinate 1000W per GPU. At least, that’s what a Dell executive told us.

Now, while the GPU’s detailed specifications are not yet known, the memory configuration is more talked about. This card would carry a very large amount of VRAM, since we’re talking about 192 GB of HBM3e, spread across eight 8-Hi memory stacks. As a reminder, AMD already offers this capacity on its MI300X. And all this is just the beginning, since a 288 GB capacity has already been mentioned for the B200… In any case, these are only suppositions; a matter to follow.
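For what it’s worth, the rumored figures do add up if each 8-Hi stack is built from 24 Gb (3 GB) DRAM dies, the density current HBM3e vendors ship. A quick sanity check under that assumption:

```python
# Sanity check on the rumored B100 memory configuration.
# Assumption (not confirmed by NVIDIA): HBM3e stacks built
# from 24 Gb (3 GB) DRAM dies, as current vendors offer.

GB_PER_DIE = 3       # one 24 Gb die = 3 GB
DIES_PER_STACK = 8   # "8-Hi" = 8 dies stacked vertically
STACKS = 8           # eight stacks rumored on the B100

b100_capacity = GB_PER_DIE * DIES_PER_STACK * STACKS
print(f"B100: {b100_capacity} GB")  # B100: 192 GB

# The 288 GB mentioned for the B200 would follow from
# taller 12-Hi stacks of the same dies.
b200_capacity = GB_PER_DIE * 12 * STACKS
print(f"B200: {b200_capacity} GB")  # B200: 288 GB
```

Note that the 12-Hi figure for the B200 is our own back-of-the-envelope extrapolation, not part of the leak.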