AI hype: less cache and more NPUs in upcoming processors?


Everyone’s talking about AI, and it’s on AI that the PC industry is pinning its hopes of boosting sales. As a result, AMD and Intel have redesigned their CPUs to give more space to the NPU…to the detriment of cache. We’ve mentioned this several times: driven by the integration of AI into Windows, the PC industry believes that the presence of AI accelerators will rapidly become indispensable. Intel set the ball rolling with Meteor Lake, and AMD intends to join the party with the arrival of its Strix Point APUs. We learn that AMD has delayed their arrival in order to modify the design and implement no fewer than 3 XDNA 2 NPUs. According to rumors, however, this NPU space was originally reserved for an SLC cache that would have considerably boosted the performance of the Zen 5 CPU cores and the RDNA 3 iGPU. With this new combination, AMD should have a more powerful solution than Intel’s Meteor Lake for accelerating AI.

AMD Strix Point NPU

Paying to (not) see the promise of NPUs

For now, we’re still waiting to see the supposed benefits of AI hardware acceleration in new processors. The obvious example is Meteor Lake: these processors are undoubtedly design successes, but they are unable to take advantage of their NPUs in everyday use. If Microsoft’s much-vaunted promises fail to materialize, it will be difficult for AMD and Intel to justify sacrificing standard CPU/iGPU performance in their new chips in favor of an NPU that end users see absolutely no point in using.

Lunar Lake NPU

In a few weeks’ time, Intel’s new Lunar Lake chips promise a huge gain in NPU processing power…For the moment, the behavior of the rest of the SoC is treated as secondary in Intel’s communication. Let’s just hope the PC AI bubble doesn’t burst before then.

What do you think?