Not wanting to be outdone by Intel showing off Stable Diffusion running on its new Meteor Lake CPU, AMD has joined the AI battle with its own Computex demo.
Our sister site, Tom's Hardware, got a taste of the new AI engine in AMD's Phoenix APU doing its thing. Phoenix is the APU that forms the basis of not only AMD's Ryzen 7040 series laptop processors, but also the Z1 chip in the Asus ROG Ally. So, the silicon isn't new.
But this is the first time we've seen Phoenix's new AI core, known as XDNA AI, actually do something. Slightly oddly, AMD seemingly has no firm plans to put the XDNA engine from Phoenix into its desktop CPUs. For now, it's a laptop and handheld exclusive.
Anywho, the demo platform was an Asus Strix Scar 17 with a Ryzen 9 7940HS chip. Unlike Intel's AI engine in Meteor Lake, the XDNA engine doesn't show up as a discrete component in Task Manager in Windows.
Whatever, the demo involved accelerating a facial recognition task but didn't generate any comparative numbers. So, there's no measure of how much better Phoenix was at the task compared with, say, running it on a CPU, a GPU, or some combination of the two.
Still, for what it's worth, AMD reckons its XDNA engine is faster than the equivalent Neural Engine in Apple's M2 chip, though it hasn't made any claims compared to Intel's AI tile in Meteor Lake, otherwise known as a VPU or "Versatile Processing Unit".
AMD has announced a new set of tools to help developers code for the XDNA engine. But we don't have many, if any, examples of software or apps that can actually use XDNA for now.
The general idea is for the XDNA AI engine to accelerate light AI inferencing workloads, including audio, video and image processing, and to do so faster and more efficiently than a CPU or GPU. The net result should be both lower latency for such tasks, for instance real-time audio processing or background blurring, and better battery life while doing it.
How useful any of these AI cores will be remains to be seen. But at the very least, there's a whole new word salad of XDNA engines, VPUs and inferencing workloads to get used to. Fun, fun, and thrice we unreservedly exclaim, fun.