'From the developer's standpoint, they love this strategy'—AMD's plan to merge its RDNA and CDNA GPU architectures to a unified system called UDNA

A stylised image of AMD's RDNA 3 GPU design for its Radeon RX 7000-series graphics cards
(Image credit: AMD)

Ever since it bought GPU firm ATI Technologies in 2006, AMD has experimented with all kinds of graphics architectures. Its current approach uses two separate designs, one for PC clients and the other for data centre and AI servers, but that will be swapped in favour of a completely unified architecture, bringing the best of both systems into one developer-friendly design.

That's according to AMD's senior vice president, Jack Huynh, who announced the company's GPU plans in an interview with Tom's Hardware. At the moment, there's not an awful lot of detail as to what the plan will entail or when we can expect to see the first products based on the architecture, but we do know what it will be called, and it's UDNA.

Most of today's PC and console gamers will only be familiar with AMD's RDNA design, which first appeared in 2019 with the Radeon RX 5700 XT. It was a comprehensive overhaul of AMD's GCN (Graphics Core Next) architecture, greatly improving the IPC, efficiency, and overall performance. From a game developer's point of view, it was a much-needed improvement as the design was very focused on gaming.

GCN dates all the way back to 2012 and, despite evolving through multiple iterations, it was still fundamentally the same compute-focused design. In many ways, when it first appeared, it was an architecture ahead of its time, but as games became increasingly reliant on compute shaders to handle the bulk of the rendering workload, the issues with GCN (such as its low IPC) began to hold things back.

However, AMD didn't abandon it altogether when it turned to using RDNA in its GPUs. Instead, it was reborn as CDNA, and the latest revision of it is used to power the likes of the enormous Instinct MI300 accelerator.

Nvidia took a somewhat different approach, though, and for a very long time used the same architecture in its gaming chips as it did for its workstation and server processors. That's still the case today, although there are some differences between client-focused and server-focused GPUs, such as the amount of L1 and L2 cache. Fundamentally, though, and certainly in terms of software, Nvidia's designs are pretty much the same.

So it's not too surprising that AMD feels the time is right for it to merge its GPU architectures into one system that's equally at home in any application.

Talking to Tom's Hardware, Huynh said: "[s]o, part of a big change at AMD is today we have a CDNA architecture for our Instinct data center GPUs and RDNA for the consumer stuff. It’s forked. Going forward, we will call it UDNA. There'll be one unified architecture, both Instinct and client. We'll unify it so that it will be so much easier for developers versus today, where they have to choose and value is not improving."

AMD's final graphics card with a GCN GPU, the Radeon VII.

One significant change this should bring is the implementation of dedicated matrix-calculation cores in all AMD GPUs. Both Intel and Nvidia have these already (the former calls them XMX AI Engines, whereas the latter calls them Tensor cores) and they're put to good use in upscaling and frame generation.

While current RDNA GPUs can also do such calculations, they're done on the standard shader units; only the hulking CDNA GPUs have matrix cores.

But if you're hoping to see UDNA soon then you're going to be disappointed. When Tom's Hardware asked him how long it would take, Huynh replied: "[w]e haven’t disclosed that yet. It’s a strategy. Strategy is very important to me. I think it’s the right strategy. We’ve got to make sure we’re doing the right thing."

I doubt that AMD has only just made this decision but even if it started the ball rolling at the beginning of this year, it could take two or three years before we see a UDNA-powered Radeon graphics card. At the very least, we'll have a generation of RDNA 4 GPUs and probably RDNA 5 before UDNA makes an appearance.

Perhaps the first platform to showcase UDNA won't be a graphics card but rather a console. A healthy chunk of AMD's revenue comes from selling its semi-custom APUs to Microsoft and Sony, and the Xbox Series X/S and PlayStation 5 were the first platforms to sport AMD's ray tracing technology.

A fully unified GPU architecture should make it easier for developers to create games that work just as well on a console as they do on a handheld gaming PC or a full desktop rig.

With luck, AMD will share more details about the plan in the coming months.

Nick Evanson
Hardware Writer

Nick, gaming, and computers all first met in 1981, with the love affair starting on a Sinclair ZX81 in kit form and a book on ZX Basic. He ended up becoming a physics and IT teacher, but by the late 1990s decided it was time to cut his teeth writing for a long defunct UK tech site. He went on to do the same at Madonion, helping to write the help files for 3DMark and PCMark. After a short stint working at Beyond3D.com, Nick joined Futuremark (MadOnion rebranded) full-time, as editor-in-chief for its gaming and hardware section, YouGamers. After the site shutdown, he became an engineering and computing lecturer for many years, but missed the writing bug. Cue four years at TechSpot.com and over 100 long articles on anything and everything. He freely admits to being far too obsessed with GPUs and open world grindy RPGs, but who isn't these days? 
