Will there ever come a point with AI where no frames in games are traditionally rendered? Perhaps surprisingly, Jen-Hsun says 'no'

Meta Orion glasses on show at Meta Connect with Nvidia CEO Jen-Hsun Huang.
(Image credit: Meta)

The new DLSS 4 Multi Frame Generation feature of the new RTX Blackwell cards has created a situation where one frame is generated using traditional GPU computation, while the three subsequent frames are entirely generated by AI. That's a hell of an imbalance, so does one of the people responsible for making this AI voodoo a reality think we'll get to a point where there are no traditionally rendered frames at all?
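That 1:3 split is easier to picture as a frame-pacing pattern. The sketch below is purely illustrative: `render_frame` and `generate_frame` are hypothetical stand-ins for the full GPU pipeline and the AI frame-generation model respectively, not Nvidia's actual API.

```python
# Hypothetical sketch of Multi Frame Generation pacing:
# one traditionally rendered frame, then three AI-generated frames, repeating.

def render_frame(i: int) -> str:
    # Stand-in for full GPU rasterization/ray tracing of frame i.
    return f"rendered:{i}"

def generate_frame(i: int) -> str:
    # Stand-in for the AI model producing an in-between frame.
    return f"generated:{i}"

def frame_stream(total_frames: int, gen_ratio: int = 3) -> list[str]:
    # Emit 1 rendered frame followed by `gen_ratio` AI frames, repeating.
    frames = []
    for i in range(total_frames):
        if i % (gen_ratio + 1) == 0:
            frames.append(render_frame(i))
        else:
            frames.append(generate_frame(i))
    return frames

stream = frame_stream(8)
```

With the 1:3 ratio, only a quarter of displayed frames ever touch the traditional rendering pipeline, which is exactly the imbalance the question above is getting at.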

Jen-Hsun Huang, Nvidia's CEO and one of the biggest proponents of AI in damned near everything, says: no.

"The context could be a PDF, it could be a web search… and the context in video games has to not only be relevant story-wise, but it has to be world and spatially relevant. And the way you condition, the way you give it context is you give it early pieces of geometry, or early pieces of textures it could up-res from."



Dave James
Editor-in-Chief, Hardware

Dave has been gaming since the days of Zaxxon and Lady Bug on the Colecovision, and code books for the Commodore Vic 20 (Death Race 2000!). He built his first gaming PC at the tender age of 16, and finally finished bug-fixing the Cyrix-based system around a year later. When he dropped it out of the window. He first started writing for Official PlayStation Magazine and Xbox World many decades ago, then moved on to PC Format full-time, then PC Gamer, TechRadar, and T3 among others. Now he's back, writing about the nightmarish graphics card market, CPUs with more cores than sense, gaming laptops hotter than the sun, and SSDs more capacious than a Cybertruck.