The biggest PC gaming questions Intel needs to answer in 2021

intel cpu
(Image credit: Intel)

New year, same old problems for the biggest beast in computer chips? Yes, and no. For sure, in 2021 Intel faces the same questions over its production process problems and CPU roadmap that have dogged it for years.

But there are also intriguingly Rumsfeldian known unknowns concerning Intel's nascent graphics project, the GPU architecture known as Xe and more specifically the upcoming DG2 gaming GPU. 

Can Intel really take the fight to AMD and Nvidia for true gaming performance and in so doing help solve the graphics supply issues that are currently pushing GPU pricing out of control?

Returning to CPUs, the back end of 2021 might—just might—see the arrival of something revolutionary in the form of Intel’s Alder Lake architecture. The question is, will Alder Lake be so radical that the software ecosystem, not just games but the Windows OS itself, won't be ready? For better or worse, 2021 looks like being a pivotal year for Intel.

Can Intel solve its 10nm woes and get its CPU roadmap back on track?

This is the most critical question of all. Intel originally planned to ship 10nm chips way back in 2015. Here we are in 2021 and we’re still waiting for a full roll out of 10nm products. As we write these words, the only 10nm CPUs you can buy are mobile parts with a maximum of four cores.

Such are the ongoing limitations of Intel’s 10nm production tech that we’re expecting yet another new 14nm family of CPUs, Rocket Lake, to launch in March. That’s pretty remarkable given the first 14nm processors went on sale in 2014.

For now, all bets are off. It's worth repeating that Intel has not only failed so far to scale its 10nm chips beyond four cores, but also has so little faith in 10nm that its future roadmap contains yet another new family of 14nm desktop processors. That is a truly grim narrative.

Indeed, if Intel can’t get 10nm on track by the end of this year, it’s just possible the company that once led the world in chip manufacturing tech may begin to reconsider the very notion of producing CPUs and other chips in-house.

Intel Rocket Lake 11th Gen logos

(Image credit: Intel)

Will Intel Rocket Lake regain the gaming CPU crown?

With a maximum of eight cores, it’s fair to say that Intel’s upcoming 14nm Rocket Lake CPUs have zero chance of taking the multi-threading crown. After all, AMD will sell you a mainstream CPU platform with 16 cores and 32 threads.

But what about gaming? Few, if any, games scale well beyond eight cores, so per-core performance, or instructions per clock (IPC), plus outright operating frequency, remains critical for gaming performance.

Rocket Lake uses a new core, known as Cypress Cove, originally designed for 10nm but backported to 14nm. Closely related to the Sunny Cove cores found in Intel’s Ice Lake mobile chips, Cypress Cove will, Intel says, deliver a double-digit percentage IPC improvement over existing Comet Lake processors.

Factor in reports of Rocket Lake engineering samples running at 5.3GHz and it’s very possible Rocket Lake could take the single-core performance crown back from AMD’s Zen 3-based Ryzen 5000 processors, despite the disadvantages of that ancient 14nm process.
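As a rough sanity check on that claim, single-threaded performance scales approximately with IPC multiplied by clock speed. The sketch below runs the numbers with purely illustrative figures; the relative IPC ratios and boost clocks are assumptions for the sake of argument, not confirmed specifications:

```python
# Crude single-thread comparison: performance ~ relative IPC x boost clock.
# All figures below are illustrative assumptions, not confirmed specs.
cpus = {
    "Comet Lake (14nm)":  {"relative_ipc": 1.00, "boost_ghz": 5.3},
    "Rocket Lake (14nm)": {"relative_ipc": 1.15, "boost_ghz": 5.3},  # assumed ~15% IPC gain
    "Ryzen 5000 (Zen 3)": {"relative_ipc": 1.19, "boost_ghz": 4.9},  # assumed uplift vs Comet Lake
}

for name, cpu in cpus.items():
    score = cpu["relative_ipc"] * cpu["boost_ghz"]
    print(f"{name}: {score:.2f}")
```

Under those assumptions Rocket Lake edges ahead (roughly 6.1 against 5.8), which is why a double-digit IPC gain at 5.3GHz is enough to put the single-core crown back in play. Real-world results will also hinge on memory latency, boost behaviour and game-specific optimisation.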

The catch, apart from the eight-core limitation, will likely be heat and power consumption. With huge, power-hungry cores originally intended for 10nm making do with 14nm transistors, Rocket Lake may set records beyond single-core performance. It might set a new standard for TDP and heat generation. And not in a good way.

Oh, and one more bit of good news re Rocket Lake: it will finally bring PCI Express 4.0 support to Intel's desktop platform. That's no biggie for graphics cards, but it does mean next-gen SSDs are finally go for Intel.

Intel DG1 SDV graphics card

(Image credit: Intel)

Can Intel’s Xe graphics take the fight to AMD and Nvidia and stop GPU prices from spiralling out of control?

What with Intel’s 10nm woes and its patchy record in consumer graphics (anyone remember Larrabee?), it would be a brave soul that gave Intel the benefit of the doubt over Xe, its new graphics architecture.

And yet there are positive signs. Xe is already out in mobile form, both as the integrated graphics solution in its Tiger Lake mobile CPUs and in discrete form in DG1, a GPU designed for thin-and-light laptops.

Early signs are that, in Xe, Intel has a decent graphics architecture that should be at least reasonably competitive if it can scale up to a high-performance desktop GPU.

Intel has indeed confirmed that a high-performance graphics card, known as DG2 and aimed at enthusiasts, is coming later this year. The latest indications from Intel’s graphics driver releases point to a chip with 512 of Intel’s execution units (EUs) and raw graphics processing power comparable to AMD’s latest Radeon RX 6000 GPUs. Earlier rumours likewise pointed to a gaming card roughly on par with Nvidia’s GeForce RTX 3070.
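A quick bit of napkin maths shows why those comparisons are plausible. Assuming DG2’s execution units keep the Xe-LP layout of eight FP32 ALUs each (two operations per clock via fused multiply-add) and hit a hypothetical boost clock of around 1.8GHz, peak throughput works out like this:

```python
# Theoretical FP32 throughput for a rumoured 512-EU DG2 part.
# The clock speed is an assumption; the per-EU layout is carried over
# from Xe-LP and may differ in the final gaming chip.
eus         = 512    # rumoured execution unit count
fp32_per_eu = 8      # FP32 ALUs per EU (Xe-LP layout)
ops_per_alu = 2      # multiply + add per clock (FMA)
clock_ghz   = 1.8    # assumed boost clock

tflops = eus * fp32_per_eu * ops_per_alu * clock_ghz / 1000
print(f"Theoretical peak: {tflops:.1f} TFLOPS")  # ~14.7 TFLOPS
```

That lands in the same ballpark as AMD’s Radeon RX 6800, although theoretical TFLOPS say nothing about drivers, memory bandwidth or real-game performance.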

If that proves accurate, DG2 will inject a very welcome added level of competition into the graphics markets at a time when supply is almost non-existent and prices are spiralling out of control.

The catch, as ever with Intel, could be manufacturing. The good news, arguably, is that Intel has said production of DG2 will be farmed out to a third-party fab, which most observers assume will be TSMC. The bad news is that TSMC is already struggling to keep AMD supplied with chips, so the supply constraints squeezing the whole graphics market seem unlikely, at least initially, to be eased by another company competing for a limited allocation of TSMC wafers.

The unknown factor is whether Intel’s rumoured use of TSMC’s 6nm half-node, while AMD remains on TSMC’s 7nm, might make a difference. Could that allow Intel to ramp up some volume? Here’s hoping, because Intel Xe doesn’t need to be world beating to make a huge difference. Nvidia RTX 3070-class performance for less money would be a beautiful thing.

Intel Alder Lake render

(Image credit: Intel)

Can Intel regain performance and technological leadership with Alder Lake?

That’s the big question. Or should that be little question? After all, one of Alder Lake’s most important innovations is bringing a so-called big.LITTLE architecture to desktop and laptop PCs.

In other words, if Alder Lake arrives as expected in late 2021 it will combine large high performance cores with smaller, low power cores in a single CPU. It’s an approach first seen in ARM-based chips for phones and tablets and more recently in Apple’s M1 processor.

In that sense, it’s neither truly new nor radical. But in the context of the PC it’s both novel and potentially problematical. Firstly, while the benefits in terms of battery life for laptops of the smaller cores are obvious enough, it’s not clear why big.LITTLE is an advantageous approach at all for a desktop PC. Sure, the smaller cores will presumably lend a helping hand in multi-threaded tasks. But would the die space they take up be better utilised by more full-power cores?
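Here’s the shape of that trade-off, using purely hypothetical numbers. Assume, for the sake of argument, that one big core occupies the die area of four small cores and that each small core delivers around 40 percent of a big core’s multi-threaded throughput:

```python
# Hypothetical die-area versus throughput comparison for a hybrid CPU.
# Both ratios below are illustrative assumptions, not Alder Lake specs.
BIG_AREA, SMALL_AREA = 4.0, 1.0   # one big core = four small cores in area
BIG_PERF, SMALL_PERF = 1.0, 0.4   # relative multi-threaded throughput

def layout(big, small):
    area = big * BIG_AREA + small * SMALL_AREA
    perf = big * BIG_PERF + small * SMALL_PERF
    return area, perf

for label, big, small in [("8 big cores", 8, 0),
                          ("8 big + 8 small", 8, 8),
                          ("10 big cores", 10, 0)]:
    area, perf = layout(big, small)
    print(f"{label}: area {area:.0f}, throughput {perf:.1f}")
```

Under those assumed ratios the hybrid layout squeezes more multi-threaded throughput out of the same silicon than simply adding two more big cores (11.2 versus 10.0 from 40 area units). Whether the real ratios look anything like that, and whether games can use the extra threads, is exactly what Alder Lake will have to prove.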

Then there’s the question of software support. Without full operating system awareness of the topology of such a hybrid architecture, performance-critical threads would inevitably end up on the small cores at least some of the time, compromising performance.

Early testing of Intel’s Lakefield chip, a sort of test bed for the big.LITTLE approach that combines a single big Sunny Cove core with a quartet of low-power Tremont cores, has shown that the latest Windows kernel is capable of scheduling threads appropriately, at least to some degree. But that’s a long way from a guarantee that thread scheduling on a high-performance hybrid x86 CPU in Windows will always be optimal and transparent to applications.
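Until schedulers handle hybrid topologies transparently, one crude application-side workaround is to pin a performance-critical process to the big cores by hand. The sketch below uses the cross-platform psutil library and assumes, purely for illustration, that logical CPUs 0 to 7 map to the big cores; real software would query the topology rather than hard-code it, and would ideally pin individual threads rather than the whole process:

```python
import psutil

# Hypothetical mapping: assume logical CPUs 0-7 are the big cores.
# A real application would query the CPU topology instead of hard-coding this.
BIG_CORES = list(range(8))

proc = psutil.Process()                 # the current process, e.g. a game
print("Affinity before:", proc.cpu_affinity())
proc.cpu_affinity(BIG_CORES)            # restrict scheduling to the big cores
print("Affinity after: ", proc.cpu_affinity())
```

It’s a blunt instrument, though. The whole point of a hybrid design is lost if software has to micromanage core assignment, which is why proper OS-level support matters so much.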

For what it’s worth, AMD has been openly sceptical about the benefits of such hybrid architectures for traditional PCs. And it’s hardly a given that Intel will even get Alder Lake out the door this year. But if it does, it will certainly make for an intriguing technological battle with Intel in the unusual position of being, arguably, the underdog.

Jeremy Laird
Hardware writer

Jeremy has been writing about technology and PCs since the 90nm Netburst era (Google it!) and enjoys nothing more than a serious dissertation on the finer points of monitor input lag and overshoot followed by a forensic examination of advanced lithography. Or maybe he just likes machines that go “ping!” He also has a thing for tennis and cars.