Nvidia is set to spend at least $10B to secure its share of limited 5nm chip supply

Nvidia GPU
(Image credit: Nvidia)

If you’re a fabless company that relies on external chip manufacturing from the likes of TSMC, you’re forced to compete for limited wafer starts, particularly on bleeding-edge nodes. A smaller node can fundamentally alter a chip's performance, and even its basic viability. That's why industry giants like Apple, Qualcomm, AMD and Nvidia all spend billions to secure their supply. If they don’t pay up, there are no (or at least fewer) chips for them!

According to Hardware Times, Nvidia is set to pay the exorbitant sum of nearly $10 billion to secure its share of TSMC 5nm capacity. Nvidia is gearing up to produce its next-generation Ada Lovelace GPUs on TSMC's N5 node, and this next-gen GPU family is likely to be sold as the RTX 40 series. No one outside the management of both companies knows exactly how the fees break down or over what timeframe, but as Hardware Times notes, Nvidia spent $9 billion in the third quarter alone on inventory and prepayments for future products. That would indicate Nvidia is already paying through the nose well in advance.

Securing 5nm production is critical to the success of the RTX 40 series. A node shrink can deliver a combination of higher clocks and better power efficiency, and that can make the difference between winning and losing a head-to-head battle. To oversimplify things, Nvidia's use of Samsung 8nm may be one reason its current products can't clock as high as AMD's 7nm products, despite using more power.

The other advantage of a smaller node is better yield. A 300mm wafer can fit more chips on it if the chips themselves are smaller, and a smaller die is also less likely to land on a defect, so a higher proportion of them work. More good chips per wafer equals more profit.
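As a back-of-the-envelope sketch of that point, here's the standard gross-dies-per-wafer approximation paired with a simple Poisson yield model. All the numbers (die areas, the 0.1 defects/cm² density) are illustrative assumptions for the comparison, not actual TSMC or Nvidia figures:

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Estimate gross dies on a round wafer: wafer area divided by die
    area, minus a correction term for partial dies lost at the edge."""
    radius = wafer_diameter_mm / 2
    dies = (math.pi * radius**2) / die_area_mm2 \
         - (math.pi * wafer_diameter_mm) / math.sqrt(2 * die_area_mm2)
    return int(dies)

def poisson_yield(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Poisson yield model: fraction of dies expected to be defect-free."""
    die_area_cm2 = die_area_mm2 / 100
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Hypothetical comparison: a large die vs a smaller one on a 300mm wafer.
for area_mm2 in (628, 380):
    n = gross_dies_per_wafer(300, area_mm2)
    y = poisson_yield(area_mm2, 0.1)  # assumed 0.1 defects/cm^2
    print(f"{area_mm2} mm^2 die: ~{n} gross dies, ~{y:.0%} defect-free")
```

Shrinking the die helps twice over: you fit more candidates on the wafer, and each candidate is more likely to come out working.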

Tips and advice


How to buy a graphics card: tips on buying a graphics card in the barren silicon landscape that is 2022

The GPU shortage of 2021 may have caused gamers plenty of anxiety, but it sure didn't do Nvidia any harm: the company reported gaming revenue of $3.42 billion in the January quarter, up 37% from the same period a year prior.

The great hope is that mining demand is beginning to drop off, as crypto prices fall and PoW GPU mining falls out of favour, leaving more graphics cards to make their way into the hands of gamers, and not those of miners.

As gamers, all we want is reasonably priced options available when we want them. Let's hope there's enough 5nm capacity to go around to at least satisfy gamer demand. Along with Arc cards from Intel, the next heavyweight battle begins later in 2022: AMD RDNA 3 vs Nvidia RTX 40, both on the same process node. Pound for pound, punch for punch. Fight!

Chris Szewczyk
Hardware Writer

Chris' gaming experiences go back to the mid-nineties when he conned his parents into buying an 'educational PC' that was conveniently overpowered to play Doom and Tie Fighter. He developed a love of extreme overclocking that destroyed his savings despite the cheaper hardware on offer via his job at a PC store. To afford more LN2 he began moonlighting as a reviewer for VR-Zone before jumping the fence to work for MSI Australia. Since then, he's gone back to journalism, enthusiastically reviewing the latest and greatest components for PC & Tech Authority, PC Powerplay and currently Australian Personal Computer magazine and PC Gamer. Chris still puts far too many hours into Borderlands 3, always striving to become a more efficient killer.