The Nvidia RTX 3090 has rolled into town, offering another tantalising glimpse at the green team's new Ampere GPU architecture, all while the world waits to find out when it can lay hands on an RTX 3080 of its very own. But progress, she waits for no-one, and so the new most powerful graphics card in the world is up next on the benchmark block.
But the GeForce RTX 3090 is a special class of graphics card. It sports the most powerful GPU in the Ampere family, comes with a vast frame buffer, and an equally large price tag. In fact the green team wants to coin a whole new class all of its very own: the big ferocious GPU, or BFGPU.
And it's only called that because, as a PG-13 company, Nvidia can't actually be seen to call it a 'big fucking GPU.' But it most certainly is; the RTX 3090 is MASSIVE.
Unfeasibly so, in fact. It makes my standard MSI Z490 motherboard look like a mini ITX just by its sheer weight of presence. And it's also worth noting it has a not inconsiderable actual weight too, though with a triple slot bracket there is at least some robustness to its mounting. Your PCIe slot could be in for a hammering if you are wont to shift your PC around a lot, however.
But in whose PC is this chonk of a $1,500 graphics card going to find itself? The obvious answer is that the RTX 3090—the fastest graphics card now available to humanity—will be snapped up by the elite PC gamer who just has to have the most powerful GPU they can possibly strap into their machine. We all know the type. They don't care about value, common sense, or the fact that at 1440p you're only getting around 7 percent higher frame rates on average than an Nvidia RTX 3080.
You tell them the cheaper Ampere card will absolutely deliver all the gaming performance their QHD gaming monitor can handle and they'll still turn around and tell you they want to drop the entirety of their bank balance on an RTX 3090 because they couldn't handle just having the second fastest graphics card.
There is, however, another PC user to whom this monster GPU makes far more sense: the pro-creator. Sure, your big 3D studios and development houses are going to be chock full of Quadro or Tesla GPUs—those professional class Nvidia graphics cards with serious software optimisations and support levels—but there is another class of user that will find a $1,500 GPU eminently worthwhile if it can save them time, pain, and suffering while working on a graphics-intensive job.
These people are arguably even more prevalent in our working-from-home times. The home-working 3D artist, the freelance editor operating out of a shed at the bottom of their garden: for these people, time literally is money. And if a consumer GPU can halve their render time, and enable them to work with larger data sets in almost real time, you can bet they're willing to drop serious cash on it, even if they can't drop Quadro cash. Perhaps especially if they can't afford Quadro cash.
This is the Ampere generation's Titan. That's how you justify selling a GeForce GPU that offers only 11 percent higher 4K gaming performance than its closest sibling at a 114 percent higher sticker price.
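For what it's worth, the arithmetic behind that price figure is simple. A quick sketch, assuming the launch MSRPs of $699 for the RTX 3080 and $1,499 for the RTX 3090:

```python
def percent_increase(base: float, new: float) -> float:
    """Percentage increase going from base to new."""
    return (new - base) / base * 100

# Launch MSRPs: RTX 3080 at $699, RTX 3090 at $1,499
price_premium = percent_increase(699, 1499)
print(f"Sticker price premium: {price_premium:.0f}%")  # Sticker price premium: 114%
```

Set that 114 percent against the roughly 11 percent average 4K frame rate gain measured later in this review and the value argument for gamers collapses on its own.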
Nvidia RTX 3090 Specs
We've covered the new Nvidia Ampere architecture in depth in our RTX 3080 Founders Edition review. It's a smart architectural design, which constructs the GPU on the triple pillars of a redesigned SM with twice as many FP32 units, 3rd gen Tensor Cores, and smarter 2nd gen RT Cores. The RTX 3090 uses the same architecture, and in fact the same essential GPU, as the RTX 3080.
GPU - GA102
Lithography - Samsung 8nm
Die size - 628.4mm2
Transistors - 28.3 bn
CUDA cores - 10,496
SMs - 82
RT Cores - 82
Tensor Cores - 328
GPU Boost clock - 1,695MHz
Memory bus - 384-bit
Memory capacity - 24GB GDDR6X
Memory speed - 19.5Gbps
Memory bandwidth - 936GB/s
TGP - 350W
MSRP - $1,500
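Those memory specs are internally consistent, by the way: peak bandwidth is just the per-pin data rate multiplied by the bus width. A quick sanity check (the 14Gbps GDDR6 rates for the two Turing cards are my assumption, back-derived from their quoted bandwidth figures):

```python
def peak_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin rate (Gbps) x bus width (bits) / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gb_s(19.5, 384))  # RTX 3090, GDDR6X:   936.0
print(peak_bandwidth_gb_s(14.0, 384))  # Titan RTX, GDDR6:   672.0
print(peak_bandwidth_gb_s(14.0, 352))  # RTX 2080 Ti, GDDR6: 616.0
```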
Its GA102 GPU is more core-rich than the 8,704-core silicon at the heart of the RTX 3080, though it still doesn't come with the full complement of CUDA cores available to the complete design. But with the RTX 3090 sporting 10,496 CUDA cores, we can probably forgive Nvidia for keeping those two SMs and 256 cores back in reserve for emergencies.
Whether we're looking at a comparison against the RTX 2080 Ti or the Titan RTX from the Turing generation—at 4,352 and 4,608 CUDA cores respectively—we're still talking about more than twice the floating point units. Those last-gen cards both used the same GPU, though the $2,000 Titan RTX houses 256 more CUDA cores than the $1,200 RTX 2080 Ti, and comes with a full 24GB of GDDR6 versus 11GB.
On a spec-for-spec basis, then, the Titan RTX is arguably the real comparison here. But I'm always up for an argument, and I'd say that at this price point, and given that the RTX 2080 Ti more or less occupied the Titan segment at the initial launch of the 20-series, the Ti card is the closer comparison.
There are more RT Cores in the RTX 3090 than in either last-gen GPU, yet fewer Tensor Cores. Those little silicon extras represent Nvidia's AI smarts, and the company figured it could get more performance by dropping fewer into the GPU, but making them work harder.
There are notably more CUDA cores in the RTX 3090 than in the RTX 3080, but what gives the top 30-series card its real relevance is that huge frame buffer. At 24GB it really is leaning into proper Titan territory. That was the one place the RTX 2080 Ti wasn't really able to shoulder the Titan burden, and where the subsequent Titan RTX golden boi, with its own 24GB of GDDR6 VRAM, could.
The RTX 3090, however, is working with another step up in video memory performance. That 24GB is built on the newer GDDR6X technology, offering much greater throughput, higher speeds, and memory bandwidth of 936GB/s versus the 672GB/s of the Titan RTX or the 616GB/s of the 11GB RTX 2080 Ti.
The latest RTX 30-series card is clocked slower, however, with a boost clock of just 1,695MHz. That's still far higher than the RTX 2080 Ti, but a shade off the pace of the Titan RTX.
But this is where we need to talk about the cooler, because Nvidia's GPUs don't pay a whole lot of attention to the rated Boost clock speed if they have the cooling chops to go further. And that's going to be something to pay attention to when it comes to the third-party versions to follow. In my testing this Founders Edition card peaks at just 70°C, with an average gaming temperature of 65°C, and that's what allows the GPU inside to actually average out at 1,787MHz. Sometimes a lot quicker.
It's the same innovative offset push-pull cooler design as the RTX 3080, but bigger. Honestly, it's as if the designer simply hit the multiplier button in the CAD package and scaled the whole frame up. It's hilarious seeing them cheek by jowl; I thought the RTX 3080 looked pretty sizeable until I put the RTX 3090 next to it. But damn, is it effective. It's cool, quiet, and did I mention it's quite intimidating too? I'm going to need a bigger case…
Nvidia RTX 3090 benchmarks and performance
So yeah, here it is: the fastest graphics card… sorry, I'm getting kinda bored of saying that each time a new GPU rolls up. But you know the drill—it's newer, with a chunkier GPU, is more expensive, and therefore faster. The big question, however, is just how much faster it is than the already spectacularly good RTX 3080.
That inaugural Ampere GPU—hailed by Jen-Hsun as the flagship card of the new RTX 30-series despite knowing the RTX 3090 was following a week after the initial launch—already delivers gen-on-gen performance that makes the RTX 2080 Ti look like a mid-range GPU. That's no mean feat from a card that's almost half the price.
You can see why it's been in such demand post release.
And the RTX 3090's own gaming performance only goes to highlight what a stunning new GPU Nvidia has unleashed, and what an uphill battle AMD has got on its hands when it comes to bringing a pair of meaningful RDNA 2, Navi 21 cards to the table in October/November.
And why is that? Because it's not actually that much quicker. Certainly not enough to justify that huge sticker price for the average gamer. I did test 1080p performance, because I'm a completist, at least when it comes to benchmarking. But it's irrelevant... honestly if you're using an RTX 3090 to power a 1080p panel you deserve a slap. Sure, I've been running Football Manager 2020 on it, but at least I've been running it at 4K.
It's borderline pointless discussing its 1440p performance as well; hell, the RTX 3080 itself is almost getting on for CPU-bound at that level in a lot of games, which partially explains why I've seen just a 7.28 percent average performance gain for the RTX 3090 at 1440p across my test suite. At best you're talking 12 percent in Horizon Zero Dawn, and the same in Metro Exodus with RTX enabled.
We really just need to be looking at 4K performance, and there it's still not a huge uplift over its slightly older Ampere sibling, at just 11.16 percent faster on average. Again it's Metro Exodus with RTX which gives the big win, and even then only at 16 percent higher than the RTX 3080.
Told you, the RTX 3080 really is ****ing great.
But Nvidia is also talking about 8K gaming, and that the RTX 3090 can actually deliver playable gaming frame rates, with ray tracing enabled, at that heady resolution. It's niche, I'll give you that, but it's also true. In certain circumstances.
Nvidia has created a new DLSS 8K level, or DLSS Ultra Performance. It's a 9x AI super resolution, which essentially upscales a 1440p gameworld to deliver 8K gaming at reasonable frame rates. The promise is that DLSS 8K offers higher image fidelity at upscaled 8K than native 1080p or native 4K images.
I don't currently have an 8K panel to test the veracity of that, but I can speak to the actual performance, with a margin of around 3 percent variance owing to faking it with dynamic super resolution (DSR) via the Nvidia driver. Anyways, using the standard DLSS version currently baked into Wolfenstein: Youngblood I managed a still pretty healthy 45fps at 8K. Shifting to the new DLSS Uberperformance mode (because of course it is…) had the Nazi robot dogs running at 77fps.
Remember 8K is 33 million pixels. Each frame. That's four times the raw pixel processing demands of 4K.
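Those figures check out if you run the numbers (standard display resolution dimensions assumed):

```python
# Raw pixels per frame at standard display resolutions
resolutions = {
    "1440p": (2560, 1440),
    "4K": (3840, 2160),
    "8K": (7680, 4320),
}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

print(f"8K: {pixels['8K']:,} pixels per frame")  # 8K: 33,177,600 pixels per frame
print(f"8K vs 4K: {pixels['8K'] / pixels['4K']:.0f}x")  # 8K vs 4K: 4x
print(f"8K vs 1440p: {pixels['8K'] / pixels['1440p']:.0f}x")  # 9x, hence the 9x DLSS upscale
```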
Not all games support DLSS Ultra Performance yet, however, not even all those which support DLSS (Metro Exodus hit 32fps at 8K while ray tracing). So it's still a little too niche a feature to completely hang a $1,500 purchase on, impressive start though it is.
But really this isn't a card made for the gamer. Yes, PC gamers will buy it, and five-figure system builds from boutique PC manufacturers will sell a bunch I'm sure, but they're almost by-products of Nvidia slapping the GeForce branding on a proper Titan card, and admitting as such.
CPU - Intel Core i7 10700K
Motherboard - MSI MPG Z490 Gaming Carbon WiFi
RAM - Corsair Vengeance RGB Pro @ 3,200MHz
CPU cooler - Corsair H100i RGB Pro XT
PSU - NZXT 850W
Chassis - DimasTech Mini V2
Monitor - Philips Momentum 558M1RY
And this is where it gets a bit trickier, and I have to go a little off-piste for PC Gamer and into serious rendering territory. I've taken the most recent versions of Maya and Blender for a spin, using the latest updates and RTX rendering plugins, to see what that 24GB of frame buffer will get you.
In short, it gets you some speedy render times, access to features that would have otherwise taken an age to deliver, and real-time manipulation of scenes with huge data footprints.
To be fair, the RTX 3080 already does a lot of that. You can see in the comparative render times that the cheaper Ampere card hoses the RTX 2080 Ti in every test, and again isn't far off the RTX 3090. But those data points don't really speak to the experience of actually trying to manipulate and manage a graphically intense scene in Maya.
It's not quite real-time, but if you want to shift depth of field or focal lengths on an RTX 3080 you're in for an often frustrating time. It can exhaust that 10GB frame buffer surprisingly quickly and start eating into system memory. And that's when it chugs. The RTX 3090, on the other hand, is better able to keep such scenes entirely within the frame buffer, which will save a huge amount of time in the actual creation process, if not the final render.
Thermals and power
Finally, as you might expect, this 350W TGP card chews on a lot of power to do what it does. Though, that said, in my testing it was only a shade ahead of the lower-spec, third-party Asus TUF Gaming RTX 3080 and Colorful RTX 3080 cards I've put through the wringer.
Nvidia RTX 3090 verdict
This is a toughie because there are very, very few people who I would recommend the $1,500 RTX 3090 to. And none of them are gamers.
This is every inch the Titan card Jen-Hsun said it would be. It's a creator's card, one with a stunningly powerful GPU and a frame buffer that allows personal creation on a level not seen on a GPU this side of $6,000. I hesitate to speak to any value proposition here, but for someone with paid 3D art, or high-end video editing gigs in the pipeline, a $1,500 graphics card with this much power inside it could help pay for itself in a relatively short amount of time.
I've worked with artists (you know who you are, Paul) who picked up the very first Titan the day it launched because of what it meant for their work. It's not some fictional customer type magicked up by the Nvidia marketing department, the Titan cards really were in demand.
On the gaming side, though, the RTX 3090 offers little extra performance over the far more affordable RTX 3080, and I struggle to see why you'd put the money down if all you were aiming to do was hit Night City at 4K. But then I've never been the sort who could justify such wanton excesses of hardware buying just to have the very latest, the very fastest, and hang the financial sense of it.
Maybe that's why I struggle with the GeForce branding being used for a Titan class card. Keep it a Titan. Keep it ostensibly for the creators, but with a little nod to the ultra-enthusiasts. Again, I know some of those people too, people who are willing to spend what it takes to be at the forefront of gaming, who genuinely would regret their RTX 3080 purchase knowing there was an RTX 3090 out there offering higher frame rates.
But Nvidia maintains that by giving it the GeForce moniker it's able to take it out of the exclusive realm the Titan sits in—being sold only in Nvidia trim, from Nvidia's own store—and offer it out to the third-party AIBs to create their own versions. That's fine if it means there are more cards on offer for those who want one, but I can't help but feel we're going to see RTX 3090s sitting dangerously close to the $2,000 mark just for the love of a needless factory overclock. If not more.
The even more cynical side of me wonders if the benefit of having at least two GeForce GPUs ahead of the incoming red-tinted competition was given more than a passing thought as the decision was being made to not title it a Titan. The AMD Navi 21 cards are expected to be sitting somewhere between the RTX 3080 and RTX 3070 after all.
And hell, there's probably still an actual Ampere Titan in the post. There's still that full GA102 silicon as an option, but it does call into question the rumours of the 20GB RTX 3080. If that card does find its way out into the wild it's going to make the RTX 3090 an even tougher recommendation when even its VRAM lead is cut.
For now though, the RTX 3090 is today's GPU Lamborghini.
Well, maybe it's more like a 5-door Lamborghini with a really capacious boot. One you could also use moonlighting as a courier, or Uber driver. A practical supercar, for a specific user then.
Alright, I'm losing this analogy now, and likely losing you, but I guess what I'm saying is that this humongous, almost novelty-scale graphics card is supremely powerful and in some circumstances makes utter, practical sense. It's not worth the money for ninety percent of PC gamers, but it's potentially very worth it for today's booming WFH army.