
The most iconic, game-changing graphics cards ever made

Nvidia GeForce GTX Titan (Image credit: Fritzchens Fritz)

Sometimes it's easy to forget where we came from in PC gaming, especially while we're gazing into our crystal balls for any glimpse of the future of graphics cards. But there's actually a lot we can glean from the annals of GPU history—the colossal leaps in power and performance that GPUs have taken in under 25 years go some way to explaining why today's best graphics card costs $699.

You have to walk before you can run, and just like any trepidatious toddler, the industry made many attempts to nail an image resolution of just 800 x 600 before anyone could dream of rendering the pixel count required to run the best PC games at 4K (that's 480,000 pixels per frame vs. 8,294,400, by the way).
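
If you want to sanity-check that pixel maths, here's a quick illustrative Python snippet (assuming '4K' here means the common 3840 x 2160 UHD resolution):

# Pixels a GPU has to render for a single frame at a given resolution
def pixels_per_frame(width, height):
    return width * height

svga = pixels_per_frame(800, 600)    # 480,000 pixels
uhd = pixels_per_frame(3840, 2160)   # 8,294,400 pixels ('4K' UHD)

print(f"{svga:,} vs {uhd:,} pixels per frame")  # 480,000 vs 8,294,400
print(f"roughly {uhd / svga:.1f}x the work")    # roughly 17.3x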

Yet you'd also be surprised by just how many features so prevalent in modern GPUs were first introduced back at the dawn of the industry. Take SLI, for example. It might be on its way out right now, but dual-wielding graphics cards played an important role in the past decade of GPU performance. And would you believe it was possible for a PC user to connect two cards together for better performance way back in 1998?

But let's start right at the beginning—when active cooling was optional and there were chips aplenty.

3dfx Voodoo

3dfx Voodoo 3D accelerator (Image credit: Fritzchens Fritz)

Year: 1996 | Clock speed: 50MHz | Memory: 4/6MB | Process node: 500nm

It's March, 1996—England is knocked out of the Cricket World Cup by Sri Lanka, a young boy celebrates his fourth birthday (that's me), and 3dfx releases the first of what would be a couple of game-changing graphics cards: the Voodoo.

Clocked at just 50MHz and fitted with a whopping 4/6MB of total RAM, the Voodoo was clearly the superior card for 3D acceleration at the time. The top spec could handle an 800x600 resolution, but the lower spec was capable of only 640x480. Despite having no 2D capability of its own (it relied on a separate 2D card), the Voodoo would prove a highly successful venture, and set 3dfx on a trajectory into PC gaming fame—and eventually into Nvidia's pocket.

Note: the 3dfx Voodoo is often referred to as the Voodoo1, although that name only caught on after the release of the Voodoo2. Similarly, 3dfx would rather be formatted as 3DFX, but since most technology companies also wish to be styled in some special way, nobody gets their wish.

Nvidia Riva 128

Nvidia Riva 128 ZX 3D accelerator (Image credit: Fritzchens Fritz)

Year: 1997 | Clock speed: 100MHz | Memory: 4MB | Process node: SGS 350nm

Personal pick: Nvidia 8800GT

Nvidia 8800 GT graphics card (Image credit: Future)

Dave James: I've gamed on more graphics cards than I can remember. My first Voodoo2 was transformative, the Riva TNT was ace, and I've since had twin Titans and dual-GPU Radeon cards in my home rigs, but none hold so dear a place in my heart as the glorious 8800GT. Forget the monstrous 8800GTX, the simple GT combined stellar performance, great looks, and incredible value. I've still got my single-slot, jet-black reference card (pictured above), and will never part with it.

A chipset company by the name of Nvidia would soon offer real competition to 3dfx in the form of the Nvidia Riva 128, or NV3. The name stood for 'Real-time Interactive Video and Animation', and it integrated both 2D and 3D acceleration into a single chip for ease of use. It was a surprisingly decent card following the Nvidia NV1, which had tried (and failed) to introduce quadratic texture mapping.

This 3D accelerator doubled the initial spec of the Voodoo1 with a 100MHz core/memory clock, and came with a half-decent 4MB of SGRAM. It was the first Nvidia card to really gain traction in the market, and if you take a look at the layout—memory surrounding a single central chip—you can almost make out the beginnings of a long line of GeForce cards, all of which follow suit.

But while it offered competition to 3dfx's Voodoo1, and higher resolutions, it wasn't free of its own bugbears—nor would it be alone in the market for long before 3dfx issued a response in the Voodoo2.

Pictured above is the Nvidia Riva 128 ZX, a slightly refreshed take on the original chip.

3dfx Voodoo2

3dfx Voodoo2 3D accelerator (Image credit: Fritzchens Fritz)

Year: 1998 | Core clock speed: 90MHz | Memory: 8/12MB | Process node: 350nm

Now this is a 3D accelerator that requires no introduction. Known far and wide for its superb performance at the time, the Voodoo2 is famed for its lasting impact on the GPU market.

A smorgasbord of chips, the Voodoo2 featured a 90MHz core/memory clock, 8/12MB of RAM, and supported SLI (Scan-Line Interleave)—once connected via a port on twinned cards, the Voodoo2 could support resolutions up to 1024 x 768.
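
With Scan-Line Interleave, each card in the pair rendered alternate horizontal lines of the frame, roughly halving the fill-rate burden on each. Here's a toy Python sketch of the idea (the even/odd split is illustrative, not a model of 3dfx's actual hardware logic):

# Toy model of Scan-Line Interleave: two cards divide a 1024 x 768 frame,
# each rendering every other horizontal scan line.
HEIGHT = 768

card_a = [y for y in range(HEIGHT) if y % 2 == 0]  # even scan lines
card_b = [y for y in range(HEIGHT) if y % 2 == 1]  # odd scan lines

# Each card only ever touches half the lines in the frame
print(len(card_a), len(card_b))  # 384 384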

3dfx managed to stay on top with the Voodoo2 for some time, but it wasn't long until the company would make a few poor decisions and be out of the graphics game entirely.

Nvidia GeForce 256

Nvidia GeForce 256 graphics card (Image credit: Wikimedia - User: Hyins)

Year: 1999 | Core clock speed: 120MHz | Memory: 32MB DDR | Process node: TSMC 220nm

The first card bearing the GeForce name still in use today, the GeForce 256 was also the "world's first GPU". 'But what about the Voodoos and the Rivas?', I hear you ask. Well, clever marketing on Nvidia's part has the GeForce 256 stuck firmly in everyone's minds as the progenitor of modern graphics cards, but it was really just the name Nvidia gave its single-chip solution: a graphics processing unit, or GPU.

Personal pick: Gainward GeForce4 Ti 4200

Gainward GeForce4 (Image credit: Gainward)

Paul Lilly: You always remember your first, and for me, it was the Voodoo2, which found a home in an overpriced Compaq I bought from Sears (before I started building my own PCs). But it's the Gainward GeForce4 Ti 4200 I bought several years later that stands out as one of my best GPU purchases. It arrived a couple of months after Nvidia launched the Ti 4600 and 4400, and quickly cemented itself as the clear bang-for-buck option over those two pricier models. It was fast, comparatively affordable, and dressed in a bright red PCB.

As you can probably tell, this sort of grandiose name, a near-parallel to the central processing unit (CPU) that had been raking in ungodly wads of cash since the '70s, was welcomed across the industry.

That's not to say the GeForce 256 wasn't a worthy namesake, however, integrating acceleration for transform and lighting (T&L) into the newly minted GPU, alongside a 120MHz clock speed and 32MB of DDR memory (for the high-end variant). It also fully supported Direct3D 7, which would allow it to enjoy a long lifetime powering some of the best classic PC games released at that time.
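
To give a sense of what hardware T&L actually offloads, here's a minimal Python sketch of the 'transform' half: multiplying each vertex by a 4x4 transform matrix, work that previously ran on the CPU for every vertex of every frame (the matrix and vertex values below are arbitrary examples):

# Minimal sketch of the 'transform' in T&L: applying a 4x4 matrix to a
# homogeneous vertex (x, y, z, 1). Values are illustrative only.
def transform(matrix, vertex):
    return tuple(
        sum(matrix[row][col] * vertex[col] for col in range(4))
        for row in range(4)
    )

# A simple translation by (2, 0, -5) expressed as a 4x4 matrix
mvp = [
    [1, 0, 0, 2],
    [0, 1, 0, 0],
    [0, 0, 1, -5],
    [0, 0, 0, 1],
]

print(transform(mvp, (1.0, 1.0, 0.0, 1.0)))  # (3.0, 1.0, -5.0, 1.0)

Multiply that by thousands of vertices per frame, plus per-vertex lighting, and it's easy to see why moving it onto dedicated silicon was such a big deal.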

Here's an excerpt from a press release on integrated transform and lighting that Nvidia released that year:

"The developers and end-users have spoken, they're tired of poor-quality artificial intelligence, blocky characters, limited environments and other unnatural looking scenes," said David Kirk, chief technologist at Nvidia. "This new graphics capability is the first step towards an interactive Toy Story or Jurassic Park."

Nvidia GeForce 8800 GTX

Nvidia GeForce 8800 GTX (Image credit: Future)

Year: 2006 | Core clock speed: 575MHz | Memory: 768MB GDDR3 | Transistors: 681 million | Process node: TSMC 90nm

Once Nvidia rolled out the GeForce 8800 GTX, there was no looking back. If you want to talk about a card that really got people's attention, and threw aside all regard for compact size, it's the GeForce 8800 GTX.

With 128 Tesla cores inside the much-anticipated G80 GPU, and 768MB of GDDR3 memory, the 8800 isn't an unfamiliar sight for a modern GPU shopper—even if it might be a little underpowered by today's standards. It ruled over the GPU market at launch, and stuck around for some time afterwards thanks to a unified shader model introduced with the architecture and Direct3D 10. And it could run Crysis.

ATI Radeon HD 5970

AMD Radeon HD 5970 (Image credit: Future)

Year: 2009 | Core clock speed: 725MHz | Memory: 2,048MB GDDR5 | Transistors: 4,308 million | Process node: 40nm

And what's AMD been doing all this time? Semiconductor company ATI was busy building heaps of bespoke console chips right the way through the '90s and early 2000s, and made some excellent GPUs in its own right, such as the X1900 XTX. It was purchased by AMD in 2006, and the Radeon brand was gently incorporated under the AMD umbrella. After the abortive HD 2000 and 3000 series, the HD 4870 and 4850 were quality cards, but the one which made the biggest splash after the move was the Radeon HD 5970.

The Radeon HD 5970 was essentially a large Cypress GPU, a 1,024MB pool of memory, and a sizeable 256-bit memory bus… multiplied by two. Yep, this dual-GPU enthusiast card was doubly good due to its doubled component count, and it used all that power to lord over the gaming landscape.

The tradition of stuffing two chips into one shroud continued right the way up to the AMD Radeon R9 295X2 and the Nvidia Titan Z. But once multi-GPU support started dwindling, single-GPU cards became the predominant form factor. Furthermore, with multi-GPU support resting entirely in the developers' court since the introduction of DirectX 12, dual-GPU cards may never make a return.

Nvidia GTX Titan

Nvidia GTX Titan graphics card, side on (Image credit: Future)

Year: 2013 | Core clock speed: 837MHz | Memory: 6,144MB GDDR5 | Transistors: 7,080 million | Process node: TSMC 28nm

Nvidia released the original GTX Titan back in 2013, and at the time it was intended to sit atop the gaming world as the pinnacle of performance. It was arguably the first ultra-enthusiast consumer card to launch.

The GTX Titan came with 2,688 CUDA cores and could boost up to 876MHz. The memory, a whopping 6GB of GDDR5, remains more capacious than even some of the best graphics cards sold today, and with a 384-bit bus it was capable of keeping up with just about anything at the time.

Nvidia threw sense out of the window with the original GTX Titan, and it was enough to get anyone's blood pumping. A single GPU with a state-of-the-art architecture that could tear through the latest games—suddenly two cards became an unnecessary nuisance.

Nvidia GeForce GTX 1080 Ti

Nvidia GeForce GTX 1080 Ti graphics card from multiple angles (Image credit: Nvidia)

Year: 2017 | Core clock speed: 1,480MHz | Memory: 11GB GDDR5X | Transistors: 12bn | Process node: TSMC 16nm

There's no overlooking the Pascal architecture, and its crowning glory, the GTX 1080 Ti, when discussing the most influential graphics cards to date. Pascal was a truly dominant high-end architecture from the green team, and the company's efforts in improving performance per watt up to that point came together to create one of the finest cards yet, one we're still enamoured with today.

At the time, the 250W GTX 1080 Ti was capable of dominating the field in gaming performance. It crushed the Titan X Pascal across the board, and even when Nvidia finally put together the faster $1,200 Titan Xp sometime later, the $699 GTX 1080 Ti damn near crushed that one, too—in performance per dollar, it was the obvious best pick.

AMD Radeon RX 580

AMD Radeon RX 580 graphics card from the side (Image credit: Future)

Year: 2017 | Core clock speed: 1,257MHz | Memory: 4GB/8GB GDDR5 | Transistors: 5.7bn | Process node: GlobalFoundries 14nm

Personal pick: Nvidia GeForce 6600 GT

Nvidia GeForce 6600 GT (Image credit: Axle)

Alan Dexter: I've always had a fondness for budget graphics cards, and the release that stands out for me is the GeForce 6600GT. A true budget offering ($200 at launch), it had an ace up its sleeve in the shape of SLI support. So while you could happily just about get by with one card, sliding another identical card into your system almost doubled performance (except when it didn't... hey, SLI was never perfect).

It would be remiss of me not to mention the RX 580, and by extension the RX 480 it was built on, in this round-up. This highly affordable Polaris GPU was my personal go-to recommendation for anyone looking to build their first PC for 1080p gaming.

Sitting a touch above the RX 570, which eventually helped redefine entry-level desktop gaming performance, the RX 580 is the epitome of AMD's fine-wine approach. Yeah, forget the faux Vega vintage, it was the Polaris GPUs that only ever got cheaper and more performant as time went on.

The RX 580 has done more for PC gaming affordability than most graphics cards today. Hell, it was such a superb card even the crypto-miners couldn't get enough. It set a standard against which all subsequent AMD and Nvidia mainstream GPUs would be compared, and we're all the better for it.

Nvidia GeForce RTX 3080

Nvidia RTX 3080 Founders Edition (Image credit: Future)

Year: 2020 | Core clock speed: 1,440MHz | Memory: 10GB GDDR6X | Transistors: 28.3bn | Process node: Samsung 8nm

The generational leap the Nvidia RTX 3080 delivered when it was released in 2020 makes it an instant classic and an easy addition to the list. As the flagship graphics card of the Ampere generation, it's actually not the most powerful card in the family. But it's so close to the performance of the Nvidia RTX 3090 that it almost makes that $1,500 GPU irrelevant. It certainly does for gamers, at least.

Launched at $699, it comfortably outperformed the top GPU of the previous GeForce generation, the Nvidia RTX 2080 Ti, a card that was the very definition of graphics performance at all costs, coming in at $1,200. The architecture switch, which gave it almost double the FP32 CUDA cores, made it a stellar rasterized graphics chomper, while its 2nd Gen RT Cores and 3rd Gen Tensor Cores practically negated the performance impact of enabling real-time ray traced effects in-game.

AMD Radeon RX 6800 XT

AMD Radeon RX 6800 XT (Image credit: AMD)

Year: 2020 | Core clock speed: 2,015MHz | Memory: 16GB GDDR6 | Transistors: 26.8bn | Process node: TSMC 7nm

The AMD RX 6800 XT makes it onto the list as one of the finest Radeon GPUs ever made, and one of the best graphics cards around. From essentially nowhere in the high-end GPU game, AMD managed to turn its Navi architecture from mainstream to world-beating in one generation. It's an incredible feat, and it shows the impact the Zen engineers have had on the graphics side of the company.

For less than the price of the RTX 3080, the RX 6800 XT mostly matches the GeForce card in the standard benchmarks and only looks off the pace when it comes to the still-niche world of ray traced gaming. It's a monster of a card, a monster of a GPU, and proved that AMD was back in the high-end graphics card fight.

There's no 'Silicon Valley' where Jacob grew up, but part of his home country is known as 'The Valleys', and can therefore be easily confused for a happening place in the tech world. From there he graduated to professionally break things, and then write about it for cash, in the city of Bath, UK.