NVIDIA launches GeForce GTX 670, Zotac's AMP! Reviewed


Purveyor of graphics chips NVIDIA has followed up on the launch of its GeForce GTX 680 with the second card in the 600 series, the GeForce GTX 670. On the face of it, the two cards are almost identical: both are based on the same Kepler-design GK104 chip, with 2GB of GDDR5 memory running on a 256-bit memory bus.

That GK104 is, in both cases, produced on the same 28nm process, and the only real difference is that the GTX 670 has 1344 CUDA cores activated compared to the GTX 680's 1536. Plus, of course, it's a bit cheaper. Since we decided the GTX 680 was good but overkill for most people's needs, is the 670 a better bet?

To find out, I've had both an NVIDIA reference card and Zotac's AMP! Edition in the office to test. Pricewise, stock GTX 680s go for well over £400, while the GTX 670 has an RRP of £329/$399. Zotac's GTX 670 AMP! Edition, meanwhile, is £30 more expensive than a stock 670, at £359. Pricey, but if it can equal the performance of a full GTX 680, potentially not terrible value.

The Zotac card has a base clock speed of 1098MHz compared to 915MHz for the stock GTX 670, and an exceptionally attractive customised cooler with jutting copper heatpipes and an all-metal cover that makes me want to build a steampunk case to put it in.

That heatsink is also huge, so the card takes up three slots in total, although it is pretty quiet, all things considered.

To recap the main features of Kepler, it's NVIDIA's first GPU that can overclock itself within certain thermal limits, using technology not dissimilar to Intel's Turbo mode for its CPUs. The drivers are flexible too: using the manufacturer-supplied tools, you can overclock the card manually, or set the turbo mode to be more or less aggressive depending on your proclivities. Alternatively, it can be set to a frame rate target, limiting the maximum output if you're running older games and it makes no sense to hit 200-odd frames per second.
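To picture what that self-overclocking behaviour amounts to, here's a toy sketch of a boost control loop in Python. The clock values, limits, and step size are invented for illustration; NVIDIA's actual GPU Boost algorithm is proprietary and considerably more sophisticated.

```python
# Toy sketch of a "boost within thermal limits" control loop.
# All numbers here are illustrative, not NVIDIA's real parameters.

BASE_CLOCK_MHZ = 915    # stock GTX 670 base clock
MAX_BOOST_MHZ = 1084    # hypothetical boost ceiling
TEMP_LIMIT_C = 80       # hypothetical thermal budget
POWER_LIMIT_W = 170     # hypothetical power budget

def boost_step(clock_mhz, temp_c, power_w, step=13):
    """Raise the clock one step if under budget, back off otherwise."""
    if temp_c < TEMP_LIMIT_C and power_w < POWER_LIMIT_W:
        return min(clock_mhz + step, MAX_BOOST_MHZ)
    return max(clock_mhz - step, BASE_CLOCK_MHZ)

# Cool and under the power budget, so the clock climbs one step.
print(boost_step(915, temp_c=65, power_w=140))   # 928
# Over the thermal limit, so the clock backs off.
print(boost_step(1000, temp_c=85, power_w=140))  # 987
```

The frame rate target works the other way around: instead of pushing the clock up to a limit, the driver holds performance down to a cap, saving power and noise in undemanding games.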

Compared to its predecessor Fermi, which powered the GeForce 500 series, Kepler has far more shader cores, but they run at slower speeds.

The GK104 itself is based around eight 'SMX' units, each of which has 192 CUDA cores and 16 texture units. The GTX 670 is a slightly cut-down version with just seven of the SMX units turned on and one turned off – a common practice among CPU and GPU vendors to make use of chips that don't quite make the grade for selling as full variants.
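The headline core counts fall straight out of that SMX layout, as a quick bit of arithmetic shows:

```python
# CUDA core counts follow directly from the GK104's SMX layout.
CORES_PER_SMX = 192

gtx_680_cores = 8 * CORES_PER_SMX  # all eight SMX units active
gtx_670_cores = 7 * CORES_PER_SMX  # one SMX unit disabled

print(gtx_680_cores)  # 1536
print(gtx_670_cores)  # 1344
```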

Given NVIDIA's history in the area, it's unlikely you'll be able to find a way to unlock the disabled SMX unit.

You can, however, use it to run up to four monitors simultaneously and it has outputs for DisplayPort 1.2 and HDMI 1.4a, as well as two dual link DVI ports.

Enough of such dry technical details, though. How, I hear you cry, does the GTX 670 perform? I've compared it to NVIDIA's previous flagship, the GTX 580, and the card that currently graces the PC Gamer Rig, AMD's Radeon HD7850 (which is some £160 cheaper). I'm not going to overwhelm you with benchmarks (as always, there are plenty of other sites for that), but here are a few that'll give you the answer:

I'll be looking at other resolutions and different benchmarks in a full round-up for the mag, but as you can see, the most striking thing about the GTX 670 is that it'll really piss off anyone who went out and bought a GTX 680 recently. It's not that the disabled SMX makes no difference at all to real-world performance (on multiple screens the gap between the two cards is more pronounced), but in practice it doesn't amount to much. The overclocked GTX 670 AMP! card is every bit as good as, and in some cases better than, a GTX 680. Especially when it comes to power consumption. Whoops.

Basically, if you want the performance of a GeForce GTX 680, grab yourself an overclocked GTX 670 instead.

It's interesting because, compared to the last generation, twice as large a proportion of the high-end chip has been disabled to create the lower-tiered version – a GTX 570 had just one of the GTX 580's 16 processing blocks turned off, compared to one of eight here. But the vast number of CUDA cores that remain, and their ability to self-overclock, means the performance gap is much smaller.
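The proportions in question are easy to check:

```python
# Fraction of the full chip disabled to make the cut-down part.
fermi_disabled = 1 / 16   # GTX 570: one of the GTX 580's 16 blocks off
kepler_disabled = 1 / 8   # GTX 670: one of the GK104's 8 SMX units off

# Kepler disables twice the fraction Fermi did.
print(kepler_disabled / fermi_disabled)  # 2.0
```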

Overall, though, nothing changes my opinion from the GTX 680 launch. Like that card, the GTX 670 is a really well-designed and impressive piece of hardware, but I'd want to know what NVIDIA is doing at the £200 price point before I'd buy. Because unless you're gaming on two or three monitors, it's hard to justify spending more than you would on an HD7850.