At an event held today, Nvidia CEO Jen-Hsun Huang discussed a new feature that’ll supposedly make your amazing, video game-related exploits all the more believable to your dubious friends: ShadowPlay.
The standard display refresh rate is 60Hz—that's 60 images per second—but fancy GPUs can render way more than 60 frames per second. We like more frames. More frames means more responsive input—and screw compromise!—but when out-of-sync rendering traps multiple frames in a single refresh, the Horrible One emerges: screen tearing. The best we can do now is tame the beast with V-sync, but in Montreal today, Nvidia unsheathed a new weapon which it claims will put tearing and stuttering down for good.
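The arithmetic behind that tearing is simple enough to sketch. The toy function below (illustrative only, not anything from Nvidia's announcement) counts how many freshly rendered frames land inside a single 60Hz refresh interval at a given render rate; anything above 1.0 means the panel can end up scanning out pieces of different frames, which is exactly the tear line described above.

```python
# Toy model of why unsynced rendering tears: count how many frames
# the GPU finishes, on average, inside one display refresh.
# All the numbers here are illustrative, not from the article.

def frames_per_refresh(render_fps: float, refresh_hz: float = 60.0) -> float:
    """Average number of new frames completed during one screen refresh."""
    return render_fps / refresh_hz

# At 60fps the ratio is exactly 1.0: one frame per refresh, no tearing.
# At 140fps more than two frames finish inside each refresh, so the
# panel scans out parts of several different frames -- a visible tear.
for fps in (60, 90, 140):
    print(fps, round(frames_per_refresh(fps), 2))
```

V-sync "tames the beast" by holding each finished frame back until the next refresh begins, forcing that ratio down to at most one frame per refresh at the cost of input latency.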
Even as Valve is trying to ease access to PC gaming in the living room, its plans for the Steam Machine won't be held up by an adherence to a single manufacturer of graphics hardware. The proposed SteamOS-based systems will support a variety of graphics builds with GPUs from AMD, Intel, and Nvidia when they launch next year, according to a report at Maximum PC.
Update: Well, that didn't take long. Activision's support Twitter account has just confirmed that these specs are not official. Original story follows inside.
While it's not official, the likely PC requirements for Call of Duty: Ghosts have been posted on Nvidia's website. The minimum requirements are pretty friendly to those without giant rigs, but a slight step up from previous CoDs given the transition to new console hardware.
Nvidia is suddenly all over the news this week, announcing that it's working with the new SteamOS and boasting about how much more powerful PCs are than consoles. Given that Nvidia was skipped over for the Xbox One and PlayStation 4, it's understandable that it wants PC gamers to know it loves us very, very much.
This week, it seems, everything is coming up Linux. First Valve announced their own Linux-based OS, and now Nvidia are making moves to get more involved with the open source community. Nvidia's Andy Ritger contacted the developers of Nouveau - the open source driver for Nvidia GPUs built through reverse engineering - offering information on the workings of the company's GPUs.
Remember Microsoft making some noises about the Xbox One’s cloud-rendering power? To somewhat offset the fact they’re jamming a weaker GPU into their gaming slab than Sony is with the PlayStation 4, Microsoft is employing 300,000 servers to bolster processing of “latency-insensitive computation”. And now Nvidia has just announced CloudLight, something which sounds more than a little bit similar.
Nvidia released a technical report on CloudLight on their website outlining what it could mean for games. They call it a system “for computing indirect lighting in the Cloud to support real-time rendering for interactive 3D applications on a user’s local device.” In practice, this means Nvidia can use GeForce GRID servers to compute a game engine's global illumination to ease the load on your gaming device of choice.
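The division of labour CloudLight describes can be sketched in a few lines. In this toy model (the function names and the additive direct-plus-indirect combine step are my assumptions, not Nvidia's actual pipeline), the server runs the expensive bounce-lighting pass and the local device only adds cheap direct lighting on top of the streamed result:

```python
# Hedged sketch of the CloudLight split described above: a GRID server
# computes expensive indirect (bounce) lighting, and the client merely
# adds direct lighting. Illustrative stand-in maths, not Nvidia's code.

def server_indirect_light(scene: dict) -> dict:
    # Stand-in for the costly global-illumination pass run in the cloud:
    # each surface gets a bounce-light term proportional to its albedo.
    return {surf: 0.2 * albedo for surf, albedo in scene.items()}

def client_shade(scene: dict, indirect: dict, sun: float) -> dict:
    # Cheap local pass: direct sunlight plus the streamed indirect term.
    return {surf: sun * albedo + indirect[surf]
            for surf, albedo in scene.items()}

scene = {"wall": 0.8, "floor": 0.5}          # surface -> albedo
indirect = server_indirect_light(scene)      # would arrive over the network
final = client_shade(scene, indirect, sun=1.0)
```

The appeal is that indirect lighting changes slowly and tolerates latency, so it can be computed remotely and streamed, while anything latency-sensitive stays on the local GPU.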
Nvidia upped the clocks on its cut-down GK104 and EVGA decided to go one better with the GTX 760 Superclocked. But with the reference card’s GPU already hauling as much gaming load as its silicon can handle, is there any point in a more expensive overclocked version?
We’ve already seen the standard Nvidia GeForce GTX 760 today, and there will likely be many reference versions hitting the shelves as I type, but there will also be a slew of these factory-overclocked cards. Is it worth the extra cost? Let's find out.
Component manufacturers love the bombastic use of military speak, and the double whammy of GTX 700 series releases from Nvidia certainly has something of the shock-and-awe about it. This latest card, the Nvidia GeForce GTX 760, is no different, and it takes aim at the middle order of AMD’s competing Radeon graphics cards.
It’s rare though that Nvidia and AMD don’t decide to launch their new graphics generations - whether they’re whole new architectures or range refreshers - at around the same time. Generally there’s only a few months between them at worst, with Nvidia normally the ones turning up late to the party, blaming the traffic on the way over or difficulties in hitting decent yields with new process nodes.
This time though it’s Nvidia who are the first to arrive, eagerly clutching their new silicon, with AMD kicking their heels back in Texas. But this apparently is not a delay: AMD have decided they are going to stick with their current range of HD 7000 GPUs until the end of the year, so confident are they in their existing cards. I’ve got to believe, though, that somewhere there are some AMD Radeon execs who are sweating just a little more now.
We’ve known AMD are the go-to guys for next-gen console silicon for a good while now. The tech press has been speculating since the consoles’ specs were first announced as to how the PC could benefit from Sony and Microsoft opting for the x86, and specifically AMD, architecture. After all the Xbox 360 was running AMD graphics hardware and, from my perspective, the benefits to the PC from that relationship are pretty intangible at best. There are signs that things may be different this time around.
"The consoles are really the target for a lot of the game developers, if it’s a Radeon heart powering that console, like the PS4 or Xbox 360, that means these games devs are going to be designing their games, designing their features and really optimising for that Radeon heart," said AMD's Devon Nekechuk around the launch of the Radeon HD 7990. But why, specifically, will that be the case? I asked AMD's worldwide manager of ISV gaming engineering, Nicolas Thibieroz, for the nitty-gritty.
Nvidia did some well-deserved strutting at E3 yesterday, showing off the superiority of the PC as a gaming platform. With charts and graphs and maybe just a teeny tiny hint of bitterness that AMD processors are powering both the PS4 and the Xbox One, Nvidia’s Tony Tamasi told the room that “the PC is the most powerful gaming platform out there.”
The new consoles have the spotlight at E3 2013 this year, but what will the expo's many reveals, demos, hardware rollouts, and buzzwords mean for the PC? Is this even a show for us at all, with the focus on the brick and mortar retail market? We discuss the implications, and speculate on which of the big, all-star console titles will eventually make it to our corner of the gaming universe.
Nvidia have just launched their latest high-end graphics card, the GeForce GTX 780, and an impressively quick, but expensive, card it is too. Alongside that we’ll also be getting some interesting updates to the GeForce Experience, including the intriguing ShadowPlay feature.
GFE is about to become an opt-in component of Nvidia’s driver downloads, and given that it’s already had around 2.5 million downloads in its beta form, those numbers are likely to get bigger.
And that means it’s only going to get better and more reliable too.
Just as we were warming up for Intel's Haswell CPU, Nvidia go and drop a whole new generation of graphics cards into our laps, with the Nvidia GeForce GTX 780 at the vanguard.
Well, when I say "whole new" that needs to be qualified just a touch. Nvidia haven’t suddenly forgotten all those roadmaps they've been showing off. This isn't one of the new Maxwell GPUs. The GTX 700 generation is essentially a refresh, built on the same basic graphical technology that we saw in the GTX 600 series, though the GTX 780 is itself a bit of an exception to that. It's actually taking the Titan's GK110 GPU and shaving a little power off the top, giving us a card that performs slightly below Nvidia's state-of-the-art monster at a lower price point, ostensibly making the GTX 780 the oft-rumoured GTX Titan LE.
An Nvidia spokesperson pitched it as being "for everyone who loved Titan but couldn't stump up to an £800 graphics card." Does that claim stand up?
Nvidia’s dual-GPU behemoth, the GeForce GTX 690, has been out in the wild for over a year now, but the equally freakishly expensive GTX Titan has outsold it in less than three months. Now either the GTX 690 sold a pifflingly small amount (quite possibly) or the GTX Titan has been better received than even Nvidia thought.
According to Nvidia, it’s the latter.
Getting faces across the uncanny valley is one of the loftier challenges facing modern-day graphics folk, but they're slowly getting there. Shirking the age-old tradition of using wrinkly old men in CG facial expression demos, Nvidia has opted instead for a middle-aged bald guy. Middle-aged bald guys with lots of feelings are usually to be avoided, but this is different. 'Digital Ira', according to Nvidia, "represents a big leap forward in capturing and rendering human facial expression in real time, and gives us a glimpse of the realism we can look forward to in our favorite game characters."
As a techie person and all-round good egg, people often ask me for advice and assistance putting gaming systems together. And more often than you might think I get asked specifically about building multi-GPU setups. Normally I’d scoff, put on my best smug face and patronise them mercilessly.
“Whatever you’re going to spend on a multi-GPU array,” I’d say, “go and spend that on the fastest single-GPU graphics card you can afford. You’ll thank me in the end.”
And then Nvidia go and release the GTX 650 Ti Boost, immediately calling that received wisdom into question.
I really, really want to love this diminutive new card from Asus. It’s fantastically well engineered and at once quicker, quieter and cooler than the reference design card from Nvidia itself. It also fits in with the recent trend of squeezing top-end gaming performance down into mini-ITX form factors.
A total graphics win, you’d have to say. Right?
Asus have unveiled their latest effort to squeeze performance components into a minuscule form factor. The diminutive GTX 670 DirectCU Mini has just landed on my desk, and what they say is true: size doesn’t matter.
This is a fully-fledged GTX 670 measuring just 170mm tip to tail, compared with just under 250mm for the reference version. But there’s no hint of compromise in order to squeeze this sort of performance into a pint-sized card; in fact, Asus have managed to overclock the DirectCU Mini too.