Dynamic Super Resolution
Where MFAA is all about new, more efficient algorithms to reduce jaggies, Dynamic Super Resolution is a brute-force method to make games look better: it renders the entire image at a higher resolution, then downsamples it to your monitor's native output. It's Nvidia's take on the supersampling technique mentioned above, which has actually been possible for years, just never this conveniently.
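The downsampling step itself is simple enough to sketch. Here's a minimal NumPy illustration using a plain box filter; Nvidia says DSR's actual filter is a 13-tap Gaussian, and the function name and array sizes here are just for illustration:

```python
import numpy as np

def downsample(image, factor=2):
    """Box-filter downsample: average each factor x factor block of pixels.
    A simplified stand-in for DSR's filtering step (the real filter is a
    13-tap Gaussian, per Nvidia)."""
    h, w, c = image.shape
    return image.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

# Render at 4x the pixel count of a 1920x1080 display (3840x2160),
# then filter down to native resolution.
rendered = np.random.rand(2160, 3840, 3)   # stand-in for a supersampled frame
native = downsample(rendered, factor=2)
print(native.shape)  # (1080, 1920, 3)
```

Each output pixel blends four rendered pixels, which is why the result looks smoother than rendering at native resolution directly.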
Check out my separate article on Dynamic Super Resolution, and how it's just downsampling with a fancy name, now built into Nvidia's GeForce Experience.
Expect to start using it: sometime after launch. DSR is exclusive to Maxwell at launch, but Nvidia says it's likely to roll out to older graphics cards as part of GeForce Experience.
Voxel Global Illumination
Of all the technologies Nvidia talked about at Editor's Day, Voxel Global Illumination got the most stage time. It's a dynamic global lighting solution that Nvidia has been working on for years, and it's currently being integrated into Unreal Engine 4 (and other unnamed "major" game engines) for use by the end of the year. Jonah Alben called lighting "the great unsolved problem in graphics," and said "VXGI was our biggest dream going into the graphics for this generation. We couldn't just do it with software. We also had to do things with hardware to enable that."
Nvidia calls VXGI's voxel cone tracing the next step in game lighting thanks to its reflections and support for indirect, diffuse lighting. Nvidia's Tony Tamasi elaborated that VXGI doesn't require the light baking most lighting engines depend on. "Traditional games will pre-compute or do a bunch of rendering with a ray tracer or path tracer and store a bunch of lighting information for a scene," he said. "You can't possibly pre-compute lighting for Minecraft because you don't know what someone's going to build. [VXGI] can be truly dynamic and we can re-compute it."
Most of the presentation was too technical to easily convey, but Nvidia's white paper on Maxwell helps break it down:
"To perform global illumination, we need to understand the light emitting from all of the objects in the scene, not just the direct lights. To accomplish this, we dice the entire 3D space of the scene in all three dimensions, into small cubes called voxels...In VXGI, we store two pieces of key information in each voxel: (a) the fraction of the voxel that contains an actual object, and (b) for any voxel that contains an object, the properties of the light coming from that object (i.e. bouncing off of it from primary light sources), including direction and intensity.
"...We store information into each voxel describing how the physical geometry will respond to light...The next and final step is to rasterize the scene. This step is largely the same as it would be for a scene rendered with other lighting approaches; the main difference is that the final rasterization and lighting now has a new and more powerful data structure (the voxel data structure) that it can use in its lighting calculations, along with other structures such as shadow maps.
"This approach of calculating indirect lighting during the final rendering pass of VXGI is called cone tracing. Cone tracing is an approximation of the effect of secondary rays that are used in ray tracing methods. Using cones results in very realistic approximations of global illumination at a much lower computational cost than standard ray tracing."
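The voxel storage and cone march the white paper describes can be sketched in a few lines. This is an illustrative toy, not Nvidia's actual data layout: a coarse grid holding per-voxel coverage and bounced light, plus a single-level cone march (real VXGI widens the cone as it travels and samples progressively coarser mip levels of the grid):

```python
import numpy as np

# Each cell of a coarse 3D grid stores (a) the fraction of the voxel
# covered by geometry and (b) the light bouncing off that geometry
# (RGB intensity here; VXGI also stores its direction).
GRID = 32
occupancy = np.zeros((GRID,) * 3, dtype=np.float32)
radiance = np.zeros((GRID,) * 3 + (3,), dtype=np.float32)

# Voxelize a lit red wall at x == 20: each covered voxel records its
# coverage and the light it bounces back into the scene.
occupancy[20, 8:24, 8:24] = 1.0
radiance[20, 8:24, 8:24] = (0.9, 0.1, 0.1)

def cone_trace(origin, direction, steps=24, step_len=1.0):
    """March one cone through the grid, accumulating bounced light.
    A real implementation grows the sample footprint with distance;
    this sketch uses a single-level grid for clarity."""
    pos = np.asarray(origin, dtype=np.float32)
    d = np.asarray(direction, dtype=np.float32)
    d = d / np.linalg.norm(d)
    colour = np.zeros(3, dtype=np.float32)
    transmittance = 1.0  # how much light still reaches the origin
    for _ in range(steps):
        pos = pos + d * step_len
        i, j, k = np.clip(pos.astype(int), 0, GRID - 1)
        a = float(occupancy[i, j, k])
        colour += transmittance * a * radiance[i, j, k]
        transmittance *= 1.0 - a
        if transmittance < 1e-3:
            break  # fully occluded; nothing behind this contributes
    return colour

# Indirect light reaching a point at x == 5, looking toward the wall:
print(cone_trace(origin=(5, 16, 16), direction=(1, 0, 0)))
```

At render time, a shader would fire a handful of such cones per pixel over the hemisphere of the surface, which is why the paper can describe it as an approximation of the secondary rays a full ray tracer would shoot.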
The main takeaway is that Nvidia has invested a ton of engineering work into global illumination, and we'll hopefully see the payoff from that in Unreal 4 and other engines starting in 2015.
Expect to start using it: In 2015 or 2016, after game engines begin to support it. VXGI won't be available on older Nvidia hardware.
Latency reduction for the Oculus Rift
One last exciting bit of R&D Nvidia talked about at Editor's Day: improvements they're making with Maxwell to reduce latency for virtual reality. Rendering games for the Oculus Rift is seriously performance intensive—the image has to be rendered twice, once for each eye, and hit a minimum framerate of around 75 fps. Oculus wants to push both framerate and resolution higher before they release a consumer product. Latency is also a huge obstacle, as any perceived delay between movement and game reaction will break immersion and/or make you nauseous.
Nvidia thinks it can cut VR latency from about 50 ms down to about 25 ms with a few techniques. One is MFAA, which is more efficient than MSAA (though Nvidia's figures don't account for disabling AA altogether). Another is zero-latency SLI, which would benefit all dual-GPU users, not just those playing on the Oculus Rift. Nvidia didn't say when that improvement to SLI would be available.
The third improvement was asynchronous or just-in-time warp. "The ideal scenario is that we can sample head tracking right before you see it," said Nvidia's Tony Tamasi. "We've developed a technique to asynchronously be sampling the head tracking input out-of-band with the GPU input, re-warp it, re-project it, and then display it." More simply: sampling for head tracking won't be slowed down by all the tasks in the GPU pipeline.
Expect to start using it: when the consumer Oculus Rift arrives. Let's hope that's 2015.