Microsoft has a simple challenge—expand the image above and see if you can spot a difference in image quality between the two halves. You probably can't. That's a good thing, because one is using a special rendering technique that improves performance.
In this case, there's a 14 percent performance difference, courtesy of Variable Rate Shading (VRS). Microsoft is adding VRS to its DirectX 12 API and is awfully excited about the implications. The excitement is warranted, if real-world results mirror Microsoft's example. So, what's going on here? In Microsoft's words, VRS is a "powerful new API that gives developers the ability to use GPUs more intelligently."
"For each pixel in a screen, shaders are called to calculate the color this pixel should be. Shading rate refers to the resolution at which these shaders are called (which is different from the overall screen resolution). A higher shading rate means more visual fidelity, but more GPU cost; a lower shading rate means the opposite: lower visual fidelity that comes at a lower GPU cost," Microsoft explains.
Not all pixels are created equal, however, so it's a bit of a waste to dedicate a GPU's horsepower to apply a single shading rate to every single pixel in a frame. That's where VRS comes into play. Developers can selectively reduce the shading rate in areas of the frame where it will have a minimal impact on visual quality. For example, a flat water texture won't be affected by reducing the shading rate, whereas small and detailed units (in Civilization 6) need a higher shading rate.
If all of this sounds familiar, it's because Nvidia came up with the idea and added the feature to its Turing architecture GPUs last year. That was done via extensions to DirectX 12 and Vulkan, but now Microsoft is officially accepting VRS into the main branch of DirectX 12. Wolfenstein II: The New Colossus has had VRS available on Turing GPUs for many months, after its developers worked with Nvidia to add the feature, but more games should see support in the coming years.
Microsoft says VRS can be applied where it won't affect visual quality at all. Depending on the implementation, that may not be true, but the goal is to use VRS in such a way that it's not possible to spot the difference.
Microsoft gets into the nuts and bolts of the technical details in a blog post, but the takeaway is that VRS can deliver performance gains that would otherwise require faster hardware.
"This is really exciting, because extra perf means increased framerates and lower-spec’d hardware being able to run better games than ever before," Microsoft says.
Microsoft partnered with Firaxis to see what could be done on hardware that exists today, specifically a GeForce RTX 2060 graphics card. In the image above, Firaxis applied a lower 2x2 shading rate to terrain and water, and a higher 1x1 shading rate to smaller assets, such as vehicles, buildings, and the UI.
While it's hard to tell a difference between the two halves, the right half runs around 20 percent faster with VRS turned on. This is representative of a Tier 1 implementation of VRS: a simplistic approach that isn't necessarily ideal, but carries less overhead.
Developers can also implement a Tier 2 version of VRS, which yields better image quality while still improving performance, just not as much. Tier 2 adds an edge detection filter to work out where high detail is required, then encodes the result in a screen-space image that tells the GPU which shading rate to use in each region. That is what you see in the top image (VRS was applied to the left half).
Microsoft is touting broad support for VRS, though only Nvidia's Turing GPUs (the RTX 20-series and GTX 1660 cards) are currently capable of using it. Intel's upcoming Gen11 graphics, due out this year with the Ice Lake processors, will also support VRS. It's not clear when AMD hardware will support VRS, but because the feature leverages hardware capabilities, it will require an updated architecture (meaning Navi at the earliest).