In a lengthy update on the Valve Time blog, Michael Abrash has offered his view on the future of augmented and virtual reality technology. It's a comprehensive and clear-headed look at the field that does a lot to clear up terminology and set the stage for future discussion.
Abrash establishes a distinction between 'constrained' and 'unconstrained' augmented reality, where the former takes place in a single location and the latter follows the user as they move through the world. He also suggests that limited heads-up display functionality - such as Google's Glass project - should be treated as a separate class entirely, given that it's markedly less ambitious than full, go-anywhere augmented vision. 'HUDSpace' tech, Abrash argues, is an "extension of smartphones" rather than full AR - and "way less cool" as a result.
Augmented reality is the future, Abrash argues, but he also presents a range of reasons why VR tech like the Oculus Rift might become a bigger part of our lives in the short term.
Virtual reality is both self-contained and used in a single location, which solves the problem of power drain that limits the viability of proper augmented reality - if you're sitting in your room with a headset on, you don't need to lug a massive battery around with you. This also makes it easier to tie to motion-tracking technology. It's also less important that a VR headset looks cool - whereas an augmented reality headset that makes you look like a member of supermarket own-brand Daft Punk isn't likely to win over the lifestyle gadget crowd.
Abrash points out that we're currently closer to virtual reality not only in technology but in how we think about technology. There are plenty of immediately graspable uses for a VR headset - flying a plane, driving a car, piloting a mech - that help software and hardware designers get a handle on what the desired experience is. Augmented reality doesn't have a real-world analogue in the same way - which makes it both more exciting and more difficult for developers. He also suggests that an awareness of real life is only of limited use in computing and entertainment. "The real world often doesn't play an important role in watching TV or movies, or playing video games," Abrash explains. "Certainly it does when you're with friends, but when you're alone, the real world doesn't particularly enhance the experience."
Valve's own VR headset prototype, as demonstrated to the New York Times. The camera suggests that it uses 'passthrough' video.
Further down the line, Abrash suggests that augmented reality headsets that can also function as VR devices - shutting off reality to present a simulation - could offer the best of both. But to get there, the hurdles that face proper augmented reality of any kind need to be cleared, and practical virtual reality is more likely to see the light of day in the short term. In the second part of the blog series, Abrash will go into more detail about why AR is the future in the long run.
An interesting point is made in the comments, where Adam Dane suggests "some sort of non-invasive BCI (Brain-Computer Interface)" for controlling an AR display. "It would be a good candidate," Abrash responds, "if any of that technology worked well enough right now. Unfortunately, our investigations found that that's not the case yet."
"Our investigations"? This probably just means that the team have experimented with existing brain interface tech and found it lacking, but this wouldn't be an article on Valve without wild speculation, so: are Valve building a mind-control helmet? Has it malfunctioned? Is Half-life 3 delayed while Gabe's security forces attempt to contain renegade AI-controlled R&D testers in the depths of Valve's headquarters? For answers to all of these questions, spraypaint 'PROBABLY NOT' on your monitor and then go for a lie down.