Every few years there's a renewed effort to bring immersive, meaningful haptics to the forefront of gaming. Just look at the DualSense: it's another step up in a journey that's taken us from the Rumble Pak to full-body haptic vests. But so far many of these solutions have relied on pre-programmed or audio-driven haptics, which, according to a group of researchers from Nvidia, could be made even more dynamic and flexible with a little thing the company likes to call machine learning.
Yes, Nvidia never fails to find another use for machine learning.
In a recently published patent, originally filed in September 2019, a team of researchers at Nvidia put forward a different approach for generating accurate haptics with machine learning. They believe an intelligent algorithm could learn to detect specific "features" in content, such as games, and then produce a fitting haptic response in whatever hardware it was hooked up to.
As far as uses for machine learning in gaming are concerned, this one sounds pretty darn great, at least if you ask me.
Here's the abstract of the filing, credited to Albright et al:
"Haptic effects have long been provided to enhance content, such as by providing vibrations, rumbles, etc. in a remote controller or other device being used by a user while watching or listening to the content. To date, haptic effects have either been provided by programming controls for the haptic effects within the content itself, or by providing an interface to audio that simply maps certain haptic effects with certain audio frequencies. The present disclosure provides a haptic control interface that intelligently induces haptic effects for content, in particular by using machine learning to detect specific features in content and then induce certain haptic effects for those features."
The patent is a little light on specifics, as patents are wont to be. The haptic control interface, as it's referred to in the filing, could operate with custom circuitry, your CPU or GPU, or some combination of hardware and software.
The door's very much open to a wide range of devices, going by the patent's rough vision of its possible uses. That includes wired and wireless units, and haptic interfaces running locally or up in the cloud. There could be one haptic device or several, and the system could be built to deal with different content sources, such as games and movies, without further training.
The initial haptic control interface, whatever form it may take, would require some preliminary training on data such as video images, objects, and audio signals to get up to speed. From there it would pick up the rest on the fly, without prior knowledge of the game or movie at hand.
"The haptic effects may be predefined to correspond with the feature," the paper says, "such as a particular haptic effect for the gunshot. The haptic control interface then causes the remote controller to provide the determined haptic effect(s), thus coordinating the haptic effect(s) experienced by the user with the gunshot experienced by the user within the video game."
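To make that gunshot example a little more concrete, here's a minimal sketch in Python of the kind of pipeline the patent describes: a detector spots features in a content stream, and the haptic control interface looks each one up in a table of predefined effects and fires it on the attached device. To be clear, every name here is illustrative, not from the filing, and the "detector" is a stub passing through pre-labelled events where the real thing would be a trained neural network.

```python
# Hypothetical sketch of the patent's pipeline. All names are illustrative;
# the patent does not specify an API.
from dataclasses import dataclass

@dataclass(frozen=True)
class HapticEffect:
    pattern: str      # e.g. "sharp_pulse"
    intensity: float  # 0.0 to 1.0
    duration_ms: int

class FakeController:
    """Stand-in for a rumble-capable device; records effects it's told to play."""
    def __init__(self):
        self.played = []

    def play(self, effect):
        self.played.append(effect)

def detect_features(content_chunk):
    """Placeholder for the trained model. In the patent's vision this would
    classify video frames and audio for events like gunshots; here it just
    passes through pre-labelled events so the sketch stays runnable."""
    return content_chunk.get("events", [])

class HapticControlInterface:
    """Maps detected content features to predefined effects on a device."""
    def __init__(self, device, effect_map):
        self.device = device
        self.effect_map = effect_map

    def process(self, content_chunk):
        for feature in detect_features(content_chunk):
            effect = self.effect_map.get(feature)
            if effect is not None:
                self.device.play(effect)

# Usage: a gunshot in the stream triggers its predefined effect.
controller = FakeController()
interface = HapticControlInterface(controller, {
    "gunshot": HapticEffect("sharp_pulse", 0.9, 120),
    "explosion": HapticEffect("long_rumble", 1.0, 600),
})
interface.process({"events": ["gunshot"]})
print(controller.played[0].pattern)  # sharp_pulse
```

The interesting design point is that the feature-to-effect table lives outside the detector, which is what would let one trained model drive anything from a DualSense to a haptic vest just by swapping the table and the device.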
As with many up-and-coming machine learning concepts in gaming, the reality may differ somewhat from the initial concept. But I have to say of all the machine learning uses out there, clever haptics feels like a solid bet to actually make it into our gaming rigs one day.