We're all about accessible tech here at PC Gamer. Anything that can reduce feelings of isolation after a long couple of years is a welcome augmentation, if you ask me. So, it's little wonder Tom Pritsky's gone viral on TikTok with a little AR device that'll transcribe your conversations in real time. Real-life closed captions, if you will.
Pritsky, along with co-founder Madhav Lavakare, formed TranscribeGlass back in 2021 so deaf and hard of hearing people wouldn't have to lip read.
Touting a feature much like the one ye olde Google Glass once promised, TranscribeGlass instead adds a little augmented reality attachment to the side of an ordinary pair of glasses. It not only transcribes the words being spoken by the person in front of you, but also cleverly ignores surrounding conversations that could otherwise confuse the transcription.
Pritsky's goal is a bold one: "To solve hearing loss." He was the founder of "Stanford's first club devoted to hearing loss advocacy," and it's clear his passion for tech has grown alongside the endeavour throughout his degree, now converging here with TranscribeGlass.
Speaking to Jason Carman of Saturday Startup Stories, he says that "even if you gave someone the perfect hearing aid, the broken hearing system can't resolve that audio, and it sounds super blurry and hard to understand."
Students at Stanford University developed glasses that transcribe speech in real-time for deaf people. Amazing. The product is called TranscribeGlass. pic.twitter.com/uvXVOU7czd (July 27, 2023)
Skirting that issue altogether, his and Lavakare's design simply presents the wearer with the transcription on the side of the screen, so they can concentrate on the conversation while still being able to look at the person they're speaking to.
I must say the screen is a little small, with some longer words splitting across two lines. Hopefully you'll eventually be able to change the font style, too.
What's really great about the project is that you can integrate it with anything. "Our goal is to be source-agnostic," says Pritsky. "We can integrate any API: Google Speech, Deepgram, Microsoft."
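Being source-agnostic presumably means the captioning pipeline talks to every speech-to-text provider through one common interface, so swapping Google Speech for Deepgram never touches the display code. None of the names below come from TranscribeGlass's actual software; this is just a minimal Python sketch of what that adapter pattern could look like:

```python
from abc import ABC, abstractmethod


class TranscriptionBackend(ABC):
    """Hypothetical common interface: each provider (Google Speech,
    Deepgram, Microsoft, ...) would get its own implementation."""

    @abstractmethod
    def transcribe(self, audio: bytes) -> str:
        ...


class EchoBackend(TranscriptionBackend):
    """Stand-in backend for illustration only; a real one would send
    the audio to a provider's API and return its transcript."""

    def transcribe(self, audio: bytes) -> str:
        # Pretend the "audio" is already text, so the sketch runs offline.
        return audio.decode("utf-8")


def caption(backend: TranscriptionBackend, audio: bytes) -> str:
    # The display side only ever sees the interface, so changing
    # providers is a one-line swap at construction time.
    return backend.transcribe(audio)


print(caption(EchoBackend(), b"hello there"))  # hello there
```

The payoff of a design like this is exactly what Pritsky describes: if one API gets cheaper or more accurate, you plug in a new backend class and everything downstream keeps working.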
Open source and highly accessible, the final model is expected to cost around $95, and the tech just gets better every time I spot it.
This is certainly one to keep an eye on if you're the kind of person who gets excited about things like haptic suits giving deaf concertgoers a way to experience music.