Want to jump straight to the most important results of our testing? Check out our guides to the best gaming headsets, the best wireless gaming headsets and the best headphones.
When we set out last month to take on headsets, we wanted to do it in a way that went beyond your typical subjective user-experience test. We wanted to remove a good portion of the human element. This might seem counterintuitive considering an audio experience is highly subjective, but the fact is that manufacturers use a great deal of engineering equipment to design and develop headsets and headphones.
That’s why we ended up getting ourselves a $50,000 testing platform, one used by some of the most well-respected engineering companies in the world. (You should read that article if you want to know how to read the graphs on the next few pages.)
Our system is made up of two primary components: the head and torso simulator (HATS), and the testing software itself. On the hardware side, Bruel & Kjaer supplied the HATS, which is widely used in the military, automotive, and audio industries. The actual testing and analysis is performed using an industrial software package known as SoundCheck. I’ve detailed both in my previous article leading up to this one, but I wanted to expand a bit on the work that was done with SoundCheck.
Developed with deep analysis in mind, SoundCheck is designed for engineers who need full control over how tests are run. Essentially, you can think of SoundCheck as a state-of-the-art audio analyzer that’s fully programmable. I spent a good amount of time with both the documentation and Listen’s engineers to learn what particular tests reveal about a headset, and how to perform them.
If I were testing only a handful of headsets, this would be easy. But going through roughly 60 units is daunting. Thankfully, SoundCheck allows a certain level of automation. I created my own test sequence in SoundCheck that would ask for a model name, perform the various tests, output the charts, and save the results to file automatically. To test the microphones, a second sequence was created that would sweep the mic, play a pre-defined WAV file, record the audio result, and save both the recorded input and the chart to file under the correct model name.
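SoundCheck sequences are built in Listen's own sequence editor rather than written in a general-purpose language, but the batch workflow described above can be sketched in Python. Everything below (function names, file layout, the placeholder measurements) is an illustrative stand-in under our own assumptions, not SoundCheck's actual API:

```python
import json
from pathlib import Path

def run_batch(models, output_dir="results"):
    """Run a measurement pass for each headset model and save results
    to a per-model file, mirroring the sequence described above:
    take a model name, run the tests, save charts and data.
    All names here are hypothetical stand-ins, not SoundCheck calls."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    for model in models:
        results = {
            "model": model,
            # Placeholder fields; a real rig would drive the analyzer
            # hardware here and attach actual measurement data.
            "frequency_response_chart": f"{model}_fr.png",
            "thd_chart": f"{model}_thd.png",
        }
        (out / f"{model}.json").write_text(json.dumps(results, indent=2))
    return sorted(p.name for p in out.iterdir())
```

The point of automating even this much is that per-unit effort collapses to entering a model name, which matters when the queue is 60 headsets long.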
By the time I was through testing, I realized I had barely scratched the surface of SoundCheck’s capabilities. For industrial engineering work, Listen’s pricing for SoundCheck almost seems like a bargain given what it can do. Still, it’s not something a home user would buy, unless audio engineering is your day job.
After months of testing, we finally have results. I used both the test system and my own listening tests on each headset and headphone, spending roughly four hours per unit. For reference, the following tracks and albums were used:
- David Chesky, Chesky Records - Ultimate Demonstration Disk (album, HDTracks)
- Adele - When We Were Young (25)
- Sam Smith - Writing’s on the Wall (single)
- Rush - La Villa Strangiato (Time Machine, Live)
- Joe Hisaishi - Promise of the World (Howl’s Moving Castle soundtrack)
- Haley Reinhart - Can’t Help Falling In Love (single)
- Taylor Swift - Blank Space (1989)
- Above & Beyond - Sun & Moon (Group Therapy)
- Christian Thielemann, Wiener Philharmoniker - Beethoven Symphonies Nos. 4-6
- Jason Derulo - Talk Dirty, feat. 2 Chainz (Tattoos)
- Eagles - Hotel California (Hell Freezes Over, Live)
- Ray Charles - You Don’t Know Me (Genius Loves Company)
- Bingo Players - Rattle (Miami Mainstage Anthems)
- Rodrigo y Gabriela - Hanuman (11:11)
All track and album sources are in lossless FLAC format, at either 24-bit/192kHz, 24-bit/96kHz, or 16-bit/44.1kHz. As you may have noticed, I chose a variety of genres, and I listened to each headphone through three different headphone amplifier/DAC combinations.
For our headsets, we scored each out of 10 for audio and for comfort, with a score of 5 being average.
For microphone tests, we used a reference WAV file recorded with an AKG P220 professional condenser microphone. The WAV file was then imported into a SoundCheck sequence and played through the mouth simulator on the HATS. Each headset microphone’s recording of that playback was saved so we could compare clarity, distortion, and response.
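As a rough illustration of the comparison step, here is a minimal sketch, assuming NumPy and mono signals already loaded as arrays, that estimates how far a microphone's re-recording deviates from the reference while ignoring overall gain. This is our own simplified analogue of the comparison, not the metric SoundCheck computes:

```python
import numpy as np

def spectrum_db(signal, n_fft=4096):
    """Magnitude spectrum of a mono signal in dB (illustrative helper)."""
    spec = np.abs(np.fft.rfft(signal, n=n_fft))
    return 20 * np.log10(spec + 1e-12)  # epsilon avoids log(0)

def response_deviation(reference, recorded, n_fft=4096):
    """Mean absolute dB deviation between a reference signal and a
    microphone's re-recording of it. Level-aligned first, so we
    compare the shape of the response rather than playback gain."""
    ref_db = spectrum_db(reference, n_fft)
    rec_db = spectrum_db(recorded, n_fft)
    rec_db += np.mean(ref_db) - np.mean(rec_db)  # remove gain offset
    return float(np.mean(np.abs(ref_db - rec_db)))
```

A perfect microphone chain would score near zero; added coloration or distortion pushes the number up, which makes the measured recordings easy to rank alongside the subjective listening notes.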
Microphone response was likewise scored out of 10, with a score of 5 being average.