In April 2019, Blizzard shared some insights into how it was using machine learning to combat abusive chat in games like Overwatch. It's a very complicated process, obviously, but it appears to be working out: Blizzard president J. Allen Brack said in a new Fireside Chat video that it has resulted in an "incredible decrease" in toxic behavior.
"Part of having a good game experience is finding ways to ensure that all are welcome within the worlds, no matter their background or identity," Brack said in the video. "Something we've spoken about publicly a little bit in the past is our machine learning system that helps us verify player reports around offensive behavior and offensive language."
"This system has been in place in Overwatch and in Heroes of the Storm. It allows us to issue appropriate penalties quicker, and we've seen an incredible decrease not only in toxic text chat, but an overall decrease in re-offense rates. A few months ago, we expanded this system into World of Warcraft's public channels, and we've already seen a decrease in the time disruptive players stick around by half, and we're continuing to improve the speed and the accuracy of this system."
Blizzard has also recently increased the severity of penalties for bad behavior in Overwatch and added more flexible profanity filters, which offer three levels of "accepted language," each of which can be further customized.
"These are small steps, but they can add up to lasting change," Brack said. "Combating offensive behavior and encouraging inclusivity in all of our games and our workplaces will always be an ongoing effort for us."