20,000 toxic CS:GO players banned in six weeks by FACEIT and Google's new chat AI
Minerva AI issued 90,000 warnings after monitoring in-game chat.
A new AI built to combat toxicity in online gaming has banned 20,000 Counter-Strike: Global Offensive players within its first six weeks, solely by analyzing messages in the game's text chat.
The AI is called Minerva, and it's built by a team at online gaming platform FACEIT—which organized 2018's CS:GO London Major—in collaboration with Google Cloud and Jigsaw, a Google tech incubator. Minerva started examining CS:GO chat messages in late August, and in the first month-and-a-half it marked 7,000,000 messages as toxic, issued 90,000 warnings, and banned 20,000 players.
Trained through machine learning, Minerva issued warnings for verbal abuse when it detected toxic messages, and flagged spam messages as well. Within a few seconds of a match finishing, it sent the offending player a notification of either a warning or a ban, and punishments grew harsher for repeat offenders.
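To illustrate that kind of flag-then-escalate moderation loop, here's a minimal sketch. To be clear, this is not FACEIT's system: the toy classifier, penalty tiers, and thresholds below are all hypothetical stand-ins for whatever Minerva actually uses.

```python
# Hypothetical sketch of escalating chat moderation, loosely modeled on
# Minerva's described behavior: detect toxic messages, notify offenders
# shortly after a match ends, and escalate penalties for repeat offenders.
# The classifier and penalty tiers here are illustrative assumptions.
from dataclasses import dataclass

# Escalating punishments: warnings first, then increasingly long bans.
PENALTY_TIERS = ["warning", "warning", "ban_24h", "ban_72h", "ban_permanent"]

@dataclass
class PlayerRecord:
    offenses: int = 0

def classify(message: str) -> str:
    """Stand-in for the real ML model: returns 'toxic' or 'ok'."""
    toxic_terms = {"idiot", "trash", "uninstall"}  # toy word list
    if any(term in message.lower() for term in toxic_terms):
        return "toxic"
    return "ok"

def moderate_match_chat(messages, records):
    """Run once a match finishes; returns (player, penalty) notifications."""
    notifications = []
    for player, text in messages:
        if classify(text) != "toxic":
            continue
        record = records.setdefault(player, PlayerRecord())
        tier = min(record.offenses, len(PENALTY_TIERS) - 1)
        notifications.append((player, PENALTY_TIERS[tier]))
        record.offenses += 1  # repeat offenders climb the penalty tiers
    return notifications

records = {}
chat = [("player1", "nice shot"), ("player2", "uninstall, you're trash")]
print(moderate_match_chat(chat, records))  # [('player2', 'warning')]
```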
The number of toxic messages fell by 20% between August and September while the AI was in use, and the number of unique players sending toxic messages dropped by 8%.
The trial started after "months" of eliminating false positives, and it's only the first step in rolling out Minerva to online games. "In-game chat detection is only the first and most simplistic of the applications of Minerva and more of a case study that serves as a first step toward our vision for this AI," FACEIT said in a blog post. "We’re really excited about this foundation as it represents a strong base that will allow us to improve Minerva until we finally detect and address all kinds of abusive behaviors in real-time."
"In the coming weeks we will announce new systems that will support Minerva in her training."
Thanks, PCGamesN.