'It all started with a missed beer': How Riot and Ubisoft began a new project to prevent harmful comms

Phoenix from Valorant and Ashe from R6S (Image credit: Riot Games / Ubisoft)

Ubisoft and Riot Games have announced a partnership to combat toxicity in their games. The two developers are working on a new AI-based solution, sharing data between them to find a way to mitigate and prevent toxicity and online abuse. The Zero Harm in Comms partnership lets the two huge publishers, whose online games include League of Legends, Valorant, and Rainbow Six Siege, pool the data they gather and, hopefully, develop a tool that is far more effective than current methods at identifying harmful communications. 

I spoke to the two leads, Yves Jacquier, executive director at Ubisoft La Forge, and Wesley Kerr, head of technology research at Riot Games, about the partnership, what it hopes to achieve, and what players can expect if these changes make it into their games. 

PC Gamer: Technologically, what's the difficulty in sharing this information between Riot and Ubisoft, and then perhaps other publishers too?

Yves Jacquier, executive director, Ubisoft La Forge: When you want to create such a project of detecting harmful content in chats, you have two aspects. First, you have the data that you need to rely on to train new AI algorithms. And then you need to work on those AI algorithms. Both topics are pretty different, but are extremely complex. The data question is extremely complex because if you want an AI to be reliable you need to show it a lot of examples of certain types of behaviours, so that it can generalise when it sees new text lines, for example. But to do that, to have this quantity of data, we feel that we can't do that alone. And thus, we had discussions with Wes and had this idea of collaboration. The first difficulty is sharing data while also preserving the privacy and confidentiality of our players, and then remaining compliant with all the rules and regulations, such as the European GDPR. 

Do you intend to bring publishers other than Ubisoft and Riot Games into the project? 

Jacquier: Well, this is a pilot. Our objective is to create a blueprint, but not as a mere recommendation. We want to do that together, face difficulties and challenges together, and then share our learnings and share this blueprint with the rest of the industry. 

Wesley Kerr, head of technology research at Riot Games: That blueprint is going to be critical for how we think about onboarding new people, if we do in future years. I think there is a general hope, as we've seen with the Fair Play Alliance, of the industry coming together to help tackle these big, challenging problems. And so this is our approach: finding a path to share data so we can really take a crack at this. 

Fans have reservations about having their comms recorded and used to track their behaviour in games. Can you talk a little bit about privacy, the anonymised in-game data, and how it stays anonymous?

Jacquier: We are working on that, this is really the first step. So, unfortunately, we're not able to publish the blueprint. However, what I can already share with you is that we are working with specialists to make sure that we are compliant with the strictest rules and regulations, such as GDPR. It's going well, but still, we're not able to explain in detail what it'll look like at the moment. It's a commitment, though, that we will share our learnings when the project is over, which is this summer, hopefully.

Kerr: I would add that we do believe we should collect and share the absolute minimum amount of data to effectively do this. So, we're not looking to gather way more than we need in order to solve this. And we're hoping to remove all PII [personally identifiable information] and confidential information from these datasets before we share them.
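Neither company has published what that data-minimisation pipeline looks like, but as a rough, hypothetical sketch of the idea Kerr describes, removing PII and pseudonymising identifiers before a dataset is shared might look something like this (the patterns and field names below are invented for illustration):

```python
import hashlib
import re

# Hypothetical patterns for common PII found in chat logs; a real pipeline
# would use far more robust detection (named-entity recognition, audits, etc.).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def pseudonymise_id(player_id: str, salt: str) -> str:
    """Replace a real player ID with a salted one-way hash, so records can
    still be linked within one dataset but not traced back to the player."""
    return hashlib.sha256((salt + player_id).encode()).hexdigest()[:16]

def scrub_chat_line(text: str) -> str:
    """Redact obvious PII from a single chat line before it is shared."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

# Example: a raw record is reduced to the minimum fields needed for training.
raw = {"player_id": "rioter#1234", "text": "add me, my email is foo@bar.com"}
shared = {
    "player": pseudonymise_id(raw["player_id"], salt="per-dataset-secret"),
    "text": scrub_chat_line(raw["text"]),
}
print(shared)
```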

Is there a timeline for when this technology would come into play?

Jacquier: This is a really tough question, because what we're focusing on now is an R&D project. It started back in July. We decided to try it for one year, just to give [us] enough time. So what we want to do is work on this sharing data blueprint, and then be able to work on algorithms on top of that, and see how reliable those algorithms can be. 

When you evaluate how reliable an algorithm is, you need to do two things. First, check what percentage of the harmful content it is able to detect, but you don't want too many false positives either. Most of the time, it's a trade-off between the two. So before knowing exactly how this tool will be applicable, and when players will be able to see a difference because of it, we first need to evaluate exactly what the strengths and limits of such an approach are. I also want to add that it's a long-term project. It's extremely complex. So we see this as a first step, as a pilot. It's one tool in the toolbox, as both Ubisoft and Riot have many tools to maximise player safety.
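To make that trade-off concrete, here is a small generic sketch, not the partnership's actual evaluation code, of how catching more harmful content (recall) and avoiding false positives (precision) pull against each other as a classifier's decision threshold moves (the labels and scores are made up):

```python
# Toy illustration of the trade-off Jacquier describes: lowering the decision
# threshold catches more harmful lines (higher recall) but also flags more
# harmless ones (lower precision). Scores are invented model confidences.
labels = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1]                        # 1 = harmful
scores = [0.9, 0.8, 0.4, 0.3, 0.7, 0.1, 0.2, 0.6, 0.5, 0.35]   # model output

def evaluate(threshold: float):
    flagged = [s >= threshold for s in scores]
    tp = sum(f and y for f, y in zip(flagged, labels))          # caught harmful
    fp = sum(f and not y for f, y in zip(flagged, labels))      # false positives
    fn = sum((not f) and y for f, y in zip(flagged, labels))    # missed harmful
    recall = tp / (tp + fn)                                     # % of harm caught
    precision = tp / (tp + fp) if tp + fp else 1.0              # % of flags correct
    return recall, precision

for t in (0.3, 0.5, 0.7):
    r, p = evaluate(t)
    print(f"threshold={t}: recall={r:.2f}, precision={p:.2f}")
```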

The traditional tools in this area are essentially based on dictionaries, which is very unreliable, because it's extremely easy to bypass.

Yves Jacquier

How disruptive is disruptive? What behaviour is this aiming to mitigate? 

Kerr: I think here we're following the lead of the Fair Play Alliance, of which both Ubisoft and Riot are members and core contributors. They've laid out a framework for disruptive behaviour, especially in comms, and have a set of categories that we're aligning on, making sure our labels match up, so that when we do share data we're calling the same disruptive behaviours the same things. That said, I can't enumerate all of them right now, but it includes things like hate speech and grooming behaviours, and some other things that really don't belong in our games. And we work to make sure that we're better at detecting those and removing them from players' experiences.

Jacquier: And also, keep in mind that when we're talking about disruptive behaviours, for the moment we're trying to tackle one aspect, which is text chat. It's already an incredibly complex problem. The traditional tools in this area are essentially based on dictionaries, which is very unreliable, because it's extremely easy to bypass. Just removing profanities has been proven to not work. So, the difficulty here is to try an approach where we are able to make sense of those chat lines, meaning that we're able to understand the context as well. 

If, for example, in a competitive shooter, someone says "I'm coming to take you out", it might be acceptable as part of the fantasy, while in other contexts, in other games, it could be considered a threat. So really, we want to focus on that as a first step. We're already ambitious, but we have to acknowledge it's one aspect of disruptive behaviour or disruptive content that we're focusing on. 
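As a simple illustration of why the dictionary approach Jacquier describes is so easy to defeat, here is a deliberately naive filter (the banned list and chat lines are invented): a trivial character substitution slips straight past it, while a line aimed at the enemy team composition gets flagged even though context would excuse it.

```python
# A deliberately naive profanity filter of the kind Jacquier says doesn't work.
# The banned list and chat lines are invented for illustration only.
BANNED = {"noob", "trash"}

def dictionary_filter(line: str) -> bool:
    """Flags a line only if a banned word appears verbatim."""
    words = line.lower().split()
    return any(w.strip(".,!?") in BANNED for w in words)

print(dictionary_filter("you absolute trash, uninstall"))        # True  - caught
print(dictionary_filter("you absolute tr4sh, uninstall"))        # False - trivial bypass
print(dictionary_filter("their comp is trash, let's punish it")) # True  - false positive:
# the word targets the enemy team composition, not a player, but an exact-match
# filter has no way to tell. Context-aware models aim to close both gaps.
```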

Why now? Is this partnership coming out of an increasing need to mitigate these situations, or has the level of safety on online platforms always needed to be better regulated?

Jacquier: It's probably a mix of all of that. Technology and research have recently made a lot of progress, especially in natural language processing, which is the AI domain dedicated to understanding natural language and trying to predict or understand the meaning and intention behind it. There's been tremendous progress, so things are possible today that simply would not have been feasible, or that we couldn't even have imagined being feasible, a few years ago. 

Second, I think there is a realisation from the entire industry, and not only the gaming industry, that we need to be better collectively at providing a safe space. It's online, but it's not only online; the online side only reflects one part of it. So today there's a realisation that it's a deep and difficult topic, and we've developed the maturity to tackle this kind of issue. It's being able to trust each other, Ubisoft and Riot, enough to say that we're gonna share data, we're gonna share challenges together, and we will try to tackle this together. Having the tools and means to do that, it's probably the perfect alignment. 

What we want to reach is a situation where any player from any culture, from any age, from any background, in any game has a safe experience.

Yves Jacquier

One of the words used in the brief was "preemptive". What does preemptive mean in this circumstance? Banning a player as they progressively get more toxic, or just removing messages before they happen?

Jacquier: What we want to reach is a situation where any player from any culture, from any age, from any background, in any game has a safe experience. That's really what we want to aim for. How we get there, there's no silver bullet. It's a mix of many different tools. We count on the community, we count on promoting positive play, we count on the support teams, customer support and everything. And we count on prototypes such as this one. Talking only about the prototype, it all comes down to the results: will they be reliable enough to simply delete a line, because we're confident enough that it doesn't belong, and to tag the player under whatever rules apply? We don't know yet. It's way too soon. What we want to do is make the tool as reliable as possible and then see what the best use of it is within the entire toolbox.

Kerr: Yeah, I think that's exactly it, and I want to double down on it: the outcome of this is that we're able to detect these things far better. How we, or our product teams, choose to integrate that into the system to protect players will come down to different features and teams. But I think using the AI as a super strong signal that they can trust and rely on to actually take action is going to be the key to being preemptive. 

Two competing publishers working on something like this together is unusual. How did this project even start?

Jacquier: It all started with a missed beer, I have to admit. Because Wes and I work in similar areas for our respective companies, which is research and development, we had a couple of discussions in the past to see, you know, "how is it going", "how do you address that", "what are your difficulties", and we regularly touched base. We had a plan to go to GDC and then we had Covid-19 lockdowns. Unfortunately, we missed that beer together. But still, we had a chance to have further discussions on those topics. At some point, when you trust someone enough, you're able to start sharing the things that worry you. You can start showing them where you have difficulties, beyond the corporate messages, and that's exactly the situation with Wes. We were totally in the same mindset. Very quickly, we brought in our teams to see how we could go further, beyond our own intentions. And I must say that I was impressed by how fast the top management of both companies went to greenlight the project. When you go to the top management of a company saying, "hey, I want to share player data with a competitor", you need two things: first, solid arguments, and also very strong trust in your partner. And a missed beer sometimes helps.

How can you tell if something is actually disruptive? If I say "shut up" to a friend, that's very different from saying it to someone who's actually aggravating me. How can this AI tell the difference?

Kerr: That context is sort of the key bit that we get to improve upon over regular social media. Luckily, both Ubisoft and Riot operate games in which we can look at other signals to help determine whether you're having banter with your friends online, or you're actually talking to a team in a negative manner. As I mentioned, we're going to take as little data as possible, but we can see a signal such as whether you're queuing up with friends or queuing up solo. Those sorts of signals come in, as well as other bits and pieces from the game that help provide the additional context that just looking at the raw language won't be able to give alone.

It's still a very hard problem, which is why we're looking for support and help across the industry. That is one piece of it, and I think the other piece is, as Yves alluded to earlier, there's been a drastic improvement in these language models over the past few years. Their ability to understand context and nuance is getting better all the time. And so we're hoping now is the right time that we can tap into that, and leverage those cues from the model as well, to be much more confident in the outputs that we provide.

Jacquier: To add to what Wes is saying, you mentioned an example where saying "shut up" in two different contexts means two very different intentions. If you witnessed a situation where player one says "shut up" to player two, then because of the context, the repetition, and the other interactions between the two players, you would probably be able to say whether it was acceptable or not. This is exactly what we want an AI to be trained upon. Even a human, because of their background, sensitivity, or the mood of the day, could make mistakes, and an AI doesn't work differently. What we want to do is ensure that we're able, based on the latest NLP algorithms, to provide a certain level of reliability: detecting most of the harmful content while excluding most of the false positives. And based on that comes the second step, which is how we will use it. Is it powerful enough to be automated and automatically tag lines or players? Or do we need to feed it into a wider process before we implement consequences? Player respect and player safety are definitely at the heart of what we're doing there. 
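Neither company has detailed how text and context are combined, but as a hypothetical sketch of the signals Kerr and Jacquier describe, a classifier's input might bundle the chat line with features like premade-group status and repetition; every field name below is illustrative rather than anything either studio has confirmed.

```python
from dataclasses import dataclass

@dataclass
class ChatEvent:
    """One chat line plus the kind of in-game context Kerr describes.
    All field names are hypothetical, for illustration only."""
    text: str
    same_premade_group: bool     # was the recipient queued with the sender?
    repeats_in_last_minute: int  # how often this player has sent similar lines
    reported_before: bool        # prior reports against the sender

def context_features(event: ChatEvent) -> dict:
    """Assemble inputs a model could consume alongside the raw text.
    A real system would feed these into a trained NLP classifier; here we
    only show how the context travels with the line."""
    return {
        "text": event.text,
        "banter_with_friends": event.same_premade_group,
        "escalating": event.repeats_in_last_minute >= 3,
        "prior_reports": event.reported_before,
    }

# "shut up" between premade friends reads very differently from the same words
# repeated at a stranger once the surrounding signals are included.
friendly = ChatEvent("shut up lol", True, 1, False)
hostile = ChatEvent("shut up", False, 5, True)
print(context_features(friendly))
print(context_features(hostile))
```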

Language and insults move fast. I remember how quickly the insult "simp" went from rare to regularly used. How is this sort of tech going to keep up with the real evolution of insulting language? 

Jacquier: That's exactly why we focus on the blueprint, but it's not the kind of project where, you know, after July it's done and, yoo hoo, problem solved. What we're trying to do here is a pilot. And we agree it's a moving target. It's an ever-evolving target, which is exactly why dictionary-based approaches do not work: you have to update them almost in real time and find all the ways to write profanities one way or another, and things like that. And we know that people can be extremely creative at times, even when it's to do bad things. So, in your example, once we are able to create such a blueprint, the idea is to make sure that we always have datasets which are up to date, to be able to detect any new expressions of harmful content.

Kerr: Yeah, I see this project as sort of never done, as language evolves and changes over time. And I know internally at Riot, we have our central player dynamics team, who run these protections in production and work very hard to keep our players safe. I think this project will continually feed those models and continually allow us to make further progress and improve over time. 

Imogen has been playing games for as long as she can remember but finally decided games were her passion when she got her hands on Portal 2. Ever since then she’s bounced between hero shooters, RPGs, and indies looking for her next fixation, searching for great puzzles or a sniper build to master. When she’s not working for PC Gamer, she’s entertaining her community live on Twitch, hosting an event like GDC, or in a field shooting her Olympic recurve bow.