This past July, Twitch added a multitude of tags to improve the discoverability of channels belonging to users of various identities, including trans, Black, and disabled streamers. The change also served as a lightning rod for harassers, who used the tags to find and direct hate at those communities. Particularly egregious were so-called "hate raids," in which automated accounts flooded chats with derogatory or violent speech.
Things came to a head on September 1st, when a group of streamers staged a "#DayOffTwitch" to protest Twitch's inadequate response to the issue. Twitch responded with an option for streamers to require phone number-based verification to participate in their chats, and this seems to have mitigated the worst of the hate raids.
Twitch's latest addition to its privacy and security tools will hopefully give streamers further options for moderation, specifically for dealing with problem individuals who evade bans by creating alternate accounts. In its announcement on November 30, Twitch outlined a machine learning system that attempts to detect such users.
"Suspicious User Detection, powered by machine learning, is here to help you identify and restrict suspected channel ban evaders from chatting before they can disrupt your stream. Learn more here: https://t.co/01cCwnQZfw" — November 30, 2021
Under the default settings, when such an account is detected participating in chat, its messages are either flagged in the channel or muted pending moderator action, depending on how likely the algorithm judges the account to be an unwanted user. Streamers can adjust how harsh that initial automatic response is, with the most extreme option being an automatic ban on suspicious accounts.
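Twitch hasn't published how the system works internally, but the behavior described above amounts to a likelihood score mapped onto escalating actions. A minimal sketch, assuming a hypothetical `evasion_score` output from the model and invented threshold values and function names:

```python
# Hypothetical sketch of a likelihood-threshold moderation policy like the
# one described. All names and thresholds are invented for illustration;
# Twitch's actual implementation is not public.

def moderate(evasion_score: float,
             flag_threshold: float = 0.5,
             restrict_threshold: float = 0.8,
             auto_ban: bool = False) -> str:
    """Return the action for a chatter given a model's ban-evasion score (0-1)."""
    if evasion_score >= restrict_threshold:
        # Likely evader: mute pending moderator action, or ban outright
        # if the streamer has opted into the strictest setting.
        return "ban" if auto_ban else "restrict"
    if evasion_score >= flag_threshold:
        # Possible evader: mark the messages in chat for moderators.
        return "flag"
    return "allow"

print(moderate(0.9))                  # likely evader: restricted by default
print(moderate(0.9, auto_ban=True))   # strictest setting: banned
print(moderate(0.6))                  # possible evader: flagged
print(moderate(0.2))                  # below both thresholds: allowed
```

The adjustable thresholds mirror the streamer-facing controls: raising `auto_ban` trades false positives for stronger protection, which is why, per Twitch's own caveat below, it is off by default.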
The announcement included a caveat about the program's accuracy, elaborating:
"One thing to prepare for, particularly around launch, is that no machine learning will ever be 100% accurate, which means there is a possibility of false positives and false negatives. That's why Suspicious User Detection doesn’t automatically ban all possible or likely evaders… The tool will learn from the actions you take and the accuracy of its predictions should improve over time as a result."
I appreciate the tool's modularity and how much control it offers streamers moderating their own channels. Twitch owes them at least this much, given how long it allowed the hate-raiding issue to persist, as well as the awful copyrighted music situation and its attendant purge of old videos last October.