Riot is taking new steps to handle the worst offending trolls in League of Legends by automating bans of players who engage in “verbal harassment” of others. The bans can activate as quickly as 15 minutes after the end of a match where an offender is detected.
Riot explains in detail how the new automated banning system works in a blog post. After teammates or opponents report a player for homophobia, racism, death threats, or other forms of excessive abuse, the system validates the reports before applying punishment where needed. The offender then receives a “reform card” that pairs chat-log evidence of the behaviour with an explanation of the punishment.
These harmful communications will be punished with two-week or permanent bans within 15 minutes of the game’s end
Rather than relying on a blacklist of words, the automated system tries to learn phrases that commonly generate player reports. This makes it more adaptable to players who try to work around a rigid filter with creative use of spaces, numbers and special characters.
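Riot has not published how its detection works beyond what the blog post describes, but the weakness of rigid word blacklists is easy to demonstrate. The following is a hypothetical toy sketch (the blacklist term, function names and substitution table are all illustrative, not Riot’s) showing how a simple normalisation pass catches obfuscated spellings that exact matching misses:

```python
import re

# Common digit/symbol substitutions players use to dodge exact-match filters.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e",
                          "4": "a", "5": "s", "7": "t",
                          "@": "a", "$": "s"})

BLACKLIST = {"idiot"}  # toy example term, not from Riot

def naive_match(text: str) -> bool:
    """Rigid blacklist: match whole words exactly."""
    return any(word in text.lower().split() for word in BLACKLIST)

def normalise(text: str) -> str:
    """Lowercase, undo digit/symbol swaps, strip inserted separators."""
    text = text.lower().translate(LEET_MAP)
    return re.sub(r"[^a-z]", "", text)  # drop spaces, punctuation, etc.

def normalised_match(text: str) -> bool:
    """Blacklist check after normalisation."""
    return any(word in normalise(text) for word in BLACKLIST)

msg = "you  1 d 1 o t"
print(naive_match(msg))       # False: spaces and digits defeat exact match
print(normalised_match(msg))  # True: normalisation recovers the word
```

Even this normalisation step only patches known tricks; a system that learns which phrases actually attract reports, as Riot describes, adapts to new evasions without anyone maintaining the substitution table by hand.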
Aware of the potential for false positives, Riot said its moderation team would manually review the first 1,000 cases handled by the instant feedback system, which was released on North American and European servers last week. Lead Designer of Social Systems Jeffrey Lin said that in the early days the false positive rate was around 1 in 6,000, within acceptable limits when dealing with 67 million players.
Some players are unsurprisingly unhappy with the decision and took to the forums, with objections centring on Riot’s choice to automate moderation. Lin responded via Twitter:
I will note that one case of the system being overaggressive is not a reason to shut the system off. Let’s be reasonable everyone!
— Jeffrey Lin (@RiotLyte) May 23, 2015