Riot Games is attempting to crack down on the "extreme" toxic behavior that has plagued the online play of the studio's popular MOBA League of Legends for years.
Riot's lead designer of social systems, Jeffrey Lin, took to Twitter to announce the news, then elaborated in a Reddit post. The studio just wrapped up a 24-hour test period issuing instant 14-day or permanent bans to players who exhibited despicable or unsportsmanlike behavior.
"Today is start of tests where we'll be using a new machine learning approach [plus] Player Support manual reviews to target extreme cases of toxicity," Lin wrote. "Players that get permanent bans will see a ban code of 2500 during these tests."
The banhammer was swung in an effort to curb homophobic and racist language, death threats, and intentional "feeding," where a player dies on purpose to give the opposing team an advantage.
"[W]e've learned in recent months that being transparent is extremely critical to the playerbase's trust in our systems, so we've decided to do a compromise. If players complain about unfair bans for this particular system ... we're going to be fully transparent and posting the chat logs that resulted in the ban.
"Some players have also asked why we've taken such an aggressive stance when we've been focused on reform; well, the key here is that for most players, reform approaches are quite effective. But, for a number of players, reform attempts have been very unsuccessful which forces us to remove some of these players from League entirely."
Where Riot will go from here following the 24-hour testing phase remains unknown, as Lin made it clear the team wouldn't commit to a long-term plan until it knew for certain the proper one was in place.