Twitch, the livestreaming service that largely caters to gamers, has exploded in popularity since being acquired by Amazon in 2014—but toxicity on the platform has also increased. This week, Twitch took an important step toward getting a handle on its applause-like “chat” feature, and it goes beyond the usual dictionary-based approach of flagging inappropriate or abusive language.
The Tuesday rollout of a new “ban evasion” flag came with the kind of fanfare such a feature doesn’t normally get on social media and video platforms. And it puts Twitch in a position to say what many other platforms don’t: the company is not only paying attention to “sockpuppet”-account generation but also pledging to squash it.
Spinning up attacks
Pretty much any modern online platform faces the same issue: users can join, view, and comment on content with little more than an email address. If you’d like to say nasty things about Ars Technica across the Internet, for example, you could probably spin up a ton of new accounts on various sites in a matter of minutes. Assembling that veritable anti-Ars mini-mob mostly requires a series of free email addresses. Should a service require some form of two-factor authentication, a determined attacker can go to the extra length of attaching spare physical devices or spinning up additional phone numbers.
In less hypothetical terms, Twitch creators have dealt with this in the form of “hate mobs” for some time now—with the issue peaking in intensity and public outcry after Twitch, among other things, added an “LGBTQIA+” category. Abusive users charged up hyperfocused lasers of hateful speech, usually directed at smaller creators who could be discovered within Twitch’s category directory. As I explained in September:
While Twitch includes built-in tools to block or flag messages that trigger a dictionary full of vulgar and hateful terms, many of the biggest hate-mob perpetrators have turned to their own dictionary-combing tools.
These tools allow perpetrators to evade basic moderation tools because they construct words using non-Latin characters—and can generate thousands of facsimiles of notorious slurs by mixing and matching characters, thus looking close enough to the original word. Their power for hate and bigotry explodes thanks to context that turns arguably innocent words into targeted insults, depending on the marginalized group they’re aimed at.
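To make the character-swapping concrete, the defensive side of this arms race typically folds lookalike characters back toward plain Latin before running a dictionary check. The sketch below is a minimal, hypothetical version of that idea; the `CONFUSABLES` table and function names are illustrative, not Twitch’s actual implementation, and real systems lean on Unicode’s full confusables data (thousands of entries) rather than a handful:

```python
import unicodedata

# Tiny illustrative confusables map (hypothetical; real filters use
# Unicode's full confusables tables). Keys are lookalike characters,
# values are the Latin characters they imitate.
CONFUSABLES = {
    "а": "a",  # Cyrillic a
    "е": "e",  # Cyrillic e
    "о": "o",  # Cyrillic o
    "р": "p",  # Cyrillic er, resembles Latin p
    "@": "a",
    "$": "s",
}

def normalize(text: str) -> str:
    """Fold stylistic variants and lookalikes toward plain Latin."""
    # NFKC collapses many variants (fullwidth, bold, etc.) on its own
    folded = unicodedata.normalize("NFKC", text).lower()
    return "".join(CONFUSABLES.get(ch, ch) for ch in folded)

def matches_blocklist(message: str, blocklist: set[str]) -> bool:
    """Check the normalized message against a term blocklist."""
    norm = normalize(message)
    return any(term in norm for term in blocklist)
```

Attackers run the same mapping in reverse, generating many surface forms of one banned word, which is why normalization alone never fully closes the gap.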
Battling these attacks on a dictionary-scanning level isn’t so cut-and-dried, however. As any social media user will tell you, context matters—especially as language evolves and as harassers and abusers co-opt seemingly innocent phrases to target and malign marginalized communities.
Twitch’s “ban evasion” flag takes a different approach: it leverages Twitch’s server-side information about account generation to decide whether a newly created account appears to come from someone who already broke the site’s Terms of Service, then created another account. When this flag is applied, the user in question won’t notice anything different, but streamers and their moderators will see that user’s messages land in a special “ban evasion” moderation silo. This is generally known as “shadowbanning,” since messages appear to work just fine for the banned person, but nobody else in a chat feed sees them.
If the account was moderated in error, or the streamer doesn’t see any issue with the flagged user’s messages, the flagged user can be unbanned and brought back into public chat as normal. Otherwise, the channel can either loudly kick/ban the flagged user or leave them in a shadowbanned stupor, typing to nobody but themselves. (Twitch also has a milder version of this flag that leaves the user’s chatting abilities untouched but gives moderators a heads-up that an account seems suspicious.)
This move appears to split the difference between immediately shutting down suspicious accounts and letting them roam free and unchecked across Twitch’s hills and valleys. And it follows a new optional toggle for individual channels that launched in late September: hosts can choose to restrict chat features to users who’ve verified their phone number and/or been on Twitch for a particular amount of time.
The important thing about the ban-evasion flag is that it sees Twitch admit to something that “engagement”-obsessed social media sites hate to admit: fake accounts exist, and they can poison whatever platforms they’re created on. This tends to run counter to the metrics that site operators love to show off when seeking advertising partners; social media sites would rather not add asterisks to any counts of “monthly active users” when determining advertising rates.
More to learn about machine learning
Still, by coming out and saying to users, “bad actors often choose to create new accounts, jump back into chat, and continue their abusive behavior,” Twitch has drawn a line in the sand. Twitch has gone on the record confirming that this issue exists on its platform—in a way that has arguably not been declared by leadership at sites like YouTube, Twitter, Facebook, and other heavily scrutinized social media platforms.
That means the company has officially invited accountability and scrutiny on the issue going forward. This is also a clever method to target problematic behavior on a site that supports languages and regions all over the world, since the technical process of sidestepping Twitch’s ToS is arguably less prone to getting lost in translation than different languages’ hate speech. Still, in order for ban-evasion flags to appear, the accounts in question have to actually get called out or banned—and that means Twitch’s vague pledge of “develop[ing] more tools to prevent hate, harassment, and ban evasion” will arguably require more work, moderation, and supervision than its automated tools may yet be fully prepared to deliver.
Twitch’s Tuesday announcement includes a vague description of “machine learning” as a determiner of whether an account is “suspicious,” though the platform doesn’t further clarify what data sets it’s polling to make that determination. Is Twitch looking for specific IP addresses? Traces of VPN usage? Incognito browsing sessions? Cookies that a careless harasser failed to clear? Or specific chat phrases used within the first few hours of the account’s creation? It’s unclear what massive data sets Twitch is scanning in a way that can home in on seemingly inauthentic accounts.
In response to our questions about this system, a Twitch spokesperson offered the following statement: “The tool is powered by a machine learning model that takes a number of signals into account—including, but not limited to, the user’s behavior and account characteristics—and compares that data against accounts previously banned from a Creator’s channel to assess the likelihood the account is evading a previous channel-level ban.”
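Twitch hasn’t disclosed its signals or model, but the “compares that data against accounts previously banned” step in the statement can be illustrated with a toy nearest-match scorer. Every signal name and the threshold below are invented for illustration; a production model would be far more sophisticated than raw cosine similarity:

```python
def similarity(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity over two sparse vectors of account signals."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm_a = sum(v * v for v in a.values()) ** 0.5
    norm_b = sum(v * v for v in b.values()) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def likely_evader(candidate: dict[str, float],
                  banned_accounts: list[dict[str, float]],
                  threshold: float = 0.9) -> bool:
    """Flag a new account whose signals closely match a banned one."""
    return any(similarity(candidate, b) >= threshold
               for b in banned_accounts)
```

The interesting property of this per-channel framing is that the comparison set is the channel’s own ban history, so the same account could be flagged in one channel and untouched in another.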