
Twitterbot experiment suggests that public disapproval by white men can reduce harassers' use of racist language

NYU PhD candidate Kevin Munger made a set of four male-seeming Twitter bots that attempted to “socially sanction” white Twitter users who habitually used racial epithets (he reasons that being white and habitually tweeting epithets together make a good proxy for harassment). The bots varied on two dimensions: they could be white or black (that is, have names that have been experimentally shown to be associated with “whiteness” or “blackness”), and they could have either 2 followers or 500.


Each bot was made to retweet a plausible-seeming selection of random, innocuous tweets to make it seem nonbotlike. Then, when a bot found a white, habitual slur user posting a tweet that contained one of these slurs, it sent that user a single tweet: “@[subject] Hey man, just remember that there are real people who are hurt when you harass them with that kind of language.”
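Munger's bot code doesn't appear to be published, so as a rough sketch only: the single sanctioning reply might look something like this using the Tweepy library, where the credentials, handle, and tweet ID are placeholders and the slur-detection step is assumed to have already happened.

```python
# A minimal sketch of the sanctioning reply, not Munger's actual code.
# Assumes Tweepy 4.x; all credentials and IDs below are placeholders.
import tweepy

SANCTION = ("Hey man, just remember that there are real people who are "
            "hurt when you harass them with that kind of language.")

client = tweepy.Client(
    consumer_key="...",
    consumer_secret="...",
    access_token="...",
    access_token_secret="...",
)

def sanction(handle: str, tweet_id: int) -> None:
    """Send the single, fixed sanctioning message as a reply to the slur tweet."""
    client.create_tweet(
        text=f"@{handle} {SANCTION}",
        in_reply_to_tweet_id=tweet_id,
    )
```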


Munger then tracked each subject's tweets for a week to see what effect this intervention had; he found that in one case, the high-status white bot, the use of racial epithets fell measurably (0.3 fewer uses) over the following week.
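To make that measurement concrete, here is an illustrative sketch of how such a before/after effect could be computed. The per-subject counts are invented numbers chosen to echo the reported 0.3 drop; this is not Munger's actual analysis.

```python
# Illustration only: hypothetical per-subject counts of slur-containing
# tweets in the week before and the week after the bot's reply.
from statistics import mean

slurs_before = [5, 3, 6, 4, 7, 5, 4, 6, 5, 5]  # invented numbers
slurs_after = [5, 3, 5, 4, 6, 5, 4, 6, 4, 5]   # invented numbers

# Negative means fewer slurs after treatment; positive means more.
change = mean(slurs_after) - mean(slurs_before)
print(f"Mean change in weekly slur use: {change:+.1f}")  # -0.3
```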


Moreover, subjects who were chided by the black, low-follower bot actually increased their use of racist language over the following week.

I couldn’t find Munger’s data, so I don’t know how big the sample size was. It would obviously be useful to track this study’s results over longer timeframes, and to measure real-world outcomes involving real Twitter users with real followers who might pile on to the subjects after a censuring tweet.

I’m also curious about the subjects’ motivations: do they reduce their racist language because they’re afraid that the high-status white user will hold them in low regard, because they’ve reconsidered their ways, or for some other reason?

Nevertheless, the conclusion I took away from this is that white men with a lot of Twitter followers can make a difference to racist behavior (if not necessarily attitudes) online.

As Munger writes in the Washington Post:

The messages were identical, but the results varied dramatically based on the racial identity and status of the bot and the degree of anonymity of the subject.

Overall, I found that it is possible to cause people to use less harassing language. This change seems to be most likely when both individuals share a social identity. Unsurprisingly, high status people are also more likely to cause a change.

Many people are already engaged in sanctioning bad behavior online, but they sometimes do so in a way that can backfire. If people call out bad behavior in a way that emphasizes the social distance between themselves and the person they’re calling out, my research suggests that the sanctioning is less likely to be effective.

Physical distance, anonymity and partisan bubbles online can lead to extremely nasty behavior, but if we remember that there’s a real person behind every online encounter and emphasize what we have in common rather than what divides us, we might be able to make the Internet a better place.


This researcher programmed bots to fight racism on Twitter. It worked.
[Kevin Munger/Washington Post]
