
How collaborative social blocking could bring sanity to social networks

The wisdom of crowds isn’t always that smart. We’ve all seen mobs emerge online, or read uninformed Wikipedia entries that are the work of many idiotic hands. Where crowds help online is when you find your own particular gang: a group of people with whom you have aligned interests for support, complaints, laughs, and profanity.

Social networks are allegedly built around this simple idea, but abusers, trollers, griefers, harassers, and others in the bestiary either enjoy or compulsively seek out people to harm. While Facebook can be a walled garden — despite its best efforts — where you cultivate a set of close friends and maybe interact with friends of theirs, the wide-open spaces of public Twitter accounts make it easy to be singled out and attacked constantly. It is a game for some to gather forces and assault a person until they give up and leave.

Twitter’s basic rules give individuals and groups of people asymmetrical power as long as they are persistent and awful. The company’s response on average is inadequate even to egregious reports, and its rules about what constitutes harassment are restrictive in a way that diminishes victims’ ability to gain redress and find safety. Samantha Allen, a fellow at Emory University, and the recipient of waves of abuse this year when she wrote about games journalism, says, “As it’s currently built, Twitter wins during harassment campaigns and we lose. We have to accept working and socializing in an unsafe environment because Twitter doesn’t want to permanently ban users or implement more drastic penalties for abuse.”

Only the attacked party can report abuse; deleted tweets can’t be used as evidence (nor can screen captures); adjudication of a result is at Twitter’s internal discretion, can take time, and doesn’t seem subject to appeal; and despite Twitter’s ostensible best efforts, individuals can create numerous new accounts to continue their behavior when one is deemed abusive and suspended or banned. Sarah Brown, a former UK elected official, and for a time the only openly trans politician in office, says, “I blocked most of [my attackers] manually, but a common technique of harassers is to create new accounts and try again from there, so it was difficult to keep up, and the ones that got through were often very distressing.”

Those affected can mute and block accounts one at a time, even as new accounts spawn or new attackers emerge; they can lock their account and withdraw from public participation. They can even shut down an account entirely, silencing their voice and removing themselves from a community.

Social problems can’t be solved solely by technology, even within a social network. But tech can restore balance, even when it doesn’t fix inherent problems that arise when individuals interact. Collaborative blocking offers one way to give people who experience harassment — or simply wish to avoid it in the first place — the wisdom and force of a crowd that has their backs.

We all block together when we block

With collaborative blocking, a group of people create a list of accounts to block or, in some cases, mute. The list is propagated through a Web-based app that allows people to opt in with a Twitter account, authorizing the app to carry out certain behavior on their behalf. Twitter allows clients and specialized apps to block, mute, and unfollow, among other actions.

The services continuously add blocked or muted accounts to each subscribed account, throttled to comply with Twitter’s limits on the frequency of updates. Twitter declined to comment generally on harassment policies and related issues for this article, but confirmed that third-party apps like these are valid uses of its developer tools.
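To make the mechanics concrete, here is a minimal sketch, in Python, of how such a service might apply a shared list on a subscriber’s behalf through Twitter’s REST API (v1.1). The endpoint names are Twitter’s own; the credentials, list contents, and pacing are placeholders rather than any particular service’s values.

```python
import time

import requests
from requests_oauthlib import OAuth1

API = "https://api.twitter.com/1.1"

# OAuth tokens granted when the subscriber opted in (placeholder values).
auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
              "SUBSCRIBER_TOKEN", "SUBSCRIBER_SECRET")

def block(user_id):
    """Block one account on the subscriber's behalf; True on success."""
    r = requests.post(API + "/blocks/create.json",
                      data={"user_id": user_id}, auth=auth)
    return r.status_code == 200

def mute(user_id):
    """Mute one account instead, for subscribers who prefer a lighter touch."""
    r = requests.post(API + "/mutes/users/create.json",
                      data={"user_id": user_id}, auth=auth)
    return r.status_code == 200

# Account IDs curated by the group (placeholders).
shared_list = ["12345", "67890"]
for uid in shared_list:
    block(uid)
    time.sleep(2)  # pace the calls to stay within Twitter's rate limits
```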

(A brief aside on Twitter terminology. When you block on Twitter, the blocked account is removed from your followers if it was among them; it can still see your timeline, but it cannot use Twitter’s built-in retweet feature or favorite your tweets, and Twitter suppresses its @ mentions of you from appearing in your notifications. Blocking does not equate to a spam report, which is a separate feature. A muted user keeps follow, retweet, and mention privileges, but the account that muted them no longer sees their tweets in its timeline. Some third-party apps have their own mute options, such as duration-based suppression, which edit the timeline as displayed in the app.)
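That duration-based suppression lives entirely in the client. As an illustration only, assuming nothing about any particular app’s internals, a timed mute can be as simple as a table of expiry times consulted when the timeline is rendered:

```python
import time

muted_until = {}  # user ID -> timestamp when the app-side mute expires

def mute_for(user_id, hours):
    """Mute an account in the app only, for a fixed duration."""
    muted_until[user_id] = time.time() + hours * 3600

def visible(tweet):
    """True unless the tweet's author is under an unexpired app-side mute."""
    expiry = muted_until.get(tweet["user"]["id_str"])
    return expiry is None or time.time() > expiry

def filter_timeline(tweets):
    """Applied to each fetched page before display; Twitter never sees it."""
    return [t for t in tweets if visible(t)]

mute_for("12345", hours=24)  # hide this account's tweets for a day
```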

The fact is that, as with most egregious behavior, the worst offenders are a minority of those who engage in abuse. But through the sheer volume of their interactions, their recidivism, and their typical targeting of many individuals (sometimes in the same tweet), they are also relatively easy for a group to spot, mark, and block.

The asymmetrical benefit thus comes from knocking accounts identified as block-worthy out of circulation quickly enough that subscribers to a list never experience those accounts’ tweets — or, if tweets do slip through, they are deleted from a subscriber’s timeline once the block is applied. This reduces the effectiveness of any given piece of abusive text: it may not reach its intended victims, and even when it does, it doesn’t persist.

It can also provide an effective counter to the trolls and villains who create endless numbers of accounts to speak their maledictions. No one person has to find and kill all these accounts; the load (and thus psychological toll) is distributed among all those who maintain the list. The thrill of knocking out abusers may counter some of the aggravation, too.

Of course, the devil is in the details, as he always is: the people who mark accounts as abusive for a particular blocking tool are as human as the rest of us. Whatever criteria are used to mark accounts as unsavory to a particular group or service, the blocked party will disagree with the action, as might some of the people who have opted into the group list.

Do these tools constitute censorship or an abuse of free speech? Only to the same extent as any consensual, collaborative exclusionary process run so that its members receive only the speech they wish to hear. Collaborative blocking is as much censorship as Spamhaus, the service that uses a variety of methods to prevent some of the bazillions of unsolicited commercial emails from reaching their destinations.

People opt into collaborative blocks. The people excluded may still use Twitter; they may even still read the timelines of the accounts that have excluded them. What they can’t do is force people to hear what they have to say. And that enrages people who believe they have a right to speak in every forum. To paraphrase Stephanie Zvan, quoted later, people challenge exclusions when they think that all spaces in a given realm must also be their spaces. This is as true of street harassment as it is of Twitter.

The mechanics of blocking apps

The Block Bot appears to be the first, or at least the first widely used, collaborative blocking tool; its original developer, James Billingham, says he had to rewrite it last September to better conform to Twitter’s app rules. Block Bot’s administrative governance remains tied to the Atheism+ movement, a group attempting to accommodate diversity in “organized atheism,” as Stephanie Zvan describes it. The five administrators are part of the A+ forum. Admins can add blockers, who don’t need to be part of that group; there are now 33 of them. Accounts authorized by Block Bot are monitored for hashtag-based commands, such as “+ #AddBlocker”.
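As a sketch of how that command channel might work: the “+ #AddBlocker” syntax is the one quoted above, while the rest of the grammar here is my assumption, not Block Bot’s documented behavior. The bot only needs to watch authorized accounts’ tweets for a leading marker:

```python
import re

# A leading "+", a hashtag command, and an optional @target.
COMMAND = re.compile(r"^\+\s*#(\w+)\s*(@\w+)?")

def parse_command(tweet_text):
    """Return (command, target) if the tweet is a command, else None."""
    m = COMMAND.match(tweet_text.strip())
    if not m:
        return None
    return m.group(1), m.group(2)

print(parse_command("+ #AddBlocker @some_admin"))  # ('AddBlocker', '@some_admin')
print(parse_command("just a normal tweet"))        # None
```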

But Block Bot’s utility has shifted well beyond facilitating discussions about atheism, and its lists now broadly include people committed to ideologies around men’s rights (MRAs), anti-feminism, and exclusionary feminism (opposition to trans people and sex workers). These are all terribly loaded terms, and I’ll get to how that’s dealt with in a moment.

Block Bot users can opt in at Level 1, 2, or 3; Level 2 includes Level 1’s blocks, and Level 3 includes both 1 and 2. Level 1 blocks the sorts of accounts that most sensible people would agree are prima facie abusive; it doesn’t take a strong ideological association to identify that sort of threat or abuse, or to recognize stalking and fraudulent accounts. Level 2 adds ideologically identified people who may or may not be per se abusive. Level 3 could be defined as the clueless and irritating.
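The cumulative scheme is simple to express. Here is a sketch with placeholder account names; the real lists are, of course, Block Bot’s own:

```python
# Placeholder contents; a subscriber at Level N gets every level up to N.
BLOCKS_BY_LEVEL = {
    1: {"abusive_account_a", "stalker_b"},  # prima facie abusive
    2: {"ideologue_c"},                     # ideologically identified
    3: {"annoying_d"},                      # clueless and irritating
}

def blocks_for(subscriber_level):
    """Union of all block lists at or below the subscriber's chosen level."""
    result = set()
    for level in range(1, subscriber_level + 1):
        result |= BLOCKS_BY_LEVEL.get(level, set())
    return result

print(blocks_for(2))  # Level 2 includes Level 1's blocks as well
```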

To meet Twitter’s rules, Block Bot provides a public list of those currently present in each level. Billingham notes, “It never blocks someone you follow; also if a user unblocks someone it never reblocks them.” To avoid triggering Twitter’s spam reporting algorithm, blocks are added in small waves. Block Bot has a tool, partly at Twitter’s request, to remove blocks by level (or even all blocks) on an account when one leaves the service. On August 8, the Block Bot account tweeted that the service had applied about 320,000 blocks to its subscribed accounts over the course of seven days.
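Those safeguards suggest logic along these lines. This is a sketch of the behavior Billingham describes, not Block Bot’s code, and the wave size and pacing are invented numbers:

```python
import time

def pending_blocks(block_list, following, manually_unblocked, already_blocked):
    """Never block someone the subscriber follows; never re-block an
    account the subscriber manually unblocked; skip existing blocks."""
    return [uid for uid in block_list
            if uid not in following
            and uid not in manually_unblocked
            and uid not in already_blocked]

def apply_in_waves(to_block, apply_block, wave_size=20, pause=60):
    """Apply blocks in small waves rather than all at once."""
    for i in range(0, len(to_block), wave_size):
        for uid in to_block[i:i + wave_size]:
            apply_block(uid)  # e.g. the blocks/create call sketched earlier
        time.sleep(pause)     # pacing avoids tripping spam heuristics
```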

Block Bot is currently hosted on GitHub as an open-source project for non-commercial use, and Billingham says he and others now helping with the project have a code rewrite plan to make it work in a more distributed fashion.

Two other projects are also underway. Block Together has an aim similar to Block Bot’s, though it’s more of a framework; its developer, Jacob Hoffman-Andrews, is looking for help with it and is building and testing it through a beta stage. It will also be replicable, and could function as a plug-in option for a site that wants to offer a list to members, or as a component of a more comprehensive Web-based Twitter app.

Flaminga takes a broader approach, though it’s still very much in the early stages of development by its creator, Cori Johnson. “Considering how little Twitter has done to solve this problem, I want to give users as many options as possible to manage their received content however they choose,” she says. Flaminga will allow shared block lists (and potentially make use of Twitter’s mute, too) among friends, rather than a single list for an entire deployed service.

Johnson has a few key additional ideas, too. She’s looking into markers that track the age of Twitter accounts as a way to prevent newly registered accounts, often used for spamming and abuse, from appearing in one’s timeline. She notes, “A small team successfully prototyped age filtering at a hackathon I co-ran in February of this year and we are now digging through that code in hopes of making that tool accessible to the public.” And Flaminga could use keywords in Twitter profiles for filtering as well.
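Age filtering is straightforward to prototype because Twitter’s API includes a created_at field on every user object. Here is a sketch with an assumed seven-day cutoff; Flaminga’s actual threshold, if it has settled on one, isn’t something I know:

```python
from datetime import datetime, timedelta, timezone

# Twitter's v1.1 timestamp format, e.g. "Wed Aug 27 13:08:45 +0000 2008".
CREATED_AT = "%a %b %d %H:%M:%S %z %Y"
MIN_AGE = timedelta(days=7)  # assumed cutoff, not Flaminga's

def old_enough(tweet):
    """True if the tweet's author registered more than MIN_AGE ago."""
    created = datetime.strptime(tweet["user"]["created_at"], CREATED_AT)
    return datetime.now(timezone.utc) - created >= MIN_AGE

def filter_new_accounts(tweets):
    """Drop tweets from brand-new accounts, a common mark of throwaways."""
    return [t for t in tweets if old_enough(t)]
```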

But does it work?

It’s all well and good to talk about the technology, but does it work? Block Bot is the only tool that has been in use long enough, and by enough people, to provide meaningful results. I spoke with three people who have been individually or communally the target of harassment and have used Block Bot. The answer seems clear: yes, it helps.

Samantha Allen writes regularly about gender, sexuality, and technology, but a column earlier this year about a major gaming site provoked an unbelievable shitstorm of harassment driven, as it often is, by Reddit and 4chan. Allen says she signed up for Block Bot after having her partner sift through her feed, when she needed to be back on for personal and professional communication.

She found a large overlap between the people on Block Bot’s list and those targeting her. “I was happy to accept some imperfections in the process, just so long as I could quickly and dramatically cut down on the volume of harassment,” she says. She finds that the service has already blocked people harassing others she knows before she even has a chance to do so. She says:

It’s definitely made Twitter more livable for me, at least in the short term. I know that it might end up blocking a handful of people that I wouldn’t otherwise want to block, but when you get the kind of unwanted attention that I regularly receive, you just have to accept that you have to make little sacrifices like that for your peace of mind.

Sarah Brown, the former official in the UK, has had both offline and online abuse due to her openness as a trans woman in politics. She says that some of her Twitter harassers collaborated openly and then sent pickets to an event because she was speaking. “It gets a bit much when social media bullies start turning up to events one is at to harass!” she says.

She was initially skeptical of Block Bot because the social media attacks she has received came from what she describes as “mostly (but not exclusively) women,” and the group with blocking power is mostly women. However, the group is sympathetic to the kind of harassment she has received, and she, too, finds accounts blocked before she even knows an attack is underway. She’s now one of Block Bot’s team of blockers. (You can read Sarah Brown’s response in full on her own blog, too.)

Kristy Tillman was one of the black women feminists targeted as part of a campaign by 4chan that was a combination of lulz and provocation, trying to fool people in that self-defined group into fighting among themselves. “They were making dubious accounts over and over again and harassing people,” she notes.

Tillman says that when she joined the list, nearly every account she knew of that was part of this campaign was already blocked. In her case, she felt not so much harassed as annoyed, her time wasted. “I wanted to avoid being caught up in the 4chan trolling. I didn’t want those people bothering me,” she says.

Nearly everyone I spoke with directly or via social media recognizes that collaborative blocking is an imperfect, third-party method of dealing with a fundamental flaw in outlook at Twitter. Allen notes, “I can’t help but wonder what Twitter would look like if it had been designed by people who weren’t straight white men. Would there be a voluminous form to fill out, for example, when someone is harassing you? Probably not.”

Tillman, who works in interaction design professionally, says, “People who make software make software for people like them, and they don’t have the cultural competency to consider other use cases.” She adds, “I know what it’s like when no one’s in the room who can represent a particular view.” She notes that Twitter has been experimenting with @MagicRecs, which sends custom notifications about accounts, tweets, and topics that people you follow are engaged with. “When ten of my friends block someone, why don’t I get a notification of that?”
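Tillman’s suggestion is easy to sketch, though it is entirely hypothetical: Twitter exposes no such notification, so the data structures below are invented for illustration.

```python
from collections import Counter

THRESHOLD = 10  # Tillman's "ten of my friends" figure

def accounts_to_flag(blocks_by_friend):
    """blocks_by_friend maps each friend's ID to the set of accounts
    they block; flag any account blocked by THRESHOLD or more friends."""
    counts = Counter()
    for blocked in blocks_by_friend.values():
        counts.update(blocked)
    return [uid for uid, n in counts.items() if n >= THRESHOLD]
```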

Johnson has an expansive view about her project, Twitter’s response to harassment, and the utility of collaboratively finding a solution:

It’s no longer about any one product or solution, but a movement to make this a priority in the tech scene. Most of us in tech use Twitter. Many depend on it as part of their livelihood. And many people in tech are all too familiar with these problems. We can’t ignore it; it would be irresponsible of us not to. Twitter, the company, has maintained glacial, occasionally backward progress on this problem, so, as unfair as it may be, it’s up to us as a community to fix it.

Afternote on looking in the mirror

If you’re an American white, straight, cisgender male reading this, you may wonder why anyone would need such a tool. The few inconveniences you’ve experienced are easily dealt with by a click of a Block button. I fit that category of human, and I have experienced essentially no online harassment, on Twitter or elsewhere. Many women I know, and many people who fit intersections outside mine, experience regular, sometimes truly horrific and extreme online abuse, extending to often-credible threats of real-life violence. Even a harmless public tweet can be met with vitriol.

My friend Brianna Wu, who has written a number of times this year about a lack of women in games journalism and software development, gets rape threats for talking about essentially business-related issues. The better known you are as a white man, the more public respect and accolades you typically (not exclusively) accrue, and the more defenders. This is not true for others. Jessica Valenti, who writes widely about culture and politics, including feminism, tweeted a query about whether any country subsidized tampons or offered them for free, and was met with a stream of abusive replies.

If you don’t experience online harassment, abuse, and stalking, consider yourself lucky rather than stating it doesn’t exist.
