
The Paradox of Tolerance: should intolerance be tolerated?



With the rise of white nationalist groups whose allies in government extend all the way to the President of the United States, tech companies are finding themselves in the uncomfortable position of deciding where tolerance begins and ends — where they have a duty to step in and silence certain kinds of speech.


This is a very thorny question.


Today’s strain of revisionist critique of “internet utopians” holds that once upon a time, dumbasses who’d read too many issues of the Whole Earth Review believed that the internet would bring us all together, if only speech were unfettered and free.


The reality is that organizations like the Electronic Frontier Foundation came into being precisely because the people who were excited about the internet also perceived how it could be a terrible force in the world, bringing surveillance, harassment and invasive attacks into our lives in the most intimate and terrible ways. These organizations were founded to guard against these outcomes — not in blithe denial of them.


One important insight from those days is that the internet helps minority and suppressed groups come together to form networks and coalitions, overcoming the transaction costs of finding people with the same outlier ideas as you. The internet helps people with weird hobbies find each other (manga, science fiction, making); it helps people with socially frowned-upon sexual identities find each other (kink, LGBTQ, poly); it helps people with suppressed politics find each other (anti-capitalist activists, Arab Spring dissidents); it helps people with weird sensibilities find each other (Boing Boing, Something Awful); and it helps people whose fringe ideas we hate come together, too (white nationalists, troll armies, flat Earthers, Holocaust deniers, anti-vaxxers, etc).


There’s a folk-tale that holds that pornographers are prolific early technology adopters, first to use 8mm film, new printing techniques, Polaroids, VCRs, and the internet. The reality is that mastering each of these technologies cost something in terms of time and money, and the people whose communications were easy and cheap based on the old technologies had no reason to expend the effort to master the new ones. But people whose communications were expensive — whose literature was seized in the mail, whose sales channels were subject to raids and shutdowns, whose practitioners could be sued or jailed — were already paying a high price to communicate. For people whose programs could be aired on network TV or shown in national theater chains, VCRs were a stupid, distracting waste of time — but for people who were smuggling 8mm films around in plain brown postal envelopes, VCRs were a godsend.


That’s why pornographers adopt new technologies before anyone else — it’s also why kids, terrorists, criminals and weirdos (ahem) jump on technological bandwagons ahead of others. They’re already bearing high communications costs, so the costs of learning a new technology are a savings, not an additional expenditure. This is also why assassination markets, tax evaders and drug dealers figured out Bitcoin ahead of everyone else — they weren’t allowed to use PayPal, BACS or SWIFT.

Big Tech got big because it can take advantage of network effects (the more people there are on a platform, the more valuable it is, to a first approximation). It also got big because tech came of age just as the Reagan-era, Chicago-School-led dismantling of anti-trust was really kicking in, allowing a few lucky companies to become global monopolists with unimaginable and unhealthy vertical and horizontal integration.
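
As a rough gloss on what that first approximation means in numbers (this is the classic back-of-the-envelope model, offered as my own illustration rather than anything from the piece): the number of possible connections among users grows roughly with the square of the user count, so each tenfold jump in users produces roughly a hundredfold jump in possible connections.

```python
# Hypothetical back-of-the-envelope arithmetic for network effects:
# the number of possible pairwise connections among n users is n*(n-1)/2,
# which grows roughly quadratically in n.
def possible_connections(n: int) -> int:
    return n * (n - 1) // 2

for users in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{users:>9,} users -> {possible_connections(users):>15,} possible connections")

# Each 10x increase in users yields ~100x more possible connections,
# which is the intuition behind "the more people, the more valuable".
```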


As a result, Big Tech is big. So big that it can’t possibly examine all the communications and transactions on its platforms. YouTube’s now getting north of 200 hours of video every minute — there literally aren’t enough copyright lawyers in the entire history of the world to police all that video, and automated enforcement systems are stupid, terrible and rife with abuse.


Which brings us to tolerance and intolerance. Tech has historically been tilted towards erring on the side of more speech over more censorship, partly out of the ideology that free speech is a good in and of itself, imperfect but better than the alternative, and partly because policing speech at (Big Tech’s) scale is a technical and logistical impossibility, and so any system to separate good speech from bad speech (even assuming some clear division between them) will either let a lot of bad stuff through, or stop a lot of good stuff (or, realistically, both).

(Advocates for internet controls in service of stopping harassment have a noble cause, but they go off the rails when they say things like, “Google can already stop copyright infringement on YouTube, why can’t they stop harassment, too?” and “Our schools manage to prevent kids from seeing porn, so why can’t we stop neo-Nazis from using Twitter?” YouTube’s copyright detection system falsely flags several Libraries of Congress’ worth of legitimate material every year, and allows several more LoC’s worth of infringing material through — and kids are routinely censored from seeing legitimate material, including a large slice of the primary resources needed to complete the Common Core; and they’re also able to stumble onto mountains of explicit pornography.)

The bigness of Big Tech means that a few large companies hold the power to enable or block a large slice of our public discourse. It’s true that the First Amendment primarily refers to government censorship, but there’s more nuance than that. When Comcast has a government-granted monopoly over internet service in your city and decides to block a site, it’s a private action, but the First Amendment is definitely implicated.


In a similar vein, when the state stands aside and lets a company muscle all the competition out of a market, eschewing its anti-trust powers, the blocking decisions that company makes have far-ranging impacts that go beyond mere private action.

That’s why when Cloudflare decided to kick out the Daily Stormer, EFF Executive Director Cindy Cohn wrote an important, nuanced piece about the implications for all kinds of speech on the web when the largest gatekeepers start to exercise the censor’s pen: “It might seem unlikely now that Internet companies would turn against sites supporting racial justice or other controversial issues. But if there is a single reason why so many individuals and companies are acting together now to unite against neo-Nazis, it is because a future that seemed unlikely a few years ago—where white nationalists and Nazis have significant power and influence in our society—now seems possible. We would be making a mistake if we assumed that these sorts of censorship decisions would never turn against causes we love.”

In the run-up to last summer’s white supremacist rally in Charlottesville, VA, Airbnb kicked suspected white supremacists off of its platform, canceling the reservations they’d made for housing during the event. I think that was the right thing to do, but it raises thorny questions about how and when Airbnb should exercise its power to decide who can be housed where (remember that in the USA, there is a long history of denying lodgings on the basis of race and belief, and that this has already been a serious problem for Airbnb).

To Airbnb’s credit, they are serious about coming to grips with this. They engaged Valerie Aurora’s Frame Shift Consulting to help them codify their principles. In a 2017 presentation, Aurora proposed that the only thing tolerant societies couldn’t afford to tolerate was intolerance, something commonly called the “Paradox of Tolerance,” and set out four principles for deciding whom to kick off a platform:


If the content or the client is:

1. Advocating for the removal of human rights
2. From people based on an aspect of their identity
3. In the context of systemic oppression primarily harming that group
4. In a way that overall increases the danger to that group

Then don’t allow them to use your products.


There is an admirable compactness to this, and Aurora’s presentation goes into detail about adjudicating seeming contradictions between these principles.
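
To make the shape of that rule concrete, here is a minimal sketch of the four-part test as a simple conjunction (the code and all of its names are my own illustration, not Aurora’s or Airbnb’s): every one of the four principles has to hold before the “don’t allow” outcome applies, and the genuinely hard part, the human judgment encoded in each boolean, lives entirely outside the code.

```python
# A sketch of the four-principle test as a conjunction. The four judgments
# themselves are the hard, human part; this only shows that all four must
# hold before service is denied.
from dataclasses import dataclass

@dataclass
class ContentAssessment:
    advocates_removal_of_human_rights: bool   # principle 1
    targets_group_by_identity: bool           # principle 2
    context_of_systemic_oppression: bool      # principle 3
    increases_danger_to_that_group: bool      # principle 4

def should_deny_service(a: ContentAssessment) -> bool:
    """Deny service only when all four principles are met."""
    return (
        a.advocates_removal_of_human_rights
        and a.targets_group_by_identity
        and a.context_of_systemic_oppression
        and a.increases_danger_to_that_group
    )

# Example: content that targets a group and increases danger to it, but does
# not advocate stripping anyone's rights, fails principle 1, so this rule
# alone would not trigger removal.
example = ContentAssessment(False, True, True, True)
assert should_deny_service(example) is False
```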


It’s certainly given me a lot to think about. I remain concerned about scale: even if this principle is as sound as it appears at first blush, it’s still not one that can be readily automated, and even a company like Airbnb (which handles a fraction of the volume of Twitter, Facebook or YouTube, or even Reddit) would be hard-pressed to thoughtfully adjudicate all the instances in which these principles are implicated.

One thing we know about the suckiness of automated enforcement tools is that they are asymmetrically imperfect. It’s not just that they’re tuna nets that catch a few dolphins — that is, they don’t just accidentally scoop up “good speech” and let through “bad speech” at about the same rate. Rather, they can be made to fail by the application of study and resources. The flaws in these systems can be systematically mined and evaded, which means that well-organized, well-resourced bad actors can get lots of “bad speech” through them, while the accidentally caught “good guys” are more likely to be acting individually, without access to reverse-engineered flowcharts about evading the filters.

That’s how, for example, organized racists can create Facebook groups that are dedicated to racially stereotyping black children, but black people who complain about the cops get kicked off the service. The white supremacists are organized enough to figure out how to skirt the rules without technically violating them, whereas heartbroken black people expressing themselves about police shootings aren’t prone to first closely parsing the Facebook speech rulebook.

I don’t know how we resolve this conundrum. Cohn’s essay counsels that platforms should “implement procedural protections to mitigate mistakes,” and the principles that Aurora provides feel like good ones.


But I’m still not sure how we make them work at scale.

The Paradox of Tolerance says that a tolerant society should be intolerant of one thing: intolerance itself. This is because if a tolerant society allows intolerance to take over, it will destroy the tolerant society and there will be no tolerance left anywhere. What this means for tech companies is that they should not support intolerant speech when it endangers the existence of tolerant society itself.

The Intolerable Speech Rule: the Paradox of Tolerance for tech companies [Valerie Aurora/Frame Shift Consulting]


(via 4 Short Links)
