That time my husband reported me to the Facebook police: a case study

[Stanford's Daphne Keller is a preeminent cyberlawyer and one of the world's leading experts on "intermediary liability" — that is, when an online service should be held responsible for the actions of its users. She brings us a delightful tale of Facebook's inability to moderate content at scale, which is as much a tale of the impossibility (and foolishness) of trying to support 2.3 billion users (who will generate 2,300 one-in-a-million edge cases every day) as it is of any specific failure. We're delighted to get the chance to run this after a larger, more prestigious, longer-running publication spiked it because it had a penis in it. Be warned: there is a willie after the jump. -Cory]

Those of us who study the rules that Internet platforms apply to online speech have increasingly rich data about platforms’ removal decisions. Sources like transparency reports provide a statistical big picture, aggregating individual takedown decisions.

What we mostly do not have is good information about the individual decisions. For the most part, we don’t know what specific statements, videos, or pictures are coming down. That makes it impossible to say how well platforms are applying the law or their own rules, or whether their overall decisions show bias against particular messages or particular groups. Instead, the public discussion is driven by anecdotes. The anecdotes almost invariably come from a very self-selecting group — people who are upset or have a political agenda.

This dearth of public information probably explains why my own husband decided to turn me in to Facebook for breaking their rules. He clicked on a link, made some choices from a drop-down menu or two, told them to take down one of my posts, and they did — permanently. At the time, I was mad. Now, though, I see this for the gift that it was. My husband was giving me an example, complete with screen shots, of the company’s process for handling a modestly complex judgment call about online speech. Thanks, honey. He was also illustrating the limits of content moderation at Internet scale.

Here’s what happened. San Francisco’s Legion of Honor Museum has a room devoted to Rodin sculptures, including an 1877 nude called The Age of Bronze. A large informational placard next to the piece shows a photo of the original model, a muscular naked man, posing in Rodin’s studio. I put a picture of the placard on Facebook, along with a very insightful and mature comment about how well San Francisco curators know their audience.

Sometime in the next two hours, my husband flagged the picture as inappropriate, and I got this notice:

The moderators decided the post broke Facebook’s rules. They offered me a chance to appeal the decision, which of course I did. That part took two minutes – I appealed at 7:20, and at 7:22 they confirmed that the picture “goes against our Community Standards on nudity or sexual activity.”

Was that the right call? Probably. Facebook prohibits most nudity. It has an exception for nudity in art (driven in part by this case) but apparently doesn’t have one for art history. And they don’t seem to have exceptions for some other categories my post might have fallen into – like “documenting museum practices,” perhaps, or “trenchant commentary about regional curatorial culture.” In an ideal world, I think they would have exceptions for all of these things. But it’s easy to see why they don’t. It’s hard enough to define what counts as art, train a global and culturally diverse workforce to apply that standard, and handle the innumerable opportunities it creates for errors and disputed judgment calls. In Facebook’s view, the benefit of preserving a handful of posts like mine seemingly wouldn’t justify the expense and complication of adding even a relatively simple category like art history.

This example is somewhat useful as a case study in content moderation. It’s more useful as an illustration of the limits of content moderation at scale. Defining a nuanced rule for a post like mine isn’t hard. Defining that rule and training a global workforce to apply it in a matter of seconds would be difficult. Repeating that exercise for every other kind of edge case – the Kurdish patriotic video that skirts the edge of advocating violent extremism, the parody that uses just a bit too much of a copyrighted song – is literally impossible. There are too many regional and cultural differences. “The future’s orange,” for example, means one thing to mobile phone users in the UK, and something else entirely to Catholics in Northern Ireland. Cultural meaning changes too much depending on the speaker, and the audience, and the passage of time – consider the word “queer,” just as one example. And as any content moderator or grade-school teacher can tell you, humans come up with new ways to offend or praise one another every day. We are ceaselessly innovative. No rule-set will ever account for it all, and no platform is foolish enough to try.

As a society, we won’t make smart laws about online content until we understand better what platforms can and can’t do. We need more examples like this one – a lot more – to tell us what is working, what’s not working, and what is fixable. But we also need to recognize that when our speech falls into interesting gray areas of law or platform policy, it will always be dealt with by blunt and clumsy rules. As long as platforms face more pressure to take things down than pressure to leave things up, the outcomes in those cases will look like this one.



Daphne Keller (@daphnehk) is the Intermediary Liability Director at the Stanford Center for Internet & Society. Her recent work focuses on legal protections for users’ free expression rights when state and private power intersect, particularly through platforms’ enforcement of Terms of Service or use of algorithmic ranking and recommendations.