Kate Klonick (previously) logged into Twitter to find that her trending topics were: "Clarence Thomas," "#MakeADogsDay," "Adam Neumann" and "#Lynching." (If you're reading this in the future: Thomas is the subject of a new documentary, and Trump had just provoked controversy by characterizing impeachment proceedings as a "lynching.")
She uses this as a jumping-off point to demonstrate the complexity of automated content moderation, raising six questions, starting with "Please imagine if the top trend and the bottom trend were next to each other" and "Please imagine if the bottom trending word was proximate and ABOVE the first trend."
But the real meat comes in the last two questions: "Imagine you are Twitter. What do you do about any of it? Do you delete the trends? Do you keep them up? Do you move the ad? Do you make sure those two trends NEVER get lined up together to prevent bad optics?" and "Now write a law that fixes all of this."
The platforms are sick and broken, and when something is broken it's tempting to do something — anything — and declare it fixed ("Something must be done; there, I've done something"). Pretending that this kind of dysfunctional moderation at scale is the result of negligence or intransigence (rather than, say, the curse of bigness) is not helpful.
6 Thought Experiments to Demonstrate the Difficulty of Content Moderation Regulation in 1 Screen Grab:
(1) Please imagine if the top trend and the bottom trend were next to each other
(2) Please imagine if the bottom trending word was proximate and ABOVE the first trend
1/ pic.twitter.com/36SUD2riQe

— Kate Klonick (@Klonick) October 22, 2019