Thailand's insane lèse-majesté laws make it radioactively illegal to criticize the royal family. They reflect a profound insecurity about the legitimacy of the country's ruling elites, one that can only be assuaged with blanket censorship orders whenever a royal does something ridiculous, cruel, or both (this happens a lot).
The latest scandal is more ridiculous than cruel: a video of the king of Thailand cruising a mall in Munich in a hilarious crop-top. But the aftermath was cruel enough: a series of overbroad censorship orders targeting both independent Thai websites and social media services like Facebook.
Here's where it gets interesting. Facebook and its rivals have opened themselves to censorship by opening offices in basket-case states like Thailand and Russia, putting their employees within reach of those nations' corrupt, torture-prone justice systems. But they've also deployed HTTPS, which encrypts connections to their services and makes it much harder (often impossible) for law enforcement and spies to determine which pages their citizens are visiting, even under extensive surveillance. And they've implemented "user notification" systems that tell users when governments have ordered certain content blocked within national borders.
The effect is fascinating: when someone in Thailand gets a link to a video the royals don't want seen, they're notified that they're not allowed to look at it. But there are plenty of copies of the video that the government hasn't yet found and subjected to censorship orders, so a quick search is all it takes to find a mirror on Facebook (or another social network), and users can watch those mirrored copies with relative impunity: HTTPS reveals to network observers only that they're connected to Facebook, not which Facebook pages they're looking at.
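To make the "which pages" point concrete, here's a minimal sketch of what a passive network observer can and can't see when a connection uses HTTPS. The example URL is hypothetical; the split reflects the general rule that the hostname still leaks (via DNS lookups and the TLS SNI field) while the path, query string, and page contents travel inside the encrypted channel.

```python
from urllib.parse import urlsplit

def observer_view(url: str) -> dict:
    """Split a URL into what a passive on-path observer can see
    (the hostname) and what HTTPS encrypts (the path and query)."""
    parts = urlsplit(url)
    return {
        "visible_to_observer": parts.hostname,
        "encrypted": parts.path + ("?" + parts.query if parts.query else ""),
    }

# A hypothetical link to a mirrored copy of a censored video:
view = observer_view("https://www.facebook.com/some.user/videos/12345?ref=share")
print(view["visible_to_observer"])  # www.facebook.com
print(view["encrypted"])            # /some.user/videos/12345?ref=share
```

The censor is left with a blunt choice: block all of facebook.com or none of it, since the specific video page is indistinguishable from any other Facebook traffic.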
This has wide implications for other repressive states attempting to control the spread of dissident information within their borders (for example, Theresa May's promise to comprehensively censor the UK internet). To make censorship work, states will need to suppress working encryption. Some former Soviet states in the Caucasus make it a crime to use a browser unless it carries a domestically issued root certificate that lets spies mount man-in-the-middle attacks on HTTPS and read all private communications, though Certificate Transparency will make that approach harder and harder to sustain.
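One way a client can notice this kind of state-mandated interception is certificate pinning: compare the fingerprint of the certificate a server actually presents against a known-good value recorded earlier or out of band. The sketch below uses Python's standard library; `EXPECTED_PIN` is a hypothetical placeholder, not a real fingerprint.

```python
import hashlib
import socket
import ssl

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def fetch_server_cert(host: str, port: int = 443) -> bytes:
    """Return the DER-encoded leaf certificate a server presents.
    Chain validation is deliberately disabled here: a MITM certificate
    signed by a coerced, locally installed root would pass normal
    validation anyway, which is exactly the attack we want to detect."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

# Usage sketch -- EXPECTED_PIN is a hypothetical known-good value:
# if fingerprint(fetch_server_cert("www.facebook.com")) != EXPECTED_PIN:
#     print("certificate changed: possible interception")
```

Certificate Transparency attacks the same problem at scale: because publicly trusted certificates must appear in public, append-only logs, a secretly issued MITM certificate for a major domain becomes visible evidence rather than an invisible wiretap.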
They will also have to forbid user notification, making all their censorship orders secret, because merely knowing that something has been censored is enough information to go find a copy of it. This is the approach already taken by "child porn filters," which overwhelmingly and secretly block sites that have nothing to do with the sexual abuse of children.
HTTPS and user notification systems massively raise the political and technological costs of censorship, but there are countermeasures states can deploy. At best, these technological fixes buy us the breathing room to organize and effect political change that makes them less necessary, while using commercial pressure to get companies to pull out of repressive states where political change is slow. (Imagine if Facebook had a big office and lots of execs in Iran and counted on the country for appreciable revenue: it would be nearly impossible to get the company to buck the regime.)
As Larry Lessig taught us, political change arises from shifts in four realms: the law (what's legal?), markets (what's profitable?), code (what's technologically possible?) and norms (what's socially acceptable?).
In an ideal world, timely and informative user notice can help power the Streisand effect: that is, the dynamic in which attempts to suppress information actually backfire and draw more attention to it than ever before. (And that’s certainly what’s happening with the video of the King, which has garnered countless international media headlines.) With details, users are in a better position to appeal to Facebook directly as well as draw public attention to government targeting and censorship, ultimately making this kind of censorship a self-defeating exercise for the government.
In an HTTP environment where governments can passively spy on and filter Internet content, individual pages could disappear behind obscure and misleading error messages. Moving to an increasingly HTTPS-secured world means that if social media companies are transparent about the pressure they face, we may gain some visibility into government censorship. However, if they comply without informing creators or readers of blocked content, we could find ourselves in a much worse situation. Without transparency, tech giants could misuse their power not only to silence vulnerable speakers, but also to obscure how that censorship takes place—and who demanded it.
ONLINE CENSORSHIP AND USER NOTIFICATION: LESSONS FROM THAILAND
[Gennie Gebhart/EFF]