The European Commission has a well-deserved reputation for bizarre, destructive, ill-informed copyright plans for the internet, and the latest one is no exception: mandatory copyright filters for any site that allows the public to post material, which will algorithmically determine which words, pictures and videos are lawful to post, untouched by human hands.
These filters already exist, for example in the form of YouTube’s notoriously hamfisted “Content ID” system, which demonstrates just how bad robots are at figuring out copyright law. But even if we could make filters that were 99% accurate, this would still be a catastrophe on a scale never seen in censorship’s long and dishonorable history: when you’re talking about hundreds of billions of tweets, Facebook updates, videos, pictures, posts and uploads, a 1% false-positive rate would amount to the daily suppression of the entire Library of Alexandria, or all the TV ever broadcast up until, say, 1980.
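To make that scale concrete, here’s a back-of-the-envelope sketch in Python; the daily upload volume is an assumed, illustrative figure, not a measured statistic.

```python
# Back-of-the-envelope: how many lawful posts a "99% accurate" filter blocks.
# ASSUMPTION: the daily volume below is illustrative, not a measured figure.

daily_uploads = 500_000_000  # assumed posts/photos/videos filtered per day
false_positive_rate = 0.01   # the hypothetical 1% error rate

wrongly_blocked = daily_uploads * false_positive_rate
print(f"Legitimate items suppressed per day: {wrongly_blocked:,.0f}")
# => Legitimate items suppressed per day: 5,000,000
```

Even at an accuracy no real filter achieves, that works out to millions of lawful posts suppressed every single day.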
German Pirate Party MEP Julia Reda is fighting the Commission on this, and to help Europeans understand the stakes, she’s rounded up nine sterling examples of algorithms censoring legitimate communications in the name of copyright, including unique material of urgent public interest. Together, these form a taxonomy of the failure modes of Europe’s fully automated censorship future: the nine ways in which robots will destroy Europeans’ ability to communicate with one another.
3. Copyright claims from Mars
NASA’s recording of a Mars landing was identified as copyright infringement – despite the fact that NASA created it, and that, as a US government agency, everything it creates is in the public domain.
Had camera-shy space aliens filed a takedown request? No, it was a filter’s fault: The agency provided the video to TV stations. Some of these TV stations automatically registered everything that went on air with YouTube’s copyright filtering system – and that caused NASA’s own upload to be suspected of infringing on the TV stations’ rights.
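To see how this failure happens mechanically, here’s a toy Python sketch of a Content ID-style matcher; the names, fingerprints and logic are invented for illustration and aren’t YouTube’s actual internals.

```python
# Toy model of a Content ID-style filter, showing the NASA failure mode.
# ASSUMPTION: all names and fingerprints here are invented for illustration;
# this is not how YouTube's real matching system works internally.

reference_db = {}  # fingerprint -> whoever registered it as "theirs"

def register(fingerprint: str, claimant: str) -> None:
    """A broadcaster bulk-registers everything it aired, public domain or not."""
    reference_db[fingerprint] = claimant

def check_upload(fingerprint: str, uploader: str) -> str:
    """Block any upload matching registered content, with no human review."""
    claimant = reference_db.get(fingerprint)
    if claimant is not None and claimant != uploader:
        return f"BLOCKED: matches content claimed by {claimant}"
    return "OK"

# A TV station airs NASA's footage and auto-registers everything it broadcast...
register("mars-landing-footage", "SomeTVStation")

# ...so NASA's upload of its own public-domain video is flagged as infringing.
print(check_upload("mars-landing-footage", "NASA"))
# => BLOCKED: matches content claimed by SomeTVStation
```

The filter has no notion of who actually holds the rights – whoever registers first wins.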
Lesson: Public domain content is at risk from filters designed only with copyrighted content in mind, where humans are involved at no point in the process.
4. Take it, claim it, delete it
When the animated sitcom Family Guy needed a clip from an old computer game, they just took it from someone’s YouTube channel. Can you already guess what happened next? The original YouTube video was removed after being detected as supposedly infringing on Family Guy’s copyright.
Lesson: Automatic filters give big business all the power. Individual users are considered guilty until proven innocent: while the takedown happens automatically, reversing an illegitimate takedown requires a cumbersome fight.
5. Memory holes in the Syrian Archive
Another kind of filter in use on YouTube, and endorsed by the Commission, is designed to remove “extremist material”. It tries to detect suspicious content like ISIS flags in videos. It also found and removed tens of thousands of videos documenting atrocities in Syria, however – in effect silencing efforts to expose war crimes.
Lesson: Filters can’t understand the context of a video sufficiently to determine whether it should be deleted.
When filters fail: These cases show we can’t trust algorithms to clean up the internet
[Julia Reda]