Since 2013, when the W3C decided to standardize DRM for web videos, activists, security researchers and disability rights advocates have been asking the organization what it plans to do about the laws that make it illegal to bypass DRM, even to add features that help blind people, improve browsers, or simply point out the defects in browsers that put billions of web users at risk.
EFF proposed that the W3C should get out of the legal arms-dealing business by making its members promise not to use DRM standardization as a way to get new legal rights to sue people for legitimate, legal activities like reporting security defects, or shifting the gamut in videos for color-blind people, or adding innovative features to browsers.
But the W3C hasn’t just rejected those suggestions, so far: they’ve decided to double down on the creation and legitimization of legal weapons through the standards process. In a new proposal, the W3C executive has suggested that W3C members should take the new right to sue security researchers that the organization is handing them and craft voluntary guidelines for when they get to use it.
It’s shocking: the W3C has shifted from unintentionally giving companies the right to silence whistleblowers to asking those companies to agree on some loose guidelines about which whistleblowers will be permitted to speak.
It’s one thing for the W3C to come up with best practices for responsible disclosure: it’s another thing altogether for the organization to give giant tech and entertainment corporations the power to bankrupt security researchers who don’t follow whatever guidelines those companies come up with.
Companies can and should develop bug bounty programs and other ways to work with the security community, but there’s a difference between companies being able to say, “We think you should disclose our bugs in this way,” and “Do it our way or we’ll sue.”
In almost every circumstance, in almost every country, true facts about defects in products are lawful to disclose. No one — especially not the companies involved — gets to dictate to security researchers, product reviewers and users when and how they can discuss mistakes that manufacturers have made. Security facts, like most facts, should be legal to talk about, even if they make companies unhappy.
By its own admission, the W3C did not set out to create a legal weapon that would give companies the unheard-of power to force silence upon security researchers who have discovered critical flaws in products we all entrust with our financial details, personal conversations, legal and medical information, and control over our cameras and microphones.
Considered separately from DRM standardization, this new project would be most welcome. The W3C is just the sort of place where we’d like to see best practices guidelines for offering incentives to use managed disclosure processes.
But in creating a DRM standard, the W3C has opted to codify and reinforce the legal weaponization of DRM law rather than dismantle it. It’s bad enough that the W3C has summarily dismissed the concerns of new entrants to the browser market and of organizations that provide access for disabled people; on security, it has gone even further, departing from the technological standards business to become a legal arms dealer.
Indefensible: the W3C says companies should get to decide when and how security researchers reveal defects in browsers
[Cory Doctorow/Deeplinks]