Troy Hunt, proprietor of the essential Have I Been Pwned (previously), sets out the hard lessons learned through years of cataloging the human costs of breaches at companies that overcollected their customers' data, undersecured it, and then failed to warn their customers that they were at risk.
Of real interest in Hunt’s excellent primer is the section on dealing with security researchers: setting up dedicated bug-reporting forms with bug bounties, PGP keys, and other enticements to do the right thing.
It’s advice that more companies could stand to take, but alas, things are going in the other direction. Security researchers normally have the right to choose the time and manner of their revelations about defects in products (telling the truth about security vulnerabilities is covered by the First Amendment!), so companies need to offer enticements to researchers in order to get them to disclose in a way that the company can manage. First among these enticements is the credible promise that the company will do something about the vulnerabilities that public-spirited researchers bring to them.
Alas, this is far from standard practice. Even companies like Google squat on high-risk, showstopper bugs for months without taking action on them, prompting researchers to go public to warn customers that they're trusting insecure products from negligent manufacturers.
In the balance between security researchers and companies, disclosure is the only real source of leverage. Companies know that ignoring security researchers could lead to uncontrolled disclosures, liability lawsuits, bad press, and damage to their reputations.
That balance is now shifting sharply away from security researchers, thanks to DRM. Laws like Section 1201 of the DMCA establish potential criminal and civil liability for people who reveal defects in systems that restrict access to copyrighted works, even when those disclosures are made to warn the people who depend on these systems that they are defective. Security researchers have found themselves unable to come forward with reports of defects in medical implants, thermostats, voting machines, cars, and more.
To make matters worse, the World Wide Web Consortium is standardizing a DRM system for the web that is set to be mandatory in billions of browsers, and rather than protecting security researchers from liability for disclosing dangerous, compromising defects in this system, it is creating a voluntary set of guidelines for members to consult when deciding whether to sue researchers under the new rights they're getting from this standard. Hundreds of the world's top security researchers (including internal W3C technical staff) have written to the organization asking it to balance out this DRM with modest protections for security disclosures.
But so far, the W3C has not taken up this cause. W3C members — many of whom would gain the power to censor reports of their products' defects — are voting on this, with the poll closing on April 19.
This sounds like a ridiculously obvious thing to say, but it’s something which can be enormously frustrating for those of us disclosing vulnerabilities ethically and enormously destructive when contact is coming from shadier parties. Let’s look at some examples of each.
In October 2015, someone handed me 13M records from 000webhost. I tried to get in touch with them via the only means I could find:
I won't delve into all the dramas I had again here; suffice to say I tried for days to get through to these guys, and their support people acknowledged my report of a serious security incident yet couldn't get me in touch with anyone who could deal with it appropriately. I eventually went to the press with the issue and the first proper response they made was after reading about how they'd been compromised in the news headlines.
This attitude of failing to take reports seriously and deal with them promptly is alarmingly common. Let's go back to CloudPets again for a moment; their CEO gave a journalist this comment:
@lorenzoFB “We did have a reporter, try to contact us multiple times last week, you don’t respond to some random person about a data breach.
Yes you do! Random people are precisely the kind you want to respond to because they’re the ones that often have your data!
Data breach disclosure 101: How to succeed after you’ve failed
[Troy Hunt]