As I write this, I am in the middle of the Mission District of San Francisco. Outside my window, my tech worker colleagues are gathering on the corners, waiting for buses. These people are my community, some in my own field of IT security. Every few minutes a special bus goes by to gather them up and carry them to their offices, drastically restricting their interactions with the rest of the world. This is a problem.
Take it from one of those professionals: things out on the internet are bad, and everything is on the internet. Very few people have a comprehensive picture of the problem, and the few who understand fall into one of three groups. There are those who want to make money off the disasters and don’t care; the governments and institutions who want to use these problems to expand their power; and, I hope, the largest but (for now) least powerful group: the techies who want to fix the infrastructure and inform the public. This essay is written for this last group, and for the public who needs us.
The network is in terrible shape. Stolen identities float around by the millions. The private information on your phone and computer is all over the internet, in thousands of badly secured servers; we call this the cloud. The Internet of Things and infrastructure are often reachable from the public network. Very few people care about fixing any of it. All that stuff is as boring as the state of the physical infrastructure in the USA: nobody wants to invest money in it.
Taking Advantage Instead of Taking Care
Honestly, you, the non-technical people, have good reasons to be scared, because things are scary. Since the beginning of the Snowden revelations, you have seen with much more accuracy what techies out of control, with unlimited funding and few moral principles, can do to your privacy, and to democracy as a whole.
The governments of the world weren’t frightened by the surveillance capabilities revealed by Edward Snowden; they saw what they could have been doing to their internet all along. The NSA documents read like a wish list for governments and law enforcement agencies. Now, all over the world, governments have new surveillance features to ask their techies to build for them. Very few countries were really pissed off by what the NSA did. They are all working as fast as possible to create their own fancy cybersecurity initiatives, having realized how far behind the USA they were. They explain that they absolutely need more surveillance and censorship, especially after a few articles in the press reveal that this agency or that important company dealing with “Critical Infrastructure” has been compromised, or is vulnerable to compromise. The problem is that all those new organizations with fancy new cyber names have the same goal as the NSA: more surveillance, more control over the networks, less privacy for citizens, and little to no concern for real security.
Leaders and bureaucrats are often looking for two things: quantifiable outcomes and fewer messy humans in the loop. That’s why our decision makers concentrate on fancy automated systems: video cameras on streets, raw internet traffic collection, data retention laws, and so on.
It all ends up in huge databases, managed by private or public organizations, containing outdated, private, illegally obtained, and/or simply wrong information about you. Everything is data. People love data: they can count it and generate nice data visualizations, even when it’s useless.
We know that watching everything just because you can is not okay in daily life. If we do it to strangers, it’s illegal stalking. If we do it online, with a computer between us and our “target”, and we work for a big organization, it somehow becomes okay.
None of this is real security, which continues to languish. The data keeps leaking and the computers keep breaking. Instead of working on fixing the infrastructure (also known as the internet in general), our leaders increase the offensive capabilities, because we believe the best defense is a good offense.
This is like building a nuclear power plant on the slope of a volcano and then protecting it by bombing a neighboring country.
Everybody’s Fault
Our leaders can only push their scary-bad ideas because most people don’t understand technology. This includes not only the citizenry, but politicians, lawyers, and journalists. These people, who are meant to protect the public, don’t have the understanding to come up with the arguments to fight bad ideas at the source, before they become law.
This situation leads to a massive misunderstanding between technical folks and the rest of the world. The outcome is more antagonism, more distrust, and a desire to make more things illegal, in the hope that all those complex issues will disappear. This response is understandable: to grasp something new and scary, we reach for existing tools, even when they don’t fit. Old laws and old metaphors are rarely the right way to explain, or regulate, how the internet works.
This problem is partially the techies’ fault, because we dislike letting outsiders into our techie clubs. But it’s not entirely our fault: understanding what we are doing means doing some homework. One does not pick up a new musical instrument and start playing at the level of a professional musician. There are concepts a newbie needs to tackle to understand the network our world runs on, but right now they are largely on their own figuring that out. Few people even know where to start.
Security Research is Not a Crime (or at least it shouldn’t be)
Without the rest of society having any idea how the net works, no one makes the right security choices. Something actually useful, such as code review to find (and fix…) vulnerabilities, is largely neglected, even in code everyone in the world relies on. If we researchers look at the code ourselves, we will at best get an entry in the Common Vulnerabilities and Exposures (CVE) system, which means everyone is informed and has the possibility to patch before the vulnerability spreads too much, we hope. If we are lucky, hit upon a catchy name, and talk to the press, it might get fixed, even quickly. That’s what we call responsible disclosure: the researcher contacts the people with the vulnerable code and helps them before the vulnerability goes public.
If we are less lucky, we are asked more or less nicely by an army of lawyers to STFU, and the flaw never gets fixed. This is more common; “responsible disclosure” has no equivalent “responsible response”. Either way, not much happens, everyone forgets about it within a couple of months, and a researcher’s life may or may not have been destroyed.
All that, because securing computers is not sexy. Jobs like mine try to make sure nothing ever happens, and if we can stop the shit before it hits the fan, we do it. And we often get blamed when it hits the fan anyway, especially if we had the terrible idea of asking too many questions.
There are two main consequences of putting us, the security researchers, in a position where we cannot do research without risking criminal charges and civil suits. The first is the growth of the dark-gray market, where people with less acceptable moral principles can sell, and buy, the discoveries. The second, if a researcher doesn’t want to risk their discoveries ending up only in the wrong hands and never being fixed, is anonymous full disclosure, where the findings are simply released, publicly. And then we watch everyone freak out.
This is a terrible situation. It makes people feel powerless, because it seems like all the power of big corporations and governments is arrayed against us. Governments have made sure to keep the upper hand, while doing what they always do to keep their populations happy: speeches, and a bunch of new laws targeting edge cases or anyone not working to their advantage.
Finding 0days is bad, unless you sell them to governments (especially your own). Creating malware is bad, but if you sell it to the police, it becomes okay.
Accountability is Culture
Why aren’t we holding those organizations and governments accountable? Because we have no grip on them. When we talk about the police, we should be talking about police officers. When we talk about government, we should be talking about politicians. When we talk about intelligence agencies, we should be talking about… those people working there. We don’t even have a proper word for them. The members of those agencies are not identifiable human beings from outside their institutions, and increasingly it seems we are not either, viewed from inside their bubble: we are simply a noisy crowd generating data to gather.
What we need now is a social shift. It is going to be hard, take time, and require a lot of talking. Being able to access anything valuable or private isn’t inherently bad, and it doesn’t mean a person with access will always do something bad, just because they can.
I’m not talking about legality, because a government that wants a database to be legal for its own use will find a way to make it so, whether it’s wrong or not. We need to accept that there are people using those databases. But an employee can decide that the work of the organization, even the one that pays their bills, is not ethical. This is the same as a soldier who has to disobey orders when their own country is committing war crimes. That soldier may be punished or even shot if caught, although they will likely be vindicated by history. An IT guy will merely lose an awesome paycheck at the end of each month; not quite as bad.
I don’t think any government or other central entity can solve this problem. It is a problem of the whole society, and we have to solve it together. Part of that solution is for the people behind the computers, the ones digging into the lives of others, to become more conscious of what they are doing.
An example: when I start a forensic investigation on a malware case, I have a snapshot of the memory and a disk acquisition. Say it’s your computer: it may have had everything running, private and company documents, mail, browser tabs logged in to social network accounts, basically your whole life.
At that point, I have the technical capability to extract the passwords of all the social accounts and mailboxes, and to read all the private messages, and nobody will know I did it. Say the computer was infected by 3 different pieces of malware (not uncommon): any of those attackers could have been using their malware for months, from behind a Tor exit node, to stalk the victim. Will I abuse this access? Nope. Can I prove any abuse didn’t come from me? Nope. Should the investigation be forbidden? I don’t think so. Should I have signed a Non Disclosure Agreement (NDA) reviewed by my lawyer, your lawyer, and amended 10 times over 2 weeks? Well, that’s your problem, you are the one compromised, I can wait.
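To see how thin that technical barrier really is, here is a minimal sketch of the kind of thing anyone holding the image could do, nothing smarter than a pattern search over the raw bytes. The file name and the patterns are made-up assumptions for illustration, not my actual tooling:

```python
import re

# Naive credential sweep over a raw memory image. "memory.dmp" and the
# patterns below are illustrative assumptions; real investigations use
# dedicated forensic frameworks. The point stands either way: whoever
# holds the image can pull secrets out of it with a few lines of code.
CREDENTIAL_HINTS = re.compile(
    rb"(?:password|passwd|pwd)\s*[=:]\s*[^\s\x00]{4,40}",
    re.IGNORECASE,
)

def sweep(image_path: str, chunk_size: int = 64 * 1024 * 1024) -> None:
    tail = b""  # keep an overlap so matches spanning chunk borders survive
    with open(image_path, "rb") as image:
        while chunk := image.read(chunk_size):
            for match in CREDENTIAL_HINTS.finditer(tail + chunk):
                print(match.group(0)[:80].decode("latin-1", "replace"))
            tail = chunk[-256:]

if __name__ == "__main__":
    sweep("memory.dmp")  # hypothetical image from the acquisition
```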
NDAs in IT security are bullshit. Always. If I wanted to hurt you, I just could. I only have your data because an attacker was on your machine. If your data ends up on the internet, I can easily blame that person.
Or you can simply trust that I will do ethical work and let me start immediately. And that’s the tricky part: you will never know who is going to do honest work and who is not, because we’re still all humans. And on top of all that, you can’t even be sure I will find anything that can help you.
Having the capability to do something harmful does not mean everyone is going to do it.
Certification and monitoring are not the answer. A person being able to work on their own, without being vouched for by some central entity, is the foundation of research and science. We don’t want to go backwards on that and require special authorization for the right to use nmap or Metasploit, any more than we want scientists to clear their climate discoveries with Congress.
Nowadays, governments and civil society sometimes target independent security researchers because we aren’t really accountable. This is true, but we don’t even have a framework of accountability yet. Some researchers do bad, but most do good, and we need to support the latter while having a talk with the former.
Maturing as a Field
Being ethical is hard. You can’t measure it, it takes time to learn, and you may never know when someone fucks up. But being human is hard, and so is life; if you want to get stuff done, reducing everything to paperwork will not save your ass for long.
Now is the time for us in IT to think about some kind of code of conduct. We desperately need something we can refer to when we are not sure what should be happening, or how we should respond to some event in the world. But for that to work, we also need to accept that we are a political group with some real power, not just a bunch of kids playing with bytes.
I’m not arguing for legally enforced rules and I don’t want the debate to go in this direction. We are not lawyers, we are hackers, and we know that any kind of rules can, and will, be bypassed. It is our job.
The laws are comparable to the technical limitations a developer puts on a web page to stop you from reading it without entering your email address: disable JavaScript and it just works. Our job, and most of the time also our hobby, is to bypass those limitations. Not to do anything bad, but because it is fun. Sometimes we even tell the developers how they should do it differently, to be safer.
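To make the analogy concrete, here is a minimal sketch of bypassing such a gate without even opening a browser. A client-side gate usually ships the whole article in the HTML and only hides it after the page loads; the URL below is a made-up assumption standing in for any such page:

```python
import urllib.request

# A client-side "enter your email to keep reading" gate typically hides
# content with a script that runs after the page loads. Fetching the raw
# HTML never runs that script, so the text is simply there.
URL = "https://example.com/gated-article"  # hypothetical gated page

with urllib.request.urlopen(URL) as response:
    html = response.read().decode("utf-8", errors="replace")

# The article text is right there in the markup; the overlay never ran.
print(html)
```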
Obviously, saying “Trust us, we are just a bunch of loud dorks with weird hobbies, looking at vulnerabilities in your infrastructure. Most of us are nice but some of us are going to sell them to random people,” doesn’t make anyone more comfortable.
What we need now is to see IT security become a profession. Even if we love to think we are the only ones in such a situation, holding a lot of crucial knowledge on a very specific topic, we aren’t. Lawyers, doctors, priests, and journalists, for example, have similar responsibilities. But as those professions are much older than ours, their practitioners have had more time to think about this problem. They found solutions, imperfect but livable for society, in the form of codes of conduct: ethical codes that can be referenced when everything goes bad.
For the lawyers, every association has its own code of conduct, huge documents covering everything from confidentiality rules to how to deal with colleagues from other countries to how much they can be paid. Let’s be serious: few of us will allocate enough time to write that kind of document, and no one would read it.
On the other hand, if you look at journalists, they have something called the Munich Charter, one example of a code of ethics. It is a set of 15 sentences, with no legal force. Enforcement is by the community: when you fuck up hard, you are not a journalist anymore. It has a lot of flexibility, or loopholes if you prefer, but it sets out the basic principles.
Doctors have the same kind of idea with the Hippocratic Oath.
I would love to see a similar thing happen in IT security: our field needs to become a profession, with basic rules, lines in the sand that we all know should not be crossed, instead of a letter of the law that can remain unbroken while all of us hackers work out how to violate its spirit as fast as we can.
That’s the reason I want something that people in the community agree on, something that can evolve fast enough that we can call out the ones behaving unacceptably without having to wait for a legal framework to bootstrap every time someone is being a dick, or selling 0days to abusive governments.
But to make this plan work, we need a real community of people who care about the world. We need to go from a mostly technically oriented group to a more social one. I think we are slowly but surely getting there: we are all very lazy, and we want to know what is going on elsewhere, because the most annoying thing is redoing something a colleague has already done. That’s why there are so many conferences everywhere. Even if we may not be the most socially adept people around, we love to talk, and we love to share ideas with our peers. At some point we, as a community, have to decide what we want to be associated with, and what we don’t. If a member, or a set of members, is not acting as a responsible part of the group, we have to call them out: if possible, to get the wayward back on track, and if not, to exclude them from our community.
Ultimately, what we need is to raise the consciousness of our friends and peers. Let’s use the nasty words: we need to be more political, more responsible, and to use our power for good.
There are many forces, corporations and governments among them, that don’t want us to become a real community. They want to keep us in our bubble and have us do their dirty work. We are kept apart the same way the Mission’s buses keep the IT workers in theirs. We all have to get out and deal with the world the way it is: messy, complex, and full of humans.