
The Coming Civil War over General Purpose Computing

By Cory Doctorow

Even if we win the right to own and control our computers, a dilemma remains: what rights do owners owe users?

This talk was delivered at Google in August 2012, and for The Long Now Foundation in July 2012. A transcript of the notes follows.

I gave a talk in late 2011 at 28C3 in Berlin called “The Coming War on General Purpose Computing”.

In a nutshell, its hypothesis was this:

• Computers and the Internet are everywhere and the world is increasingly made of them.

• We used to have separate categories of device: washing machines, VCRs, phones, cars. Now we just have computers in different cases. Modern cars are computers we put our bodies in, Boeing 747s are flying Solaris boxes, and hearing aids and pacemakers are computers we put in our bodies.

[[VCR, washing machine]] [[747]] [[Hearing aid]]

• This means that all of our sociopolitical problems in the future will have a computer inside them, too—and a would-be regulator saying stuff like this:

“Make it so that self-driving cars can’t be programmed to drag race”

“Make it so that bioscale 3D printers can’t make harmful organisms or restricted compounds”

Which is to say: “Make me a general-purpose computer that runs all programs except for one program that freaks me out.”

[[Turing – 1]]

But there’s a problem. We don’t know how to make a computer that can run all the programs we can compile except for whichever one pisses off a regulator, or disrupts a business model, or abets a criminal.

The closest approximation we have for such a device is a computer with spyware on it— a computer that, if you do the wrong thing, can intercede and say, “I can’t let you do that, Dave.”

[[Hal]]

Such a computer runs programs designed to be hidden from the owner of the device, programs that the owner can’t override or kill. In other words: DRM. Digital Rights Management.

[Defective by design]

These computers are a bad idea for two significant reasons. First, they won’t solve problems. Breaking DRM isn’t hard for bad guys. The copyright wars’ lesson is that DRM is always broken with near-immediacy.

DRM only works if the “I can’t let you do that, Dave” program stays a secret. Once the most sophisticated attackers in the world liberate that secret, it will be available to everyone else, too.

[[AACS key]]

Second, DRM has inherently weak security, which makes overall security weaker.

Certainty about what software is on your computer is fundamental to good computer security, and you can’t know if your computer’s software is secure unless you know what software it is running.

Designing “I can’t let you do that, Dave” into computers creates an enormous security vulnerability: anyone who hijacks that facility can do things to your computer that you can’t find out about.

Moreover, once a government thinks it has “solved” a problem with DRM—with all its inherent weaknesses—that creates a perverse incentive to make it illegal to tell people things that might undermine the DRM.

[[cf. Felten, Huang, Geohot]]

You know, things like how the DRM works. Or “here’s a flaw in the DRM which lets an attacker secretly watch through your webcam or listen through your mic.”

After 28C3, I had a lot of feedback from distinguished computer scientists, technologists, civil libertarians and security researchers. Within those fields, there is a widespread consensus that, all other things being equal, computers are more secure and society is better served when the owners of computers can control what software runs on them.

Let’s examine for a moment what that would mean.

Most computers today are fitted with a Trusted Platform Module (TPM), a secure co-processor mounted on the motherboard. The specifications for TPMs are published, and an industry body certifies compliance with those specifications.

To the extent that the spec is good (and the industry body is diligent), it’s possible to be reasonably certain that you’ve got a real, functional, TPM in your computer that faithfully implements the spec.

How is the TPM secure? It contains secrets: cryptographic keys. But it’s also secure in that it’s designed to be tamper-evident. If you try to extract the keys from a TPM, or remove the TPM from a computer and replace it with a gimmicked one, it will be very obvious to the computer’s owner.

One threat model for the TPM is that a crook (or a government, police force or other adversary) might physically tamper with your computer — tamper-evidence is what lets you know when your TPM has been fiddled with.

Another TPM threat model is that a piece of malicious software will infect your computer.

Now, once your computer is compromised this way, you could be in great trouble. All of the sensors attached to the computer—mic, camera, accelerometer, fingerprint reader, GPS—might be switched on without your knowledge. Off goes the data to the bad guys.

All the data on your computer (sensitive files, stored passwords and web history)? Off it goes to the bad guys—or erased.

All the keystrokes into your computer—your passwords!—might be logged. All the peripherals attached to your computer—printers, scanners, SCADA controllers, MRI machines, 3D printers— might be covertly operated or subtly altered.

Imagine if those “other peripherals” included cars or avionics. Or your optic nerve, your cochlea, the stumps of your legs.

When your computer boots up, the TPM can ask the bootloader for a signed hash of itself and verify that the signature on the hash comes from a trusted party. Once you trust the bootloader to faithfully perform its duties, you can ask it to check the signatures on the operating system, which, once verified, can check the signatures on the programs that run on it.

This ensures that you know which programs are running on your computer—and that any programs running in secret have managed the trick by leveraging a defect in the bootloader, operating system or other components, and not because a new defect has been inserted into your system to create a facility for hiding things from you.
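To make that chain of trust concrete, here is a minimal sketch of the general idea — not any real TPM, UEFI, or vendor interface. Everything in it (BootStage, measure, verify_chain, the Ed25519 keys) is invented for illustration: each stage’s image is hashed, the signature over that hash is checked against keys the owner trusts, and the boot halts if any link fails.

```python
# Illustrative sketch of a verified-boot chain of trust (hypothetical, not a
# real TPM/UEFI interface). Requires the third-party "cryptography" package.
import hashlib
from dataclasses import dataclass

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


@dataclass
class BootStage:
    name: str         # e.g. "bootloader", "kernel"
    image: bytes      # the code that would be handed control
    signature: bytes  # signer's signature over SHA-256(image)


def measure(image: bytes) -> bytes:
    """Hash ('measure') a boot image."""
    return hashlib.sha256(image).digest()


def signed_by_trusted_key(stage: BootStage, trusted_keys) -> bool:
    """True if any key the owner trusts verifies the stage's signature."""
    digest = measure(stage.image)
    for key in trusted_keys:
        try:
            key.verify(stage.signature, digest)
            return True
        except InvalidSignature:
            continue
    return False


def verify_chain(stages, trusted_keys) -> bool:
    """Refuse to boot if any stage isn't signed by a key the owner trusts."""
    for stage in stages:
        if not signed_by_trusted_key(stage, trusted_keys):
            print(f"halt: {stage.name} is not signed by a trusted key")
            return False
        print(f"{stage.name}: measurement verified, handing over control")
    return True


if __name__ == "__main__":
    # Pretend vendor signs each stage's measurement with its private key.
    vendor = ed25519.Ed25519PrivateKey.generate()
    chain = [BootStage(n, img, vendor.sign(measure(img)))
             for n, img in [("bootloader", b"BOOT"), ("kernel", b"KERNEL")]]
    # The owner chooses which signing keys to trust -- the nub of certainty.
    verify_chain(chain, trusted_keys=[vendor.public_key()])
```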

This always reminds me of Descartes: he starts off by saying that he can’t tell what’s true and what’s not true, because he’s not sure if he really exists.

[descartes]

He finds a way of proving that he exists, and that he can trust his senses and his faculty for reason.

Having found a tiny nub of stable certainty on which to stand, he builds a scaffold of logic that he affixes to it, until he builds up an entire edifice.

Likewise, a TPM is a nub of stable certainty: if it’s there, it can reliably inform you about the code on your computer.

[crazy]

Now, you may find it weird to hear someone like me talking warmly about TPMs. After all, these are the technologies that make it possible to lock down phones, tablets, consoles and even some PCs so that they can’t run software of the owner’s choosing.

“Jailbreaking” usually means finding some way to defeat a TPM or TPM-like technology. So why on earth would I want a TPM in my computer?

As with everything important, the devil is in the details.

Imagine for a moment two different ways of implementing a TPM:

1. Lockdown

[LOCKDOWN]

Your TPM comes with a set of signing keys it trusts, and unless your bootloader is signed by a TPM-trusted party, you can’t run it. Moreover, since the bootloader determines which OS launches, you don’t get to control the software in your machine.

2. Certainty

[CERTAINTY]

You tell your TPM which signing keys you trust—say, Ubuntu, EFF, ACLU and Wikileaks—and it tells you whether the bootloaders it can find on your disk have been signed by any of those parties. It can faithfully report the signature on any other bootloaders it finds, and it lets you make up your own damn mind about whether you want to trust any or all of the above.

Approximately speaking, these two scenarios correspond to the way that iOS and Android work: iOS only lets you run Apple-approved code; Android lets you tick a box to run any code you want. Critically, however, Android lacks the facility to do some crypto work on the software before boot-time and tell you whether the code you think you’re about to run is actually what you’re about to run.

It’s freedom, but not certainty.
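Here is a toy sketch of the difference between the two policies — invented names, no real TPM calls. In “lockdown” the trusted-key list is fixed at the factory and the owner has no say; in “certainty” the firmware still reports honestly who signed what, but the owner edits the list.

```python
# Toy illustration (invented names, no real TPM API) of "lockdown" vs
# "certainty" trust policies for the keys a machine will accept boot code from.

class LockdownPolicy:
    """Vendor decides, forever: the key list is burned in at the factory."""
    def __init__(self, vendor_keys):
        self._keys = frozenset(vendor_keys)

    def trusted_keys(self):
        return self._keys            # the owner cannot add or remove anything


class CertaintyPolicy:
    """Owner decides: the firmware reports signatures, the owner sets policy."""
    def __init__(self):
        self._keys = set()           # empty until the owner chooses

    def trust(self, key_fingerprint):       # e.g. Ubuntu's, EFF's, your own
        self._keys.add(key_fingerprint)

    def distrust(self, key_fingerprint):
        self._keys.discard(key_fingerprint)

    def trusted_keys(self):
        return frozenset(self._keys)


# Usage: the same verification code, radically different politics.
locked = LockdownPolicy(vendor_keys={"vendor-root-key"})
mine = CertaintyPolicy()
mine.trust("ubuntu-signing-key")
mine.trust("my-own-key")
```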

In a world where the computers we’re discussing can see and hear you, where we insert our bodies into them, where they are surgically implanted into us, and where they fly our planes and drive our cars, certainty is a big deal.

This is why I like the idea of a TPM, assuming it is implemented in the “certainty” mode and not the “lockdown” mode.

If that’s not clear, think of it this way: a “war on general-purpose computing” is what happens when the control freaks in government and industry demand the ability to remotely control your computers.

[1984]

The defenders against that attack are also control freaks—like me—but they happen to believe that device-owners should have control over their computers.

[De Niro in Brazil]

Both sides want control, but differ on which side should have control.

Control requires knowledge. If you want to be sure that songs can only moved onto an iPod, but not off of an iPod, the iPod needs to know that the instructions being given to it by the PC (to which it is tethered) are emanating from an Apple-approved iTunes. It needs to know they’re not from something that impersonates iTunes in order to get the iPod to give it access to those files.

[Roach Motel]
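For illustration only — this is emphatically not Apple’s actual protocol, and every name and value below is invented — here is the general shape of such a check: a challenge-response in which the device only honours commands from software that can prove it holds an approved secret.

```python
# Hypothetical challenge-response sketch: a device honours a command only if the
# controlling software proves it holds an approved secret. Not Apple's protocol;
# every name and value here is invented for illustration.
import hashlib
import hmac
import os

APPROVED_SECRET = b"shared-by-approved-client-and-device"   # illustrative only


class Device:
    def challenge(self) -> bytes:
        self._nonce = os.urandom(16)        # fresh, unpredictable challenge
        return self._nonce

    def accept(self, command: str, proof: bytes) -> bool:
        expected = hmac.new(APPROVED_SECRET, self._nonce + command.encode(),
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, proof)


def approved_client_proof(nonce: bytes, command: str) -> bytes:
    return hmac.new(APPROVED_SECRET, nonce + command.encode(),
                    hashlib.sha256).digest()


device = Device()
nonce = device.challenge()
print(device.accept("sync_song_to_device",
                    approved_client_proof(nonce, "sync_song_to_device")))  # True
print(device.accept("copy_song_off_device", b"impersonator-guess"))        # False
```

The point isn’t the crypto; it’s that enforcing the policy requires the device to know, reliably, what software it is talking to.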

If you want to be sure that my PVR won’t record the watch-once video-on-demand movie that I’ve just paid for, you need to be able to ensure that the tuner receiving the video will only talk to approved devices whose manufacturers have promised to honor “do-not-record” flags in the programmes.

[TiVo error]

If I want to be sure that you aren’t watching me through my webcam, I need to know what the drivers are and whether they honor the convention that the little green activity light is always switched on when my camera is running.

[Green light]

If I want to be sure that you aren’t capturing my passwords through my keyboard, I need to know that the OS isn’t lying when it says there aren’t any keyloggers on my system.

Whether you want to be free—or want to enslave—you need control. And for that, you need this knowledge.

That’s the coming war on general purpose computing. But now I want to investigate what happens if we win it.

We could face an interesting prospect, which I call the coming civil war over general purpose computing.

Let’s stipulate that a victory for the “freedom side” in the war on general purpose computing would result in computers that let their owners know what was running on them: computers that would faithfully report the hash and associated signature for any bootloader they found, let their owners control what was running on them, and allow their owners to specify who was allowed to sign their bootloaders, operating systems, and so on.

[Revolutionary war victory image]

There are two arguments that we can make for this:

1. Human rights

If your world is made of computers, then designing computers to override their owners’ decisions has significant human rights implications. Today we worry that the Iranian government might demand import controls on computers, so that only those capable of undetectable surveillance are operable within its borders. Tomorrow we might worry about whether the British government would demand that NHS-funded cochlear implants be designed to block reception of “extremist” language, to log and report it, or both.

2. Property rights

The doctrine of first sale is an important piece of consumer law. It says that once you buy something, it belongs to you, and you should have the freedom to do anything you want with it, even if that hurts the vendor’s income. Opponents of DRM like the slogan, “You bought it, you own it.”

Property rights are an incredibly powerful argument. This goes double in America, where strong property rights enforcement is seen as the foundation of all social remedies.

[private property]

This goes triple for Silicon Valley, where you can’t swing a cat without hitting a libertarian who believes that the major — or only — legitimate function of a state is to enforce property rights and contracts around them.

Which is to say that if you want to win a nerd fight, property rights are a powerful weapon to have in your arsenal. And not just nerd fights!

That’s why copyfighters are so touchy about the term “Intellectual Property”. This synthetic, ideologically-loaded term was popularized in the 1970s as a replacement for “regulatory monopolies” or “creators’ monopolies” — because it’s a lot easier to get Congress to help you police your property than it is to get them to help enforce your monopoly.

[Human rights fist]

Here is where the civil war part comes in.

Human rights and property rights both demand that computers not be designed for remote control by governments, corporations, or other outside institutions. Both demand that owners be allowed to specify what software they’re going to run—to freely choose the nub of certainty from which they will suspend the scaffold of their computer’s security.

Remember that security is relative: you are secure in your ability to freely use your music if you can control your computing environment. That same control, however, erodes the music industry’s ability to charge you some kind of rent, on a use-by-use basis, for your purchased music.

If you get to choose the nub from which the scaffold will dangle, you get control and the power to secure yourself against attackers. If the government, the RIAA or Monsanto chooses the nub, they get control and the power to secure themselves against you.

In this dilemma, we know what side we fall on. We agree that at the very least, owners should be allowed to know and control their computers.

But what about users?

Users of computers don’t always have the same interests as the owners of computers— and, increasingly, we will be users of computers that we don’t own.

Where you come down on conflicts between owners and users is going to be one of the most meaningful ideological questions in technology’s history. There’s no easy answer that I know about for guiding these decisions.

[Blackstone on property]

Let’s start with a total pro-owner position: “property maximalism”.

• If it’s my computer, I should have the absolute right to dictate the terms of use to anyone who wants to use it. If you don’t like it, find someone else’s computer to use.

How would that work in practice? Through some combination of an initialization routine, tamper evidence, law, and physical control. For example, when you turn on your computer for the first time, you initialize a good secret password, possibly signed by your private key.

[Random number]

Without that key, no one is allowed to change the list of trusted parties from which your computer’s TPM will accept bootloaders. We could make it illegal to subvert this system for the purpose of booting an operating system that the device’s owner has not approved. Such a law would make spyware really illegal, even more so than it is now, and would also ban the secret installation of DRM.
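As a sketch of what that initialization might look like — invented names, not any real TPM ownership ceremony — the first boot takes ownership with a passphrase, and only someone who can present that passphrase afterwards may alter the list of trusted signing keys.

```python
# Hypothetical sketch of "owner initializes and locks the trust list".
# All names and flow are illustrative, not a real TPM ownership protocol.
import hashlib
import hmac
import os


class OwnerTrustStore:
    def __init__(self):
        self._salt = None
        self._secret_hash = None
        self.trusted_keys = set()   # fingerprints of signers the owner trusts

    def take_ownership(self, passphrase: str, initial_keys: set) -> None:
        if self._secret_hash is not None:
            raise PermissionError("already owned; tampering would be evident")
        self._salt = os.urandom(16)
        self._secret_hash = hashlib.pbkdf2_hmac("sha256", passphrase.encode(),
                                                self._salt, 200_000)
        self.trusted_keys = set(initial_keys)

    def _check(self, passphrase: str) -> bool:
        digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(),
                                     self._salt, 200_000)
        return hmac.compare_digest(digest, self._secret_hash)

    def add_trusted_key(self, passphrase: str, fingerprint: str) -> None:
        if not self._check(passphrase):
            raise PermissionError("only the owner may change the trust list")
        self.trusted_keys.add(fingerprint)


# Usage: first boot, the owner sets the secret and the initial trust list.
store = OwnerTrustStore()
store.take_ownership("correct horse battery staple", {"ubuntu-signing-key"})
store.add_trusted_key("correct horse battery staple", "eff-signing-key")
```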

We could design the TPM so that if you remove it, or tamper with it, it’s really obvious — give it a fragile housing, for example, which is hard to replace after the time of manufacture, so it’s really obvious to a computer’s owner that someone has modified the device, possibly putting it in an unknown and untrustworthy state. We could even put a lock on the case.

[computer that has had its lid ripped off]

I can see a lot of benefits to this, but there are downsides, too.

[Self-driving car]

Consider self-driving cars. There are a lot of these around already, of course, designed by Google and others. It’s easy to understand how, on the one hand, self-driving cars are an incredibly great development. We are terrible drivers, and cars kill the shit out of us. Car crashes are the number one cause of death in America for people aged 5–34.

[Mortality chart]

I’ve been hit by a car. I’ve cracked up a car. I’m willing to stipulate that humans have no business driving at all.

It’s also easy to understand how we might be nervous about people being able to homebrew their own car firmware. On one hand, we’d want the source code for cars to be open so that we could subject it to wide scrutiny. On the other hand, it will be plausible to say, “Cars are safer if they use a locked bootloader that only trusts government-certified firmware.”

And now we’re back to whether you get to decide what your computer is doing.

But there are two problems with this solution:

First, it won’t work. As the copyright wars have shown, firmware locks aren’t very effective against dedicated attackers. People who want to spread mayhem with custom firmware will be able to do just that.

What’s more, it’s not a good security approach: if vehicular security models depend on all the other vehicles being well-behaved and the unexpected never arising, we are dead meat.

Self-driving cars must be conservative in their approach to their own conduct, and liberal in their expectations of others’ conduct.

[Defensive driving driver’s ed sign/scan]

This is the same advice you get in your first day of driver’s ed, and it remains good advice even if the car is driving itself.

Second, it invites some pretty sticky parallels. Remember the “information superhighway”?

Say we try to secure our physical roads by demanding that the state (or a state-like entity) gets to certify the firmware of the devices that cruise its lanes. How would we articulate a policy addressing the devices on our (equally vital) metaphorical roads—with comparable firmware locks for PCs, phones, tablets, and other devices?

After all, the general-purpose network means that MRIs, space-ships, and air-traffic control systems share the “information superhighway” with game consoles, Arduino-linked fart machines, and dodgy voyeur cams sold by spammers from the Pearl River Delta.

And consider avionics and power-station automation.

[Nuclear towers]

This is a much trickier one. If the FAA mandates a certain firmware for 747s, it’s probably going to want those 747s designed so that it and it alone controls the signing keys for their bootloaders. Likewise, the Nuclear Regulatory Commission will want the final say on the firmware for the reactor piles.

This may be a problem for the same reason that a ban on modifying car firmware is: it establishes the idea that a good way to solve problems is to let “the authorities” control your software.

But it may be that airplanes and nukes are already so regulated that an additional layer of regulation wouldn’t leak out into other areas of daily life — nukes and planes are subject to an extraordinary amount of no-notice inspection and reporting requirements that are unique to their industries.

But there’s a second, bigger problem with “owner controls”: what about people who use computers, but don’t own them?

This is not a group of people that the IT industry has a lot of sympathy for, on the whole.

[Encrufted desktop]

An enormous amount of energy has been devoted to stopping non-owning users from inadvertently breaking the computers they are using, downloading menu-bars, typing random crap they find on the Internet into the terminal, inserting malware-infected USB sticks, installing plugins or untrustworthy certificates, or punching holes in the network perimeter.

Energy is also spent stopping users from doing deliberately bad things: installing keyloggers and spyware to ensnare future users, misappropriating secrets, snooping on network traffic, breaking their machines and disabling the firewalls.

There’s a symmetry here. DRM and its cousins are deployed by people who believe you can’t and shouldn’t be trusted to set policy on the computer you own. Likewise, IT systems are deployed by computer owners who believe that computer users can’t be trusted to set policy on the computers they use.

As a former sysadmin and CIO, I’m not going to pretend that users aren’t a challenge. But there are good reasons to treat users as having rights to set policy on computers they don’t own.

Let’s start with the business case.

When we demand freedom for owners, we do so for lots of reasons, but an important one is that computer programmers can’t anticipate all the contingencies that their code might run up against — that when the computer says yes, you might need to still say no.

This is the idea that owners possess local situational awareness that can’t be perfectly captured by a series of nested if/then statements.

It’s also where communist and libertarian principles converge:

[Hayek]

• Friedrich Hayek thought that expertise was a diffuse thing, and that you were more likely to find the situational awareness necessary for good decision-making very close to the decision itself — devolution gives better results than centralization.

• Karl Marx believed in the legitimacy of workers’ claims over their working environment, saying that the contribution of labor was just as important as the contribution of capital, and demanded that workers be treated as the rightful “owners” of their workplace, with the power to set policy.

[Coalface]

For totally opposite reasons, they both believed that the people at the coalface should be given as much power as possible.

The death of mainframes was attended by an awful lot of concern over users and what they might do to the enterprise. In those days, users were even more constrained than they are today. They could only see the screens the mainframe let them see, and only undertake the operations the mainframe let them undertake.

When the PC and Visicalc and Lotus 1-2-3 appeared, employees risked termination by bringing those machines into the office— or by taking home office data to use with those machines.

Workers developed computing needs that couldn’t be met within the constraints set by the firm and its IT department, and didn’t think that the legitimacy of their needs would be recognized.

The standard responses would involve some combination of the following:

• Our regulatory compliance prohibits the thing that will help you do your job better.

• If you do your job that way, we won’t know if your results are correct.

• You only think you want to do that.

• It is impossible to make a computer do what you want it to do.

• Corporate policy prohibits this.

These may be true. But often they aren’t, and even when they are, they’re the kind of “truths” that we give bright young geeks millions of dollars in venture capital to falsify—even as middle-aged admin assistants get written up by HR for trying to do the same thing.

The personal computer arrived in the enterprise by the back door, over the objections of IT, without the knowledge of management, at the risk of censure and termination. Then it made the companies that fought it billions. Trillions.

Giving workers powerful, flexible tools was good for firms because people are generally smart and want to do their jobs well. They know stuff their bosses don’t know.

So, as an owner, you don’t want the devices you buy to be locked, because you might want to do something the designer didn’t anticipate.

And employees don’t want the devices they use all day locked, because they might want to do something useful that the IT dept didn’t anticipate.

This is the soul of Hayekism — we’re smarter at the edge than we are in the middle.

The business world pays a lot of lip service to Hayek’s 1940s ideas about free markets. But when it comes to freedom within the companies they run, business leaders are stuck a good 50 years earlier, mired in the ideology of Frederick Winslow Taylor and his “scientific management”. In this way of seeing things, workers are just an unreliable type of machine whose movements and actions should be scripted by an all-knowing management consultant, working with the equally wise company bosses to determine the one true way to do your job. It’s about as “scientific” as trepanation or Myers-Briggs personality tests, and it’s the ideology whose grip on Detroit’s Big Three let Toyota cream them.

[GM v Toyota earnings]

So, letting enterprise users do the things they believe will make more money for their companies will, sometimes, make their companies more money.

That’s the business case for user rights. It’s a good one, but really I just wanted to get it out of the way so that I could get down to the real meat: Human rights.

[Another Human Rights Now fist]

This may seem a little weird on its face, but bear with me.

Earlier this year, I saw a talk by Hugh Herr, Director of the Biomechatronics group at The MIT Media Lab. Herr’s talks are electrifying. He starts out with a bunch of slides of cool prostheses: Legs and feet, hands and arms, and even a device that uses focused magnetism to suppress activity in the brains of people with severe, untreatable depression, to amazing effect.

Then he shows this slide of him climbing a mountain. He’s buff, he’s clinging to the rock like a gecko. And he doesn’t have any legs: just these cool mountain climbing prostheses.

Herr looks at the audience from where he’s standing, and he says, “Oh yeah, didn’t I mention it? I don’t have any legs, I lost them to frostbite.”

He rolls up his trouser legs to show off these amazing robotic gams, and proceeds to run up and down the stage like a mountain goat.

The first question anyone asked was, “How much did they cost?”

He named a sum that would buy you a nice brownstone in central Manhattan or a terraced Victorian in zone one in London.

The second question asked was, “Well, who will be able to afford these?”

To which Herr answered: “Everyone. If you have to choose between a 40-year mortgage on a house and a 40-year mortgage on legs, you’re going to choose legs.”

So it’s easy to consider the possibility that there are going to be people — potentially a lot of people — who are “users” of computers that they don’t own, and where those computers are part of their bodies.

[Cochlear implant]

Most of the tech world understands why you, as the owner of your cochlear implants, should be legally allowed to choose the firmware for them. After all, when you own a device that is surgically implanted in your skull, it makes a lot of sense that you have the freedom to change software vendors.

Maybe the company that made your implant has the very best signal processing algorithm right now, but if a competitor patents a superior algorithm next year, should you be doomed to inferior hearing for the rest of your life?

And what if the company that made your ears went bankrupt? What if sloppy or sneaky code let bad guys do bad things to your hearing?

These problems can only be overcome by the unambiguous right to change the software, even if the company that made your implants is still a going concern.

That will help owners. But what about users?

Consider some of the following scenarios:

• You are a minor child and your deeply religious parents pay for your cochlear implants, and ask for the software that makes it impossible for you to hear blasphemy.

• You are broke, and a commercial company wants to sell you ad-supported implants that listen in on your conversations and insert “discussions about the brands you love”.

• Your government is willing to install cochlear implants, but they will archive everything you hear and review it without your knowledge or consent.

Far-fetched? The Canadian border agency was just forced to abandon a plan to fill the nation’s airports with hidden high-sensitivity mics that were intended to record everyone’s conversations.

Will the Iranian government, or Chinese government, take advantage of this if they get the chance?

Speaking of Iran and China, there are plenty of human rights activists who believe that boot-locking is the start of a human rights disaster. It’s no secret that high-tech companies have been happy to build “lawful intercept” back-doors into their equipment to allow warrantless, secret access to communications. Because these back-doors are now standard, the capability is there even if your country doesn’t require it.

In Greece, there is no legal requirement for lawful intercept on telecoms equipment.

Around the 2004 Athens Olympics, an unknown person or agency switched on that dormant capability, harvested an unknown quantity of private communications at the highest levels, and switched it off again.

Surveillance in the middle of the network is nowhere near as interesting as surveillance at the edge. As the ghosts of Messrs Hayek and Marx will tell you, there’s a lot of interesting stuff happening at the coal-face that never makes it back to the central office.

Even “democratic” governments know this. That’s why the Bavarian government was illegally installing the “Bundestrojaner” — literally, “federal trojan” — on people’s computers, gaining access to their files and keystrokes and much else besides. So it’s a safe bet that totalitarian governments will happily take advantage of boot-locking and move the surveillance right into the box.

Imagine the rule: you may not import a computer into Iran unless you limit its trust model so that it only boots operating systems with lawful intercept back-doors built in.

Now, with an owner-controls model, the first person to use a machine gets to initialize the list of trusted keys and then lock it with a secret or other authorization token. What this means is that the state customs authority must initialize each machine before it passes into the country.

Maybe you’ll be able to do something to override the trust model. But by design, such a system will be heavily tamper-evident, meaning that a secret policeman or informant can tell at a glance whether you’ve locked the state out of your computer. And it’s not just repressive states, of course, who will be interested in this.

Remember that there are four major customers for the existing censorware/spyware/lockware industry: repressive governments, large corporations, schools, and paranoid parents.

[Kid-tracking software]

The technical needs of helicopter mums, school systems and enterprises are convergent with those of the governments of Syria and China. They may not share ideological ends, but they have awfully similar technical means to those ends.

We are very forgiving of these institutions as they pursue their ends; you can do almost anything if you’re protecting shareholders or children.

For example, remember the widespread indignation, from all sides, when it was revealed that some companies were requiring prospective employees to hand over their Facebook login credentials as a condition of employment?

These employers argued that they needed to review your lists of friends, and what you said to them in private, before determining whether you were suitable for employment.

[Urine-tests]

Facebook checks are the workplace urine test of the 21st century. They’re a means of ensuring that your private life doesn’t have any unsavoury secrets lurking in it, secrets that might compromise your work.

The nation didn’t buy this. From senate hearings to newspaper editorials, the country rose up against the practice.

But no one seems to mind that many employers routinely insert their own intermediate keys into their employees’ devices — phones, tablets and computers. This allows them to spy on your Internet traffic, even when it is “secure”, with a lock showing in the browser.

It gives your employer access to any sensitive site you access on the job, from your union’s message board to your bank to Gmail to your HMO or doctor’s private patient repository. And, of course, to everything on your Facebook page.
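A crude model of why that works — abstracting away all the real TLS machinery, with invented names — is just a trust-store lookup: the browser shows the lock for any certificate chain ending at a root it has been told to trust, so adding the employer’s root makes the proxy’s forged certificates indistinguishable from the real thing.

```python
# Abstract sketch (no real TLS) of what installing an employer's root key does.
# A browser shows the lock for any chain ending at a root in its trust store;
# real validation also checks signatures, names and dates, omitted here.

TRUSTED_ROOTS = {"GlobalSign", "DigiCert"}       # roots shipped with the device

def chain_is_trusted(chain):
    """chain = [leaf certificate, intermediates..., root issuer]; only the
    root's membership in the trust store is modelled here."""
    return chain[-1] in TRUSTED_ROOTS

print(chain_is_trusted(["mail.example.com", "DigiCert"]))         # True
print(chain_is_trusted(["mail.example.com", "Corp Proxy Root"]))  # False

# What the employer's device-management profile effectively does:
TRUSTED_ROOTS.add("Corp Proxy Root")
print(chain_is_trusted(["mail.example.com", "Corp Proxy Root"]))  # True -- the
# proxy can now impersonate any site and the browser still shows the lock.
```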

There’s wide consensus that this is OK, because the laptop, phone and tablet your employer issues to you are not your property. They are company property.

And yet, the reason employers give us these mobile devices is because there is no longer any meaningful distinction between work and home.

Corporate sociologists who study the way that we use our devices find time and again that employees are not capable of maintaining strict divisions between “work” and “personal” accounts and devices.

[Desktop covered in mobile devices]

America is the land of the 55-hour work-week, a country where few professionals take any meaningful vacation time, and when they do get away for a day or two, take their work-issued devices with them.

Even in traditional workplaces, we recognized human rights. We don’t put cameras in the toilets to curtail employee theft. If your spouse came by the office on your lunch break and the two of you went into the parking lot so that she or he could tell you that the doctor says the cancer is terminal, you’d be aghast and furious to discover that your employer had been spying on you with a hidden mic.

But if you used your company laptop to access Facebook on your lunchbreak, wherein your spouse conveys to you that the cancer is terminal, you’re supposed to be OK with the fact that your employer has been running a man-in-the-middle attack on your machine and now knows the most intimate details of your life.

There are plenty of instances in which rich and powerful people — not just workers and children and prisoners — will be users instead of owners.

Every car-rental agency would love to be able to lo-jack the cars they rent to you; remember, an automobile is just a computer you put your body into. They’d love to log all the places you drive to for “marketing” purposes and analytics.

There’s money to be made in finagling the firmware on the rental-car’s GPS to ensure that your routes always take you past certain billboards or fast-food restaurants.

[burger]

But in general, the poorer and younger you are, the more likely you are to be a tenant farmer in some feudal lord’s computational lands. The poorer and younger you are, the more likely it’ll be that your legs will cease to walk if you get behind on payments.

What this means is that any thug who buys your debts from a payday lender could literally — and legally — threaten to take your legs (or eyes, or ears, or arms, or insulin, or pacemaker) away if you failed to come up with the next installment.

[Slimy collection notice]

Earlier, I discussed how an owner override would work. It would involve some combination of physical access-control and tamper-evidence, designed to give owners of computers the power to know and control what bootloader and OS was running on their machine.

How would a user-override work? An effective user-override would have to leave the underlying computer intact, so that when the owner took it back, she could be sure that it was in the state she believed it to be in. In other words, we need to protect users from owners and owners from users.

Here’s one model for that:

Imagine that there is a bootloader that can reliably and accurately report on the kernels and OSes it finds on the drive. This is the prerequisite for state/corporate-controlled systems, owner-controlled systems, and user-controlled systems.

Now, give the bootloader the power to suspend any running OS to disk, encrypting all its threads and parking them, and the power to select another OS from the network or an external drive.

[Internet cafe]

Say I walk into an Internet cafe, and there’s an OS running that I can verify. It has a lawful interception back-door for the police, storing all my keystrokes, files and screens in an encrypted blob which the state can decrypt.

I’m an attorney, doctor, corporate executive, or merely a human who doesn’t like the idea of his private stuff being available to anyone who is friends with a dirty cop.

So, at this point, I give the three-finger salute with the F-keys. This drops the computer into a minimal bootloader shell, one that invites me to give the net-address of an alternative OS, or to insert my own thumb-drive and boot into an operating system there instead.

[Three finger salute]

The cafe owner’s OS is parked and I can’t see inside it. But the bootloader can assure me that it is dormant and not spying on me as my OS fires up. When my session is done, all my working files are trashed, and the minimal bootloader confirms it.

This keeps the computer’s owner from spying on me, and keeps me from leaving malware on the computer to attack its owner.
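Here is a toy model of that flow — invented names, no real firmware interface — in which the bootloader can honestly report what is running and what is parked, park the owner’s OS as an opaque blob, boot the user’s OS, and later restore the owner’s OS exactly as it was.

```python
# Toy model of the user-override flow described above (invented names, not a
# real firmware interface): park the owner's OS, boot the user's, restore later.
import hashlib
from dataclasses import dataclass
from typing import Optional


@dataclass
class ParkedOS:
    name: str
    opaque_image: bytes   # the owner's suspended OS; unreadable by the user
    measurement: str      # hash the bootloader reports honestly to both parties


class UserOverrideBootloader:
    def __init__(self):
        self.parked: Optional[ParkedOS] = None
        self.running: Optional[str] = None

    def report(self) -> dict:
        """Honest report of what is running and what is parked."""
        return {"running": self.running,
                "parked": self.parked.name if self.parked else None}

    def park_and_boot(self, current_os: str, image: bytes, user_os_source: str):
        # A real design would encrypt the suspend image with a key the user
        # never sees; here it is just labelled as opaque bytes.
        self.parked = ParkedOS(current_os, b"OPAQUE:" + image,
                               hashlib.sha256(image).hexdigest())
        self.running = f"user OS from {user_os_source}"

    def restore_owner_os(self):
        # The user's session and working files are discarded; the owner's OS
        # resumes exactly as it was parked.
        assert self.parked is not None
        self.running, self.parked = self.parked.name, None


bl = UserOverrideBootloader()
bl.park_and_boot("cafe-owner-os", b"owner kernel image", "my thumb drive")
print(bl.report())   # the user can confirm the owner's OS is parked, not spying
bl.restore_owner_os()
print(bl.report())   # the owner can confirm their OS is back and nothing lingers
```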

There will be technological means of subverting this, but there is a world of difference between starting from a design spec that aims to protect users from owners (and vice-versa) and one that says that users must always be vulnerable to owners’ dictates.

Fundamentally, this is the difference between freedom and openness — between free software and open source.

Now, human rights and property rights often come into conflict with one another. For example, landlords aren’t allowed to enter your home without adequate notice. In many places, hotels can’t throw you out if you overstay your reservation, provided that you pay the rack rate for the room — that’s why you often see rack rates posted on the back of the room door.

Repossession of leased goods — cars, for example — is limited by procedures that require notice and the opportunity to rebut claims of delinquent payments.

When these laws are “streamlined” to make them easier for property holders, we often see human rights abuses. Consider robo-signing eviction mills, which used fraudulent declarations to evict homeowners who were up to date on their mortgages—and even some who didn’t have mortgages.

The potential for abuse in a world made of computers is much greater: your car drives itself to the repo yard. Your high-rise apartment building switches off its elevators and climate systems, stranding thousands of people until a disputed license payment is settled.

Sounds fanciful? This has already happened with multi-level parking garages.

Back in 2006, a 314-car Robotic Parking model RPS1000 garage in Hoboken, New Jersey, took all the cars in its guts hostage, locking down the software until the garage’s owners paid a licensing bill that they disputed.

They had to pay it, even as they maintained that they didn’t owe anything. What the hell else were they going to do?

And what will you do when your dispute with a vendor means that you go blind, or deaf, or lose the ability to walk, or become suicidally depressed?

[Phrenology bust]

The negotiating leverage that accrues to owners over users is total and terrifying.

Users will be strongly incentivized to settle quickly, rather than face the dreadful penalties that could be visited on them in the event of dispute. And when the owner of the device is the state or a state-sized corporate actor, the potential for human rights abuses skyrockets.

This is not to say that owner override is an unmitigated evil. Think of smart meters that can override your thermostat at peak loads.

[Smart meter]

Such meters allow us to switch off coal and the other dirty power sources that get varied up to meet peak demand.

[Dirty coal]

But they work best if users — homeowners who have allowed the power company to install a smart meter — can’t override the meters. So what happens when griefers, crooks, or governments trying to quell popular rebellion use this facility to turn the heat off during a hundred-year storm? Or to crank the heat to maximum during a heat wave?

The HVAC in your house can hold the power of life and death over you — do we really want it designed to allow remote parties to do stuff with it even if you disagree?

The question is simple. Once we create a design norm of devices that users can’t override, how far will that creep?

Especially risky would be the use of owner override to offer payday loan-style services to vulnerable people: Can’t afford artificial eyes for your kids? We’ll subsidize them if you let us redirect their focus to sponsored toys and sugar-snacks at the store.

Foreclosing on owner override, however, has its own downside. It probably means that there will be poor people who will not be offered some technology at all.

If I can lo-jack your legs, I can lease them to you with the confidence of my power to repo them if you default on payments. If I can’t, I may not lease you legs unless you’ve got a lot of money to begin with.

But if your legs can decide to walk to the repo-depot without your consent, you will be totally screwed the day that muggers, rapists, griefers or the secret police figure out how to hijack that facility.

[TV remote, labelled “legs” “arms” etc]

It gets even more complicated, too, because you are the “user” of many systems in the most transitory ways: subway turnstiles, elevators, the blood-pressure cuff at the doctor’s office, public buses or airplanes. It’s going to be hard to figure out how to create “user overrides” that aren’t nonsensical. We can start, though, by saying a “user” is someone who is the sole user of a device for a certain amount of time.

This isn’t a problem I know how to solve. Unlike the War on General Purpose Computers, the Civil War over them presents a series of conundra without (to me) any obvious solutions.

These problems are a way off, and they only arise if we win the war over general purpose computing first.

But come victory day, when we start planning the constitutional congress for a world where regulating computers is acknowledged as the wrong way to solve problems, let’s not paper over the division between property rights and human rights.

This is the sort of division that, while it festers, puts the most vulnerable people in our society in harm’s way. Agreeing to disagree on this one isn’t good enough. We need to start thinking now about the principles we’ll apply when the day comes.

If we don’t start now, it’ll be too late.

Video: Google. Photos: Cory Doctorow. Layout: Rob Beschizza
