My new Guardian column is "Why it is not possible to regulate robots," which discusses where and how robots can be regulated, and whether there is any sensible ground for "robot law" as distinct from "computer law."
One thing that is glaringly absent from both the Heinleinian and Asimovian brain is the idea of software as an immaterial, infinitely reproducible nugget at the core of the system. Here, in the second decade of the 21st century, it seems to me that the most important fact about a robot – whether it is self-aware or merely autonomous – is the operating system, configuration, and code running on it.

If you accept that robots are just machines – no different in principle from sewing machines, cars, or shotguns – and that the thing that makes them "robot" is the software that runs on a general-purpose computer that controls them, then all the legislative and regulatory and normative problems of robots start to become a subset of the problems of networks and computers.
If you're a regular reader, you'll know that I believe two things about computers: first, that they are the most significant functional element of most modern artifacts, from cars to houses to hearing aids; and second, that we have dramatically failed to come to grips with this fact. We keep talking about whether 3D printers should be "allowed" to print guns, or whether computers should be "allowed" to make infringing copies, or whether your iPhone should be "allowed" to run software that Apple hasn't approved and put in its App Store.
Practically speaking, though, these all amount to the same question: how do we keep computers from executing certain instructions, even if the people who own those computers want to execute them? And the practical answer is, we can't.
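To make that point concrete, here's a toy sketch (mine, not from the column; the instruction names and blocklist are entirely hypothetical). It shows why a purely software-enforced restriction on a general-purpose computer is no restriction at all: the "regulation" is just more code, and the owner of the machine runs with the same privileges as the code doing the blocking.

```python
# Hypothetical "regulator-approved" interpreter: it refuses to execute
# instructions on a blocklist. All names here are made up for illustration.
FORBIDDEN = {"print_gun_model"}  # the mandated restriction, stored as data

def run(program):
    """Execute a list of instruction names, enforcing the blocklist."""
    for instruction in program:
        if instruction in FORBIDDEN:  # the "regulation" is just an if-statement
            raise PermissionError(f"blocked: {instruction}")
        print(f"executing {instruction}")

# But the owner of the machine also owns the blocklist. One line of code,
# running with exactly the same privileges, removes the restriction:
FORBIDDEN.clear()

run(["load_design", "print_gun_model"])  # now executes without complaint
```

The only way around this is to take control of the computer away from its owner – locked bootloaders, remote kill switches, spyware – which is where the regulatory question stops being about robots at all.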