h+ Magazine has a fascinating interview with Dr. Ronald Arkin, the director of Georgia Tech's Mobile Robot Lab, who literally wrote the book on the ethics of robots that kill. The book, titled Governing Lethal Behavior in Autonomous Robots, lays out Arkin's research across law, philosophy, military ethics, and engineering to address dilemmas we'll face in the future as we build ever more complex killing machines. From h+:
h+: How does the process of introducing moral robots onto the battlefield get bootstrapped and field-tested to avoid serious and potentially lethal "glitches" in the initial versions of the ethical governor? What safeguards should be in place to prevent accidental war?
RA: Verification and validation of software and systems is an integral part of any new battlefield system. It certainly must be adhered to for moral robots as well. Exactly what the metrics are and how they can be measured for ethical interactions during the course of battle is no doubt a challenge, but one I feel can be met if properly studied. It likely would involve the military's battle labs, field experiments, and force-on-force exercises to evaluate the effectiveness of the ethical constraints on these systems prior to their deployment, which is fairly standard practice. The goal is to reduce collateral damage without eroding mission effectiveness.

A harder problem is managing the changes in tactics that an intelligent, adaptive enemy would use in response to the development of these systems… to avoid spoofing and ruses that could take advantage of these ethical restraints in a range of situations. This can be minimized, I believe, by the use of bounded morality: limiting their deployment to narrow, tightly prescribed situations rather than the full spectrum of combat.
"Teaching Robots the Rules of War" (h+, thanks RU Sirius!)