The Pentagon is seeking bids to improve its Advanced Targeting and Lethality Automated System (ATLAS) so that it can "acquire, identify, and engage targets at least 3X faster than the current manual process."
When this public tender sparked concern that the Pentagon's robotic tanks were gaining automated targeting and firing capabilities (that is, that they would be autonomous killbots), the Pentagon updated the tender to reassure critics that "development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms, remain subject to the guidelines in the Department of Defense (DoD) Directive 3000.09" and that "All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards."
Why does any of this matter? Department of Defense Directive 3000.09 requires that humans be able to "exercise appropriate levels of human judgment over the use of force," meaning that the U.S. won't toss a fully autonomous robot onto a battlefield and allow it to decide independently whether to kill someone. This safeguard is sometimes called keeping a human "in the loop": a person makes the final decision about whether to fire.
U.S. Army Assures Public That Robot Tank System Adheres to AI Murder Policy [Matt Novak/Gizmodo]
(via JWZ)