Asimov had one critical failure of imagination when dreaming up his future worlds inhabited by robots, or maybe he was just giving humanity too much credit. He completely missed the possibility that the greatest initial demand for highly autonomous robots would come from the military. He may not have considered them, but the military is certainly not ignoring him. The Office of Naval Research has released the first full military report on the ethical considerations of using semi- and fully autonomous robots in combat, "Autonomous Military Robotics: Risk, Ethics, and Design" (.pdf). It's fascinating, if slightly disconcerting, and it's an important topic: Congress has mandated that one-third of deep-strike aircraft be unmanned by 2010, and the same proportion of ground vehicles by 2015.

Whether or not we ever decide to let a gun-toting robot pull its own trigger, questions of robot ethics matter for more mundane 'bots, too. The first fully autonomous ground vehicles will likely be transports for convoy operations. Just autonomous trucks. But how should a truck react if a child runs into the street? What if there are crowds on the sidewalks?

It seems straightforward to program robots as pure utilitarians; after all, we can expect a computer to calculate the course of action leading to the least human suffering, at any rate far better than a human operator can. Then again, human cost can't be the only consideration: if autonomous trucks could be safely stopped simply by jumping in front of them, they wouldn't serve their purpose very well. Any fully autonomous robot bigger than a Roomba may have to make a life-or-death decision someday; we need to be ready to face the implications of that.
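To make the tradeoff concrete, here is a purely hypothetical sketch (not drawn from the report or any real system) of a utilitarian planner: it picks the maneuver with the lowest expected human harm, with an optional mission-failure penalty. The maneuver names and cost numbers are invented for illustration; the point is that with no mission weight, anyone who steps in front of the truck forces it to halt, while weighting mission completion changes the choice.

```python
# Hypothetical utilitarian planner for an autonomous convoy truck.
# All names and numbers are illustrative assumptions, not a real design.

def choose_maneuver(options, mission_weight=0.0):
    """options: list of (name, expected_harm, mission_failed) tuples.
    Returns the name of the maneuver with the lowest total cost."""
    def cost(opt):
        _, harm, failed = opt
        # Total cost = expected human harm, plus a penalty if the
        # maneuver aborts the convoy's mission.
        return harm + (mission_weight if failed else 0.0)
    return min(options, key=cost)[0]

# A person jumps in front of the truck; candidate maneuvers:
options = [
    ("brake_hard", 0.05, True),   # likely stops in time, convoy halts
    ("swerve",     0.30, False),  # risks the crowded sidewalk
    ("continue",   0.95, False),  # near-certain harm
]

# Pure utilitarian: harm alone decides, so the truck always halts.
print(choose_maneuver(options))                      # brake_hard
# Valuing mission completion flips the decision.
print(choose_maneuver(options, mission_weight=0.5))  # swerve
```

The uncomfortable part is not the code, which is trivial, but choosing `mission_weight`: that single number encodes exactly the ethical question the report is asking.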