Saturday, February 21, 2009

The Three Laws and Then Some

Asimov had one critical failure of imagination when dreaming up his future worlds inhabited by robots, or maybe he was just giving humanity too much credit: he completely missed the possibility that the greatest initial demand for highly autonomous robots would come from the military. He may not have considered them, but the military is certainly not ignoring him. The Office of Naval Research has released the first full military report on the potential ethical considerations of using semi- and fully-autonomous robots in combat, "Autonomous Military Robotics: Risk, Ethics, and Design" (.pdf). It's fascinating, if slightly disconcerting, and the topic is pressing: Congress has mandated that one-third of deep-strike aircraft be unmanned by 2010, and the same portion of ground combat vehicles by 2015.

Whether or not we decide to let a gun-toting robot pull its own trigger, questions of robot ethics matter for more mundane 'bots, too. The first fully-autonomous ground vehicles will likely be transports for convoy operations. Just autonomous trucks. But how will a truck react if a child runs into the street? What if there are crowds on the sidewalks?

It seems straightforward that robots would be programmed as pure utilitarians; after all, we can expect a computer to calculate the course of action leading to the least human suffering, and to do it far faster and more reliably than a human operator under stress. Then again, human cost can't be the only consideration: if anyone could safely force an autonomous truck to crash just by jumping in front of it, the truck wouldn't serve its purpose very well. Any fully-autonomous robot bigger than a Roomba may have to make a life-or-death decision; we need to be ready to face the implications of that.
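To make that tension concrete, here's a minimal sketch of how a utilitarian action chooser might score its options. Every action, probability, and weight below is invented for illustration; nothing here comes from the ONR report.

```python
# Hypothetical utilitarian action chooser for an autonomous convoy truck.
# All actions, probabilities, and weights are invented for illustration.

ACTIONS = {
    # action: (probability of injuring someone, probability the mission fails)
    "brake_hard":  (0.05, 0.60),
    "swerve_left": (0.20, 0.30),
    "continue":    (0.90, 0.00),
}

# How much one failed delivery "costs" relative to one expected injury.
MISSION_WEIGHT = 0.1

def expected_cost(p_harm: float, p_fail: float, w: float = MISSION_WEIGHT) -> float:
    """Pure utilitarian score: expected injuries plus weighted mission risk."""
    return p_harm + w * p_fail

def choose(actions: dict) -> str:
    return min(actions, key=lambda a: expected_cost(*actions[a]))

print(choose(ACTIONS))  # with MISSION_WEIGHT = 0.1, "brake_hard" wins
```

The whole dilemma lives in that weight: set it to zero and anyone can halt the convoy by stepping into the road; set it too high and the truck becomes a menace. Someone has to pick the number.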

2 comments:

Shane said...

Robots have always fascinated me. Thanks for the link.

I've always felt that Asimov's 3 laws were totally inadequate, and far too crude to be realistic (or even desirable). The ambiguity of the word "harm" renders the laws useless without further clarification. His own stories, of course, investigated the inadequacies of his laws, like the one where robots kept trying to save humans from minute amounts of radiation. And I'm not familiar with any story of his that deals with probabilities of harm, or statistical harm. Imagine such a robot trying to decide whether to administer a risky cancer treatment or an invasive surgery.
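To put numbers on what I mean by statistical harm (every probability below is made up for the example):

```python
# Toy expected-harm comparison for the risky-surgery case.
# All probabilities are invented for illustration.

p_death_untreated    = 0.40  # chance the disease kills the patient untreated
p_death_from_surgery = 0.10  # chance the surgery itself is fatal
p_cure_if_survived   = 0.80  # chance surgery cures a surviving patient

# Chance of death if the robot authorizes the surgery:
p_death_treated = p_death_from_surgery + \
    (1 - p_death_from_surgery) * (1 - p_cure_if_survived) * p_death_untreated

print(p_death_treated)  # 0.10 + 0.90 * 0.20 * 0.40 = 0.172, versus 0.40
```

Under a literal reading of the First Law, both branches "harm" the human, so the law gives no guidance at all; only a probabilistic reading does.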

The Washington Post ran a good story a few months back, "Bots on the Ground," about the psychological attachments that soldiers develop to their robots.

The real danger with weaponized AI is that (a) AI is almost by definition unpredictable (which is what makes it interesting and useful), and (b) AIs can be designed to be "smarter" than their creators; chess-playing algorithms already beat the people who wrote them. I suppose any weaponized AI would need a separate, dumb kill switch, and probably dumb weapons that require human authentication for each use. I'd trust the NSA to be able to design well-implemented authentication algorithms, but only barely.
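By "dumb" I mean something with no model and no learning in it at all, a verifier the AI can't argue with. A purely hypothetical sketch of the shape (real key management and replay protection are waved away here):

```python
import hashlib
import hmac
import time

# Hypothetical "dumb" weapons interlock: the AI may request to fire, but
# nothing happens without a fresh token signed by a human operator's key.

SHARED_KEY = b"issued-to-one-human-operator"  # placeholder, not real key handling
TOKEN_LIFETIME_S = 5.0                        # authorization goes stale quickly

def sign(target_id: str, timestamp: float) -> bytes:
    """Runs on the human operator's console, never on the AI side."""
    msg = f"{target_id}|{timestamp}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

def authorize_fire(target_id: str, timestamp: float, signature: bytes) -> bool:
    """Deliberately simple: no inference, just verification and a timeout."""
    if time.time() - timestamp > TOKEN_LIFETIME_S:
        return False  # stale authorization
    expected = sign(target_id, timestamp)
    return hmac.compare_digest(expected, signature)
```

The property that matters is that the clever, unpredictable part of the system never holds the key; it can only ask.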

And how can we have any discussion of weaponized AI without mentioning xkcd's commentary on genetic algorithms?

Elephantschild said...

If we all have our DNA-encoded transponder chips implanted properly, the robot will be able to tell friendlies from the bad guys without any problems.