Saturday, November 27, 2010

Robotics Laws

Anyone who has seen a Terminator movie or knows the old Battlestar Galactica story line knows that there is a fear of our own technology taking over or trying to destroy humanity. It's a common theme and makes for a great story line. Dune is another story that fits this bill. The original stories were set in a future more than 12,000 years from now, where the government was feudal and the technology was advanced in some respects and stagnant in others. After Frank Herbert died, his son extended the Dune story line backwards, taking it back to the often-mentioned Butlerian Jihad, when computers were banned. Underneath this more than 30-year-old novel was a baseline story about humanity surviving a war with machines. If you continue to read the saga as it runs forward in time from the original story, you also see that the war never really ended, just got delayed for millennia.

Isaac Asimov, arguably the most prolific science fiction writer of all time, started writing short stories about robots in the 1940s and collected them into his classic I, Robot in 1950. This was before modern computers existed, yet he was speculating that we would eventually develop autonomous machines that were self-aware. He wasn't the first. A Czech playwright (Karel Čapek) had coined the word about 20 years previously, from a root Slavic word for worker or serf, although the story you commonly hear is that the root word is slave. When Asimov set out to flesh out the idea, he thought it through very thoroughly, as only he could. What if we build these thinking machines and we lose control of them? We could certainly make them more powerful or faster than a human, so how would we ever regain control once we lost it? He speculated that the human designers, in order to build in features that would prevent that loss of control, would develop laws that would be hard-wired into robots to prevent them from harming humans. These laws of robotics are well known to most Sci-Fi geeks and Tech-Heads, and I'll sketch their strict ordering right after the list. They are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
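Since the precedence matters more than any individual rule, here is a minimal toy sketch, in Python, of how that strict priority ordering might be encoded. Everything in it (the Action record, the permitted function) is my own invention for illustration, not anything from Asimov:

    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool           # would doing this injure a human?
        inaction_harms_human: bool  # would *not* doing this let a human come to harm?
        ordered_by_human: bool      # did a human order this action?
        endangers_robot: bool       # does this action risk the robot's existence?

    def permitted(a: Action) -> bool:
        # First Law outranks everything: never injure a human,
        # and never stand idle while one comes to harm.
        if a.harms_human:
            return False
        if a.inaction_harms_human:
            return True   # compelled to act, whatever the lower laws say
        # Second Law: obey human orders, now that the First Law is satisfied.
        if a.ordered_by_human:
            return True
        # Third Law: self-preservation, last in line.
        return not a.endangers_robot

    # A human orders the robot to hurt someone: the First Law wins.
    print(permitted(Action(harms_human=True, inaction_harms_human=False,
                           ordered_by_human=True, endangers_robot=False)))  # False

The point of the sketch is the cascade: each law only gets a vote after every law above it has passed, which is exactly the "except where such orders would conflict" wiring Asimov described.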

This would seem to ensure that robots stay "in their place", as willing and pliant servants of humans. I always thought that this was a little naive. What do you do about the rogue designer who decides that these rules are not good and doesn't want to use them?

Well, it seems more and more that that time is approaching. The use of drones in combat has increased markedly in the seven years since the start of the wars in Afghanistan and Iraq. In the beginning, these were remotely piloted vehicles that simply removed the operator from the airplane but still kept a person in control. As time has passed, more and more of the functions of these drones have been automated. Exactly how autonomous they are is a military secret, so the question is whether we are already at the point of having autonomous machines, probably not self-aware but capable of action independent of human control. We know the drones were originally going to be used only for surveillance and now carry weapons. We also see other "robots" on the battlefield, machines used for surveillance or bomb handling. These machines are probably more primitive in the way of computer intelligence and control, but they represent the early stages of work toward mechanizing the battlefield.

With the prospect of human deaths becoming less and less acceptable to the public, we see more moves to put machines in harm's way in lieu of humans. There are already attempts to build personal vehicles that are little more than mechanical assists for a human form. That easily develops into a robot body, one that might have a person inside or be controlled remotely. Given manpower shortages for just about any task you could conceive of on a battlefield, how long before you provide these robots with weapons systems and computer controls and allow them to operate independently? Of course, their job would be to kill people, so the Laws of Robotics as Asimov envisioned them would be the farthest thing from the designer's mind.

I recently heard John Hodgman make a comment about the inevitable robot uprising. While he mentioned it humorously, the matter-of-fact way he slipped it into the conversation, as if it were inevitable, is disturbing if you think about it. Most religions have a future end-time built into their narrative. The devout and faithful often believe that things will end badly for humankind; they just don't know when. I've always felt that thinking like this was in some ways wishful thinking, and in other ways a guarantee of a self-fulfilling prophecy. It doesn't matter if you don't want it to happen if you believe it will happen. Once you believe in something, your actions will most likely make it more likely to occur.

The actions of our government, if it pursues combat robots, virtually ensure that we are heading in that direction. You would tend to build a strong self-preservation instinct into your combat robot, or else it wouldn't be much of a fighter. People would naturally try to hack into the controlling program to shut the robot down; that's an obvious response to the threat. The defense would be to harden the robot's ability to resist being shut down or diverted from its mission - a recipe for loss of control. If one nation developed combat robots, it would virtually ensure that its enemies would try to develop them too. In order to be competitive in some future theater where one country is sending its combat robots against another country's robots of the same kind, designers would tend to make their robots tougher and more brutal than the enemy's. Again, a perfect recipe for disaster.

Do I sound like a technological armageddonist? Well, I don't think I am, because I don't think this will happen. I believe that we usually err on the side of sanity, and societal checks and balances tend to be employed before most situations get out of hand. But the first step in creating a check or balance for an unstable situation is to realize how that situation could go out of control. Murphy was an engineer, after all. So our challenge here is to recognize the worst consequences of our actions and provide regulation and oversight for groups working on this kind of technology. Also, it doesn't hurt to develop a strong defense. Anyone seen the EMP device in the Matrix series?
