Friday, July 1, 2011

Robopocalypse



Daniel H. Wilson was recently interviewed on Science Friday by Ira Flatow about his new book Robopocalypse. He also has an earlier book called How to Survive a Robot Uprising.

The premise of the book is that every machine with a computer chip goes out of control and tries to take over humanity. The story is set in the near future, with technology not much more advanced than today's.

I've always wondered, when watching stories like Battlestar Galactica or the Terminator series, why the robots would want to wipe out people. What about their own self-interest? Without humans, they would lose the power that human innovation provides, and it has always seemed to me that robots would not be able to figure out new things very easily on their own.

This made me think about the Robot Soul. What is the heart of a robot's being? What makes them tick? It seems like you would build in a willingness to do dangerous things, and in exchange the robot would get a new body if it failed. You would constantly keep a copy of the robot's program, and if any particular robot were destroyed while carrying out its duty, you would simply download that program into a new body. You could also clone the program into multiple new robots, like having children. For a human, that promise of never dying would be immortality, and the hope of being copied would be procreation. Robots would have two of the things that humans most want and strive for.
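Just as a thought experiment, here is a minimal sketch of what that backup-and-restore scheme might look like in code. The RobotProgram and RobotFleet classes and all of their names are my own illustrative assumptions, not anything from the book.

```python
import copy
import uuid


class RobotProgram:
    """Illustrative stand-in for a robot's 'soul': its code plus learned state."""
    def __init__(self, name, memories=None):
        self.name = name
        self.memories = memories or []


class RobotFleet:
    """Keeps a backup of every program so a destroyed robot can be restored."""
    def __init__(self):
        self.backups = {}   # robot id -> latest snapshot of its program
        self.active = {}    # robot id -> program currently running in a body

    def deploy(self, program):
        robot_id = uuid.uuid4().hex
        self.active[robot_id] = program
        self.backups[robot_id] = copy.deepcopy(program)  # snapshot the "soul"
        return robot_id

    def checkpoint(self, robot_id):
        # Periodically refresh the backup so a restored robot keeps recent memories.
        self.backups[robot_id] = copy.deepcopy(self.active[robot_id])

    def restore(self, robot_id):
        # "Immortality": download the last snapshot into a fresh body.
        reborn = copy.deepcopy(self.backups[robot_id])
        return self.deploy(reborn)

    def clone(self, robot_id, count=2):
        # "Procreation": copy one program into several new bodies.
        return [self.deploy(copy.deepcopy(self.backups[robot_id]))
                for _ in range(count)]
```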

Years ago, Isaac Asimov wrote about robotics and came up with the Three Laws of Robotics. The gist of these rules was that a robot could do no harm to humans. Many people accepted this as a precondition of developing robots: some kind of safeguard would be built in. Yet today, we work on military robots to take the place of soldiers and program them to kill our enemies. This seems like a terrible idea to me. What if this technology were turned against us by our enemies? What if it became aware, developed a conscience, and decided that being used to kill one person's enemy was not right? What if the robots selectively turned against anyone who ordered them to kill someone else? What if robots decided not to let humans order them to kill other humans?

The solution would probably be avatars. The killing machines would be robots that operated normally most of the time, then ceded control when it was time to kill a person. At that moment, a human would supply the controlling commands through a virtual interface, relieving the robot of guilt or getting around the prohibition against killing humans. You have to figure we'll find a way around any restriction if we feel we need the capability.
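Again purely as a thought experiment, here is a rough sketch of that handoff: the robot runs autonomously until a lethal action is requested, and the action only goes through when a human operator has taken the controls. The AvatarRobot and HumanOperatorLink classes and their method names are hypothetical, invented here just for illustration.

```python
class HumanOperatorLink:
    """Stand-in for the virtual interface a human uses to take control."""
    def __init__(self, operator_name):
        self.operator_name = operator_name

    def fire_on(self, target):
        # In reality this would relay the human's direct commands to the body.
        return f"{self.operator_name} engaged {target}"


class LethalActionError(Exception):
    """Raised if the robot is asked to act lethally while fully autonomous."""


class AvatarRobot:
    """Runs autonomously, but any lethal action requires a human at the controls."""
    def __init__(self):
        self.human_operator = None  # no one at the controls by default

    def cede_control(self, operator_link):
        # The robot hands control to a human through the virtual interface.
        self.human_operator = operator_link

    def release_control(self):
        self.human_operator = None

    def engage_target(self, target):
        if self.human_operator is None:
            # The robot itself never decides to kill.
            raise LethalActionError("autonomous lethal action is prohibited")
        # Both the decision and the act belong to the human operator.
        return self.human_operator.fire_on(target)


robot = AvatarRobot()
robot.cede_control(HumanOperatorLink("a remote soldier"))
print(robot.engage_target("enemy position"))
```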
