Monday, November 22, 2010

The Future of Robotics

There seem to be a few different forms that robotics could take. Currently there are humanoid robots (like ASIMO), modular robots, educational toy robots, and sports-related robots (hopefully culminating in real-life Cyberball at some point). The path that most robotics research seems to take (or maybe this is just the type of robot that gets the most attention in the media) is the humanoid robot: a robot meant to mimic or interpret human facial expressions, with the goal of one day being capable of real human-like emotions. Robots are being built to imitate human expressions, to think, and to respond to stimuli. At some point, we will develop robots that are able not only to see, hear, touch, and smell, but also to feel a range of emotions.


Many robotics engineers have been influenced in their programming by the writings of science fiction author Isaac Asimov. Asimov is famous for creating a series of laws, imagined as being developed some time in the future, to govern robotic emotions and behavior patterns. Initially, Asimov had three laws, which can be summarized as follows:

First Law:
A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Second Law:
A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.

Third Law:
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In his science fiction writing, Asimov wrote many short stories that played upon the "loopholes" in his original three laws of robotics. Even as early as 1950, Asimov was making changes to the laws. By 1985, he had revised them once again, adding a "Zeroth Law" that takes precedence over the original three. The revised set of laws is listed below:

Asimov's Revised Laws of Robotics (1985)
Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate the Zeroth Law of Robotics.
Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the Zeroth or First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the Zeroth, First, or Second Law.

These laws were later extended into a still larger set.

An Extended Set of the Laws of Robotics
The Meta-Law: A robot may not act unless its actions are subject to the Laws of Robotics.
Law Zero: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
Law One: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order Law.
Law Two:
  • A robot must obey orders given it by human beings, except where such orders would conflict with a higher-order Law.
  • A robot must obey orders given it by superordinate robots, except where such orders would conflict with a higher-order Law.
Law Three:
  • A robot must protect the existence of a superordinate robot as long as such protection does not conflict with a higher-order Law.
  • A robot must protect its own existence as long as such protection does not conflict with a higher-order Law.
Law Four: A robot must perform the duties for which it has been programmed, except where that would conflict with a higher-order Law.
The Procreation Law: A robot may not take any part in the design or manufacture of a robot unless the new robot's actions are subject to the Laws of Robotics.
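
Out of curiosity, here is a minimal sketch of how a precedence hierarchy like this might be encoded in software, written in Python. Everything in it (the Law class, the permitted function, the toy action flags) is hypothetical, invented purely to illustrate the idea of higher-order laws vetoing lower-order ones; it is not how any real robot is programmed.

# A purely hypothetical sketch: the laws as an ordered list of veto
# checks, where the first violated law (highest precedence) blocks the
# action. None of these names come from any real robotics API.
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Law:
    name: str
    violates: Callable[[dict], bool]  # True if the proposed action breaks this law

# Precedence order: earlier entries outrank later ones, mirroring the
# "unless this would violate a higher-order Law" clauses above.
LAWS = [
    Law("Zeroth", lambda a: a.get("harms_humanity", False)),
    Law("First",  lambda a: a.get("harms_human", False)),
    Law("Second", lambda a: a.get("disobeys_human_order", False)),
    Law("Third",  lambda a: a.get("endangers_self", False)),
    Law("Fourth", lambda a: a.get("neglects_programmed_duty", False)),
]

def permitted(action: dict) -> Tuple[bool, Optional[str]]:
    """Return (allowed, name of the vetoing law, if any)."""
    for law in LAWS:
        if law.violates(action):
            return False, law.name
    return True, None

# A human orders the robot to harm another human: obeying would violate
# the First Law, which outranks the Second, so the action is refused.
print(permitted({"harms_human": True}))           # (False, 'First')
print(permitted({"disobeys_human_order": True}))  # (False, 'Second')
print(permitted({}))                              # (True, None)

Of course, the precedence logic is the easy part; the genuinely hard problem is deciding whether an action "harms a human" in the first place.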

So what's the point of all of these "laws"? They serve as a guide for robotics engineers, reminding them that when they program their robots they must think of humanity as a whole. Science fiction definitely has its share of pessimism with regard to robots and their capacity to be beneficial to mankind. Many science fiction storylines deal with the "flaws of humanity": the machines go wrong because their creators have passed their own flaws on to their creations.

We also have to think of the social implications of robots that have emotions. Will these future robots need, or demand, the same rights that humans enjoy? Again, science fiction has dealt with these issues, and when we were discussing this in class last week I remembered an episode of Star Trek: The Next Generation that dealt with this very issue. As you may or may not know, this Star Trek series featured a character named Data, a sentient android created by a scientist named Dr. Noonien Soong. In the episode "The Measure of a Man," court proceedings are held to determine Data's legal rights after a scientist seeks to disassemble Data in order to learn how he works and attempt to duplicate Dr. Soong's work on the "positronic brain" (you've got to love Trek jargon). The arguments in the episode center on whether Data is Starfleet property or whether he should enjoy rights as an autonomous individual. Although it was more of a "courtroom drama," and an episode that was probably quite cheap to make (since there were no special effects per se), it is one of my favorites from the series because of the ideas and issues it raises. I'm posting the entire episode below from YouTube.
In Part 1, you can skip to about the 5-minute mark and not miss too much.

[Embedded YouTube videos: "The Measure of a Man," Parts 1 through 5.]
