7. The Future of Robots

Isaac Asimov (c. January 2, 1920 – April 6, 1992) was an American author and professor of biochemistry, best known for his works of science fiction and for his popular science books. Asimov was one of the most prolific writers of all time, having written or edited more than 500 books and an estimated 9,000 letters and postcards. His works have been published in nine of the ten major categories of the Dewey Decimal System. (Wikipedia)

Isaac Asimov was one of the first to think about the problems and implications robots could bring us in the future. He pictured them as intelligent creatures equal (or superior) to humans. He also developed a set of laws people could use to make sure we keep control over the robots, and to make sure the robots won't harm anyone or any other robot. Asimov once wrote that between 2003 and 2007 most world governments would turn against robots. This is of course not the case: robots aren't as well developed as he thought back then. These days robots aren't capable of thinking and acting for themselves (of course, robots can perform tasks on their own, but they can never operate for long without human guidance and assistance).

The four Laws of Robotics Asimov formulated:

Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate the Zeroth Law of Robotics.
Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the Zeroth or First law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the Zeroth, First or Second law.
The Zeroth Law is called that because Asimov added it later; in his early stories the robots were incapable of following it. Today this law is still quite out of reach: it is almost impossible for humans to protect humanity as a whole, let alone for robots.

The First Law is of course also meant to protect humans. In fact, it is illegal to produce a robot made to harm humans. This is why all robots (in the car industry, for example) are equipped with many safety measures to protect people. Most robots have sensors to protect people who come too close. The First Law also says that a robot must protect humans from harm through inaction. This is fairly impossible today, because it means a robot must think, analyse a situation and then intervene if necessary. You could say that some robots do have a way of doing this. Some computers have a built-in device that ensures that when external power fails, the computer automatically switches to its own power supply, so no information is lost.
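The power-failover behaviour described above can be sketched as a small decision rule. This is purely an illustration; the function and the 5% reserve threshold are assumptions for this sketch, not a real hardware API:

```python
# Illustrative sketch of power failover: when external power drops, the
# machine switches to its internal battery so no information is lost.
# All names and thresholds here are assumptions, not a real device API.

def choose_power_source(mains_available: bool, battery_level: float) -> str:
    """Return which power source the machine should use."""
    if mains_available:
        return "mains"
    if battery_level > 0.05:        # assumed 5% reserve before shutdown
        return "battery"            # keep running; no information is lost
    return "safe-shutdown"          # save state before power runs out

# Usage: mains power fails while the battery still holds a charge.
print(choose_power_source(False, 0.80))  # battery
```

The point of the sketch is that the machine acts without being told to: it notices the failure and protects the user's data on its own, a very small version of "through inaction, allow no harm".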

The Second Law is also not really important these days. Robots today need instructions from humans, otherwise they won't be able to perform actions, so this law holds almost automatically. There are, however, examples of machines that react to a human order differently than the human intended, and they do this to meet the standards of the First Law. Mercedes has a built-in system: when the driver taps the brake very quickly, the car automatically brakes longer and harder, because the machine concludes the human wants to slow down as fast as possible. Even though the driver does not brake at full power, the machine does it for him. Here the car protects the human even though the human gave different orders.
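The brake-assist idea above boils down to one rule: a very fast pedal tap is treated as an emergency, and the system overrides the driver's partial input with full braking force. The sketch below is a hypothetical illustration, not Mercedes' actual algorithm; the threshold value is an assumption:

```python
def brake_force(pedal_speed: float, pedal_pressure: float) -> float:
    """Return braking force to apply, 0.0-1.0 (illustrative sketch).

    pedal_speed:    how fast the pedal was pressed (assumed scale)
    pedal_pressure: how hard the driver is pressing (0.0-1.0)
    """
    EMERGENCY_SPEED = 0.5  # assumed threshold for a "quick tap"
    if pedal_speed >= EMERGENCY_SPEED:
        # The system infers an emergency stop and overrides the driver's
        # partial pressure with full force (protecting the human first).
        return 1.0
    return pedal_pressure

print(brake_force(0.8, 0.4))  # 1.0 : emergency inferred, full force
print(brake_force(0.1, 0.4))  # 0.4 : normal braking, driver's own input
```

The design choice mirrors the law ordering: the machine obeys the driver's order (Second Law) except where its reading of the situation says the human's safety demands more (First Law).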

The Third Law is today perhaps more important than Laws One and Two. Many robots have a built-in system that makes sure that when their battery runs low, they return to their charging station or send out a signal that they are nearly exhausted. A MacBook goes to 'sleep' when its battery is low, saving information and making sure it lasts long enough for the user to reconnect it to power.
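Both behaviours described above (a robot returning to its charger, a laptop sleeping) can be sketched as one low-battery rule. The threshold and action names below are assumptions made for this illustration only:

```python
def low_battery_action(battery_level: float, is_mobile_robot: bool) -> str:
    """Decide what a machine does when its battery runs low (illustrative).

    The 15% threshold and the action names are assumptions for this sketch.
    """
    if battery_level > 0.15:
        return "continue"                    # enough charge, keep working
    if is_mobile_robot:
        return "return-to-charging-station"  # protect its own existence
    return "sleep-and-save-state"            # e.g. a laptop saving its work

print(low_battery_action(0.10, True))   # return-to-charging-station
print(low_battery_action(0.10, False))  # sleep-and-save-state
```

In both branches the machine follows the Third Law: it protects its own existence (and, by saving the user's work, avoids harm to the human as well).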

The future of emotional machines and robots: implications and ethical issues

There are a lot of ethical and moral implications to the development of smart robots. What will happen when robots have emotions too and people form strong emotional bonds with them?

Robots with emotions can provide us with a lot of opportunities, but with those opportunities also come a lot of dangers. When this happens, it is extremely important that humans always stay in control. If robots become smarter and develop emotions, will they, for example, take over the job of teachers? Probably not, but they will complement them. There are situations in which robots can be better teachers than humans: when someone is travelling or in some distant location, they will be able to learn from robotic teachers. Another problem with the way teaching works today is that a class of students is often listening to a teacher telling them things they don't really care about, or things they're not interested in at that moment. It has been shown that a student learns much better when he has to do things himself: make mistakes, struggle with the things he has to learn or understand. People always learn faster and better when they really care about something and when they can see, feel and hear the things they have to learn. Robots can make this easier because they have access to all the information in the world, and they can keep students interested every time. Human teachers will still be necessary, but more in a supporting and constructive way.

In other areas, robots can be an excellent replacement for humans. Some jobs are dangerous for humans, so those jobs can be performed by robots to save people's lives. But on the other hand, things like robbery, murder and terrorism can also be carried out by smart, thinking robots.
The smarter a robot gets, the more it will be able to do. More and more jobs will therefore be taken over by robots, making more people unemployed. This has already happened with a lot of factory work: many tasks, such as assembling cars, have been taken over by robots. On a small scale this is not a big problem, but when robots can take over the work of large numbers of people, it becomes a serious issue.

When robots do get more emotions, people will also get more and more attached to them. Today there are robot pets for sale, and some people already show strong emotional bonds with them, even though there is extremely little emotion in them. This will also raise questions about responsibility. If a dog bites someone, it is the owner's fault. If a robot dog were to bite someone, is it still the owner's fault? Or the robot's fault? Or the manufacturer's?