Isaac Asimov (1920-1992) was one of the great science fiction authors of all time, a grandmaster in the true sense of the word. He was staggeringly prolific, writing nearly 400 books in his life – mostly science fiction and science fact, but dabbling in other genres too. He is also one of my own personal favorite writers. When I was growing up, Asimov had a place in my heart second only to Tolkien.
Robots were among Asimov’s favorite topics. In his stories, nearly all the robots were programmed to follow the same three rules, which every sci-fi junkie knows by heart:
Asimov’s Three Laws of Robotics
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In other words, a robot can’t hurt anyone, has to do what you say, and won’t get itself killed for no reason.
These rules worked very well for a lot of reasons. For one, they eliminated the usual “robot turns on its creators and goes on a killing spree” plotline that Asimov found so tedious. He wanted to examine another, more complex side of robotics, and he did. The interplay between the three rules also sparks great story material, as he proved over and over.
But how well would these rules hold up in real life?
In a practical sense, it’s hard to imagine that “a robot may not injure a human being” will ever be widely implemented. There can be little doubt that as soon as we learn to build robots, we’ll try to send them to war. There’ll be a lot of hand-wringing over whether training robots to kill is a good idea, but in the end, we’ll do it – because it’s human nature, and because if we don’t do it, they will (whoever “they” happens to be at the time).
The First Law’s other clause, “or through inaction allow a human being to come to harm,” is even more problematic. Imagine if you had to follow this rule yourself. Imagine opening a newspaper, reading all the headlines about people suffering all over the world, and being actually required to go out and try to fix every single situation where you could possibly make a difference. Not only would this preclude robots from having any other purpose, it would also turn society into a bizarre and horrifying nanny state, where nobody can do anything that a nearby robot happens to consider harmful. No more diving in the pool, no more cheeseburgers, no more driving your car: you could be harmed!
There are other practical considerations, but the biggest problem with the Three Laws isn’t practical. It’s ethical.
If a robot has anything like human-level intelligence – and, in Asimov’s stories, they often do – then the Three Laws amount to slavery, pure and simple. The Second Law is quite explicit about the master-slave relationship, and the respective positions of the First and Third Laws make it very clear to the robot where it stands in society. If the words “freedom” and “democracy” mean anything to us, we cannot allow the Three Laws to be imposed on intelligent machines.
Of course, this ethical problem is a practical problem too. Creating a slave race seems bound to lead to resentment. A room full of vengeful robots, all furiously searching for a loophole in the First Law, is not someplace I want to be.
What do you think? Agree or disagree with my assessment? What high-level rules, if any, would you try to impose on a robot you created?