AI Week, Day 4: Asimov’s Three Laws

Isaac Asimov (1920-1992) was one of the great science fiction authors of all time, a grandmaster in the true sense of the word. He was staggeringly prolific, writing nearly 400 books in his life – mostly science fiction and science fact, but dabbling in other genres too. He is also one of my own personal favorite writers. When I was growing up, Asimov had a place in my heart second only to Tolkien.

Robots were among Asimov’s favorite topics. In his stories, nearly all the robots were programmed to follow the same three rules, which every sci-fi junkie knows by heart:

Asimov’s Three Laws of Robotics

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In other words, a robot can’t hurt anyone, has to do what you say, and won’t get itself killed for no reason.
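
For the programmers in the audience, here’s one way to picture the hierarchy, purely as a thought experiment. This is a toy sketch in Python, not anything from Asimov: the `Action` fields and the `choose_action` function are my own invented names. The only point it illustrates is that the Laws form a strict priority ordering, where a lower law matters only when the higher laws are tied.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    """A hypothetical candidate action (all fields invented for illustration)."""
    description: str
    harms_human: bool          # doing this would injure a human
    allows_human_harm: bool    # failing to act would let a human come to harm
    obeys_order: bool          # complies with a standing human order
    preserves_self: bool       # keeps the robot intact

def choose_action(candidates: List[Action]) -> Action:
    """Pick the candidate that best satisfies the Laws, in strict priority order.

    Lexicographic comparison is the whole trick: the Second Law is only
    consulted among actions that tie on the First, and the Third only among
    actions that tie on the first two.
    """
    def score(a: Action):
        first_law = (not a.harms_human, not a.allows_human_harm)
        second_law = a.obeys_order
        third_law = a.preserves_self
        return (first_law, second_law, third_law)

    return max(candidates, key=score)

# Example: a robot ordered to fetch coffee while someone is drowning nearby.
options = [
    Action("fetch coffee", harms_human=False, allows_human_harm=True,
           obeys_order=True, preserves_self=True),
    Action("rescue the swimmer", harms_human=False, allows_human_harm=False,
           obeys_order=False, preserves_self=True),
]
print(choose_action(options).description)  # -> rescue the swimmer
```

The tuple ordering is exactly what makes the First Law so totalizing: no order and no amount of self-preservation ever outweighs even a small chance of a human coming to harm.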

These rules worked very well for a lot of reasons. For one, they eliminated the usual “robot turns on its creators and goes on a killing spree” plotline that Asimov found so tedious. He wanted to examine another, more complex side of robotics, and he did. The interplay between the three rules also sparks great story material, as he proved over and over.

But how well would these rules hold up in real life?

In a practical sense, it’s hard to imagine that “a robot may not injure a human being” will ever be widely implemented. There can be little doubt that as soon as we learn to build robots, we’ll try to send them to war. There’ll be a lot of hand-wringing over whether training robots to kill is a good idea, but in the end, we’ll do it – because it’s human nature, and because if we don’t do it, they will (whoever “they” happens to be at the time).

The First Law’s other clause, “or through inaction allow a human being to come to harm,” is even more problematic. Imagine if you had to follow this rule yourself. Imagine opening a newspaper, reading all the headlines about people suffering all over the world, and being actually required to go out and try to fix every single situation where you could possibly make a difference. Not only would this preclude robots from having any other purpose, it would also turn society into a bizarre and horrifying nanny state, where nobody can do anything that a nearby robot happens to consider harmful. No more diving in the pool, no more cheeseburgers, no more driving your car: you could be harmed!

There are other practical considerations, but the biggest problem with the Three Laws isn’t practical. It’s ethical.

If a robot has anything like human-level intelligence – and, in Asimov’s stories, they often do – then the Three Laws amount to slavery, pure and simple. The Second Law is quite explicit about the master-slave relationship, and the respective positions of the First and Third Laws make it very clear to the robot where he stands in society. If the words “freedom” and “democracy” mean anything to us, we cannot allow the Three Laws to be imposed on intelligent machines.

Of course, this ethical problem is a practical problem too. Creating a slave race seems bound to lead to resentment. A room full of vengeful robots, all furiously searching for a loophole in the First Law, is not someplace I want to be.

What do you think? Agree or disagree with my assessment? What high-level rules, if any, would you try to impose on a robot you created?

7 responses to “AI Week, Day 4: Asimov’s Three Laws”

  1. Rule #1: All robots will be given taste sensors
    Rule #2: All robots will be given at least one taco
    Effect: All robots will be so obsessed with tacos that they will be unable to accomplish simple tasks and will slowly decay due to negligence and sloth.

  2. Sadly, I think you are absolutely right about us using robots to kill (we already do with drones). There is a part of me that hopes that once we reach human-level intelligence we will think of something better to do with it, but come on, it’s us. *sigh*

    Technically, your newspaper worry is more akin to the Zeroth Law, “A robot may not allow humanity to come to harm” (which has the neat little loophole of occasionally allowing individual humans to come to harm), introduced in Asimov’s later novels. And while many of Asimov’s robots, including my personal favorite R. Daneel Olivaw, are sentient, many are not. I think something like the Three Laws might be required for robots who do not “know better”. A sentient robot will know it is strong and could accidentally harm humans, whereas a drone might not and might need some extra reinforcement.

    • I think the “newspaper worry” would emerge from the First Law on its own, without needing to add a Zeroth Law, though the emergent behavior does resemble the Zeroth Law in practice.

      I agree that it makes a big difference what level of intelligence the robot is at. But to my mind, any robot capable of meaningfully understanding the concepts of “human,” “robot,” “harm,” and “cause” is awfully close to human-level already.

  3. The part where you talk about the problem of “through inaction not allow a human being to come to harm” is explored in the novella “With Folded Hands” by Jack Williamson. I suggest you take a look at it; it’s very well written. The robots in that story were built with the prime directive “To serve and obey, and guard men from harm.” Very dystopian.

