What books should a robot read to learn morality?

Sometimes good is harder than best.

If it’s just a question of math, a computer can come up with the best answer more reliably than a human.

If, however, it’s a question of good as in good and evil, most people would say that a robot can’t make that decision.

Robots are going to have to make those decisions in the future, though.

As robots (and by “robots,” I mean anything that does work that humans or animals used to do…that’s the origin of the word, from the play R.U.R.) become more and more a part of our lives, they will encounter more of the situations we do. They will operate less and less in controlled, limited circumstances.

Let’s take the most obvious example: self-driving cars.

I consider it inappropriate to call them “driverless” cars. That’s inaccurate, and unnecessarily scary. There is a driver: not a human, carbon-based driver, but a silicon-based one. 😉

Does the robot need to figure out if it can make the green light? No problem. Eventually, we won’t even need traffic lights, when the cars are communicating with each other and recognizing that there are pedestrians who need to cross.

It can stay in the lane and avoid obstacles.

However…

Let’s suppose that the car loses its brakes…on a mountain road next to a cliff.

The light up ahead is red, and the crosswalk is full of people.

The car does a quick calculation. If it goes straight ahead, it will hit and kill at least five people.

It could also swerve off the cliff, killing just one person, its passenger.

I think if you asked most people what they would do if they were driving the car, they’d say they’d drive off the cliff.

Would you get in a car that would make that same decision?

My guess is that most people would say no.

That’s part of the problem.

We don’t want our technology to be just as good as we are; we want it to be perfect. If a “phone dialer” dialed the wrong number once in a thousand times, we’d consider it unreliable, even though humans misdial far more often than that.

If you were just programming the car, you could program it to drive off the cliff.
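
To make that concrete: here’s a minimal, hypothetical sketch (in Python, with invented names like crosswalk_count) of what that hard-coded rule might look like. This is an illustration of the idea only, not how any real autonomous-vehicle software works.

    # A deliberately naive, hard-coded rule: the brakes have failed, and the
    # car must either go straight (into the crosswalk) or swerve (off the
    # cliff). All names and numbers here are invented for illustration.

    def choose_action(crosswalk_count: int, passenger_count: int) -> str:
        """Return "swerve" (off the cliff) or "straight" (into the crosswalk)."""
        if crosswalk_count > passenger_count:
            return "swerve"  # sacrifice the passenger(s) to save more lives
        return "straight"

    print(choose_action(crosswalk_count=5, passenger_count=1))  # -> swerve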

Let’s complicate it.

Suppose there is a ten percent chance the car can pull off a maneuver in which the passenger (we’ll start saying “you”) survives and so do the people in the crosswalk. There’s a ninety percent chance that, if it tries, the five people will die and the passenger will live.

What if it was a twenty-five percent chance?

Fifty-fifty?

Seventy-five percent chance it comes out fine?

Ninety percent chance everyone makes it…and ten percent chance they all die and you survive?

It’s just math, right?
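
If it were just math, it would look something like the sketch below: compare the expected number of deaths for each option. The probabilities and the “minimize expected deaths” rule are assumptions made for illustration, and choosing that rule is itself a moral judgment…which is rather the point.

    # Two options from the scenario above:
    #   "swerve"  -- certain: the one passenger dies, the pedestrians live
    #   "attempt" -- with probability p everyone lives; otherwise the five
    #                pedestrians die and the passenger survives
    # Purely illustrative numbers; not a claim about real systems.

    def expected_deaths(option: str, p_success: float) -> float:
        if option == "swerve":
            return 1.0
        if option == "attempt":
            return (1.0 - p_success) * 5
        raise ValueError(option)

    for p in (0.10, 0.25, 0.50, 0.75, 0.90):
        better = min(("swerve", "attempt"), key=lambda o: expected_deaths(o, p))
        print(f"p={p:.2f}: swerve=1.00, attempt={expected_deaths('attempt', p):.2f} -> {better}")

    # Only above an 80 percent chance of success does attempting the
    # maneuver minimize expected deaths...if minimizing deaths is the goal.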

Let’s back up and make the choice unavoidable.

This time, there are five people in half of the crosswalk, and one person in the other half…the car can pick a lane, and kill one person or five.

We could program the car so that killing fewer people is better than killing more people, right?
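
That rule is trivial to write. Here’s a sketch, again with invented names…and the next few questions show why it isn’t enough.

    # Naive rule: pick whichever half of the crosswalk has fewer people.
    # As soon as you ask *who* those people are, this stops being enough.

    def pick_lane(left_count: int, right_count: int) -> str:
        return "left" if left_count <= right_count else "right"

    print(pick_lane(left_count=1, right_count=5))  # -> left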

What if the five people are serial killers…and the one person is a four-year-old child?

Does it matter if it’s a twenty-four-year-old instead of a four-year-old? Five twelve-year-olds versus a ninety-four-year-old?

There are too many variables for this to be just math.

Nowadays, the most advanced types of AI (Artificial Intelligence) aren’t explicitly programmed, anyway.

They use “machine learning”…in a sense, they learn by example.

AIs have figured out the rules of scissors/paper/rock just from watching videos.
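
As a toy version of “learning by example”: given nothing but observed rounds and who won, a program can infer the beats-relation of the game with no rules programmed in advance. This is a simplified stand-in for what those video-watching systems do, not a description of them.

    # Infer the rules of scissors/paper/rock purely from example rounds.
    # Each observation is (player A's move, player B's move, winning move).
    # The data below is invented for illustration.

    observed_rounds = [
        ("rock", "scissors", "rock"),
        ("scissors", "paper", "scissors"),
        ("paper", "rock", "paper"),
        ("scissors", "rock", "rock"),
        ("paper", "scissors", "scissors"),
    ]

    beats = {}  # learned mapping: move -> the move it defeats
    for move_a, move_b, winner in observed_rounds:
        loser = move_b if winner == move_a else move_a
        beats[winner] = loser

    print(beats)  # -> {'rock': 'scissors', 'scissors': 'paper', 'paper': 'rock'}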

There is an AI system in use in many public transit systems (we have had it in the San Francisco Bay Area)…at least, that used to be the case. It would watch video of the station and figure out normal patterns (on its own). When it saw something strange (such as someone jumping a turnstile, or being on “the wrong side of a fence”, both real examples), it would alert a human for an evaluation.
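
Conceptually, that kind of system learns what “normal” looks like and flags the rest. Here’s a minimal, hypothetical sketch of the pattern (counting event frequencies and alerting on rare ones); the real systems use far more sophisticated video models, and the event names below are made up.

    from collections import Counter

    # Learn a baseline from a stream of routine station events...
    normal_events = ["pay_fare", "enter_gate", "exit_gate"] * 500 + ["drop_ticket"] * 3
    baseline = Counter(normal_events)
    total = sum(baseline.values())

    def is_anomalous(event: str, threshold: float = 0.001) -> bool:
        """Flag events whose observed frequency falls below the threshold."""
        return baseline[event] / total < threshold

    # ...then alert a human for evaluation when something rare shows up.
    for event in ["enter_gate", "jump_turnstile", "wrong_side_of_fence"]:
        if is_anomalous(event):
            print(f"ALERT: unusual event, needs human review: {event}")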

This Guardian article by English professor John Mullan considers the idea of using fictional characters to teach robots morals, as is being tried. The Guardian article references this Georgia Tech article.
I’m going to provide a brief excerpt from the Georgia Tech article (which is from February of 2016):

“Researchers Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology believe the answer lies in “Quixote” – to be unveiled at the AAAI-16 Conference in Phoenix, Ariz. (Feb. 12 – 17). Quixote teaches “value alignment” to robots by training them to read stories, learn acceptable sequences of events and understand successful ways to behave in human societies.”
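
I haven’t seen Quixote’s actual code, but the core idea described there, rewarding an agent for acting in the order that stories say people act, can be sketched in a few lines. Everything below (the event names, the scoring) is invented for illustration; it’s the shape of the idea, not the system itself.

    # Hypothetical sketch of story-based "value alignment": the agent earns
    # reward for performing events in an order matching a sequence distilled
    # from stories, and loses reward for skipping or violating steps.

    story_sequence = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]

    def sequence_reward(agent_actions):
        reward, expected = 0, 0
        for action in agent_actions:
            if expected < len(story_sequence) and action == story_sequence[expected]:
                reward += 1   # followed the socially expected step
                expected += 1
            else:
                reward -= 1   # skipped ahead or did something unexpected
        return reward

    print(sequence_reward(["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]))  # 4
    print(sequence_reward(["enter_pharmacy", "grab_medicine_and_run"]))                      # 0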

When I read this this morning and flipped it into the free ILMK magazine at Flipboard, it really got me thinking.

What would I have a robot read to learn morality? More interestingly to me, what would you have them read?

I need to set a few ground rules:

  • Only fiction: no non-fiction philosophy, and no religious non-fiction (including the books which “define” the major religions)
  • The works must have been originally published for humans to read, not created to teach robots morals
  • You cannot instruct the robot as to what is good or bad in the book…or even who the hero is. We will accept that the robot has an excellent understanding of English (or whatever language you have it read), including subtleties like humor. Think of it as an intelligent human being, but one that is naive about morality

While I’d like a robot to think like Doc Savage (one of my fictional heroes), those books have a lot of bad behavior in them. Doc also has a self-sacrificing streak I don’t think I’d want to see in my robot…and what if the robot modeled itself after the relatively bloodthirsty Monk Mayfair? Monk “wins” as much as Doc does, although Doc is more respected by others. I’m guessing that’s part of how a robot would learn, by judging the reactions of other characters to determine what is a good thing to do.
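
That last guess can itself be sketched: tally how other characters react to each character’s actions, and model behavior on whoever draws the most approval. A hypothetical toy, with invented data…and with the obvious weakness that popularity isn’t the same thing as goodness.

    # Toy version of "judge by the reactions of other characters":
    # +1 for approval, -1 for disapproval; imitate the best-regarded one.
    # The data is invented for illustration.

    reactions = [
        ("Doc", "approval"), ("Doc", "approval"), ("Doc", "disapproval"),
        ("Monk", "approval"), ("Monk", "disapproval"), ("Monk", "disapproval"),
    ]

    scores = {}
    for character, reaction in reactions:
        scores[character] = scores.get(character, 0) + (1 if reaction == "approval" else -1)

    role_model = max(scores, key=scores.get)
    print(scores)      # {'Doc': 1, 'Monk': -1}
    print(role_model)  # Doc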

To Kill a Mockingbird (at AmazonSmile: benefit a non-profit of your choice by shopping*) is a possibility…assuming that the robot would look to Atticus Finch for guidance. Atticus isn’t perfect, but I’m not looking for perfect. Atticus also isn’t the main character, and it would be much trickier if the robot also read Go Set a Watchman (at AmazonSmile: benefit a non-profit of your choice by shopping*).

Hm…this is harder than I would have first thought.

Sherlock Holmes? Maybe if it chose Watson as the model, but not Holmes, certainly.

How about Lassie Come Home by Eric Knight? Maybe…

Clearly, I have to think about this more.

What do you think? What would you want a robot to read to learn morals? Is that the right way for a robot to learn what’s right and wrong and to mold its behavior? Are Asimov’s 3 Laws of Robotics good enough…even though they were imperfect in Asimov’s own works? Would you accept imperfect morality in a robot…that it might rarely make a bad choice, one that humans would see as more evil than good? Feel free to tell me and my readers what you think by commenting on this post.

Join thousands of readers and try the free ILMK magazine at Flipboard!

All aboard The Measured Circle’s Geek Time Trip at The History Project!

* When you shop at AmazonSmile, half a percent of your purchase price on eligible items goes to a non-profit you choose. It will feel just like shopping at Amazon: you’ll be using your same account. The one thing that is different for you is that you pick a non-profit the first time you go (which you can change whenever you want)…and the good feeling you’ll get. Shop ’til you help! 🙂


7 Responses to “What books should a robot read to learn morality?”

  1. Roger Knights Says:

    How about Everything I needed to know I learned in kindergarten? (I’m not sure if that is the exact title.)

    • Bufo Calvin Says:

      Thanks for writing, Roger!

      That’s close: the Robert Fulghum book is actually titled All I Really Need to Know I Learned in Kindergarten (I’m not linking because text-to-speech access is blocked). The title essay/poem might be good, but I don’t think it meets the “fiction” criterion. It’s really a philosophical essay, without a fictional character involved, from what I know. I don’t know the rest of the book well enough to judge the other pieces…

  2. Robert Poss Says:

    A Little Princess by Frances Hodgson Burnett

    Sara is a great example of moral character and human thoughtfulness.
    Every developing mind should meet her.

  3. Bufo Calvin Says:

    John Aga @jbaga01
    Replying to @bufocalvin
    I would recommend “A Wrinkle In Time”, “The Phantom Tollbooth” and the “Grinch Who Stole Christmas”.

  4. What do Wonder Woman, King Kong, and Star Wars have in common? Us! | The Measured Circle Says:

    […] What books should a robot read to learn morality? […]

  5. Lady Galaxy Says:

    It is theoretically possible to program every novel ever written in every language known to humankind into a robot along with every dictionary entry ever written for “morality”, but I don’t think a robot is capable of attaining morality. Morality is a uniquely human characteristic.

    • Bufo Calvin Says:

      Thanks for writing, Lady!

      That is a very complicated question, certainly. I’ve had animals that appeared to believe they had done things wrong (although it’s hard to tell). We have a dog now who is a very “good” dog. One interesting thing: the dog will pick up a sock (which is not okay) and walk around with it, crying. The dog will give it up without a problem if you ask for it…and will even just spit it out when it’s seen. That seems like a sense of guilt to us.

      I don’t personally see any barrier to an AI having “motivations” based on “good” and “evil”, but as I say, it’s complicated. Whether or not they actually develop what you would consider morality, I think they will have to appear to have it in order to work as closely with us as they likely will. Just like with the Turing test, I’m not sure that a simulated, indistinguishable morality is functionally different from “actual” morality.
