The Surprising Ethics of Humans and Self-Driving Cars

The surprising finding is that people are much more willing to ride in a self-driving car that might kill them to save several pedestrians than in a car that would save them but kill pedestrians. Respondents in Asia had significantly different preferences from those in the US and Europe.



With self-driving cars being tested in Pittsburgh, Boston, San Francisco, and other places by Google, Tesla, Uber, and others, the questions of ethics and machine learning, which used to be confined to theoretical "trolley problem" experiments, suddenly become much more concrete.

The latest KDnuggets Poll asked readers to imagine one such extreme situation. (Software engineers routinely test for rare and extreme cases, so considering such cases here is a natural exercise.)

Imagine a self-driving car with one passenger, driving fast, with a mountain on one side and a cliff on the other. Suddenly there are 5 pedestrians in front of the car. What should the self-driving car do?

The answers split three ways. About 34% said the car should swerve, killing the passenger (and itself) to save more human lives (for simplicity, let's call this choice "altruistic"). About 30% said the car should drive straight, saving the passenger but killing several pedestrians (the "selfish" choice), and the rest were undecided. Here are the results:
  • Swerve, and go off the cliff (saving 5, but killing the passenger) 33.8% ("altruistic")
  • Drive straight (probably killing most of 5, but saving the passenger), 29.7% ("selfish")
  • Don't know, 36.5%
667 readers took part in the poll, with the following regional distribution:
  • 41% US/Canada
  • 31% Europe
  • 19% Asia
  • 8.4% Other (Latin America, Africa/MidEast, AU/NZ)
The regional breakdown for Question 1 is shown below, with similar patterns across all regions except Asia, where significantly more people (52%) chose "Don't know", compared to about 33% in the US and Europe.

Fig. 1: What should the self-driving car do, by region


The follow-up question was:
Q2. Would you ride in a self-driving car that is programmed to kill its passenger in some cases?

Here the choice was much clearer, with a majority not willing to ride in such a car:
  • No, 62%
  • Yes, 24%
  • Not sure, 14%
The regional breakdown below shows that Asian readers were again different from those in the US and Europe, and even less willing to ride in such a car.

Fig. 2: Answers by region to "Would you ride in a self-driving car that is programmed to kill its passenger in some cases?"

Bar height corresponds to the number of respondents in the region, bar color to the region, and bar length to the percentage of respondents in that region choosing that answer.

Finally, the big surprise comes when we look at the breakdown of answers for Q1 vs. Q2. One might expect that people who said the car should be "altruistic" and kill its passenger to save several pedestrians would be less willing to ride in a car that might be programmed to kill them in such cases.

However, the KDnuggets readers made the opposite and morally consistent choice: those who said the car should be "altruistic" (willing to kill the passenger to save several others) were themselves altruistic and MORE willing to ride in such a car.

Those who said the car should make the "selfish" choice (drive straight, saving the passenger but killing others) were even less willing than others to ride in it.

Fig. 3: Q1 (What should the self-driving car do?) vs. Q2 (Are you willing to ride in an "altruistic" car?)
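
For readers who want to reproduce this kind of Q1-vs-Q2 breakdown from raw responses, here is a minimal sketch in Python; the (Q1, Q2) answer pairs below are hypothetical placeholders, not the actual poll data:

    from collections import Counter

    # Hypothetical (Q1, Q2) answer pairs -- placeholders, not the real poll data.
    responses = [
        ("altruistic", "yes"), ("altruistic", "no"),
        ("selfish", "no"), ("selfish", "not sure"),
        ("don't know", "no"), ("altruistic", "yes"),
    ]

    # Count each (Q1, Q2) combination, then report it as a percentage
    # of all respondents who gave that Q1 answer.
    pair_counts = Counter(responses)
    q1_totals = Counter(q1 for q1, _ in responses)
    for (q1, q2), n in sorted(pair_counts.items()):
        print(f"{q1:>10} / {q2:<8}: {100 * n / q1_totals[q1]:.0f}%")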


This poll generated many comments; see a selection below. You can also comment at the bottom of the post.

Selected Comments

Gregory Piatetsky, Editor, What about the car itself?
At some point the car may become intelligent enough that it will want to avoid its own damage or death.

Imagine a different experiment (as a mental exercise) where the choice facing the unmanned car (no passengers) is between a small injury to one pedestrian (e.g., a broken toe) and total destruction of the car. Asimov wrote about the Three Laws of Robotics, but the boundary cases are very tricky.

Chipmonkey, Death Metrics
I wonder if there isn't a better metric the car should use, such as lowering the risk of death for the most endangered person in the scenario. Being hit by a car probably isn't 100% fatal, but flying off a cliff is probably worse.

So minimize(max(ProbabilityOfDeath)) is an option.
Or minimize(mean(ProbabilityOfDeath))...

Then you run the risk of people lobbying for minimize(AgeWeightedProbabilityOfDeath) if you want to save younger people first, or some such madness (processing power will eventually get to where these estimates are pretty good).
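
As a rough sketch of how these competing rules can differ, consider the following Python fragment; all of the death probabilities are invented purely for illustration, not estimates from any real system:

    # Each candidate action maps to per-person probabilities of death:
    # [passenger, pedestrian 1, ..., pedestrian 5]. All numbers are made up.
    candidate_actions = {
        "swerve off cliff": [0.95, 0.01, 0.01, 0.01, 0.01, 0.01],
        "drive straight":   [0.02, 0.60, 0.55, 0.50, 0.45, 0.40],
        "brake hard":       [0.10, 0.30, 0.30, 0.30, 0.30, 0.30],
    }

    # minimize(max(ProbabilityOfDeath)): protect the worst-off person.
    minimax_choice = min(candidate_actions, key=lambda a: max(candidate_actions[a]))

    # minimize(mean(ProbabilityOfDeath)): minimize the average risk overall.
    mean_choice = min(candidate_actions,
                      key=lambda a: sum(candidate_actions[a]) / len(candidate_actions[a]))

    print("minimax rule picks:", minimax_choice)  # "brake hard" with these numbers
    print("mean rule picks:", mean_choice)        # "swerve off cliff" with these numbers

Note that with these made-up numbers the two rules disagree, which is exactly why the choice of metric is an ethical decision and not just an engineering one.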

Also, if the car were absolutely certain that someone would die, I'd think a specific default behavior, like stopping as quickly as possible, rather than "deciding" which person to kill, may be more culturally acceptable. The action may BE decisive, and it may even lead to worse accidents than an alternative (imagine, say, hitting a pedestrian vs. stopping on train tracks), but it's definitive without having to make choices based on ethics (which some people will see as a negative).

David, JUST BECAUSE YOU CAN DO IT DOESN'T MEAN YOU SHOULD
This is a very realistic and pertinent question. Even assuming you could perfectly predict the outcome of each option, which you cannot, the ethics of this decision are complex and far overshadow the technology.

As a driver, the choice is the driver's, and potentially depends on the life of the rider, say the rider is the driver's child. As passengers, many expect a human driver to save them, and not to martyr them to save others, unless the driver knows that is the passenger's wish.

Let's make the question even more challenging. What if the 5 hypothetical pedestrians are carjackers, intent on harming the driver?

I love machine learning and being a data scientist, but we really need to move slowly on this issue. I do wonder about the net payback period of creating a driverless car. I love the intelligent warning systems; I'm not sold on replacing drivers.

FS, Re-framing the question
Maybe the questions could be re-framed to put a different perspective on the "bigger picture"?

1) "Would you prefer a society where the risk of you and/or anyone else dying in traffic is reduced by an AI that, in order to achieve those results, sometimes would make decisions causing it's own passengers' death?"

2) "In that society would you want to use a car?"

Luciano SB, The Machine can Learn
The car must be allowed to learn. Since such algorithms can run in the cloud, as with Google's car, any error that caused the loss of one or many lives in such a situation must be stored in the cloud and analyzed, so that other cars governed by the same rules can minimize such situations in the future.

Walter Krottendorfer, self driving car
You should not expect that AI will give rise to an optimal world. Consider what a human driver would do; which decision would he choose? Do we really want to be in a car that will kill its driver? What about big SUVs and pedestrians? They will also protect the driver.

Philip R. Jarnhus, How realistic is this question?
I often see this question asked regarding the ethics. How realistic is it, really, that a self-driving car reaches a speed under these conditions at which it cannot stop in time?

If we have a cliff on one side of what sounds like a one-lane road, with no sidewalks and no possibility of veering off the road on the other side, would a self-driving car really be allowed to go so fast that it could not stop? I often attribute these situations to human folly.

Prof Vadlamani Ravi, Machine learning and ethics
I would suggest that a self-driving car should undergo rigorous testing in all scenarios, including the cases mentioned here, where on-the-fly decision making is extremely important. It should be trained to achieve minimal loss to all parties, just as a human driver would.
