How to Solve the Trolley Problem

The Trolley Problem is a favorite conundrum of armchair self-driving car ethicists.

In the original version of the problem, imagine a trolley is running down the rails, about to run over three people tied to the tracks. What if you could throw a switch that would send the trolley down a different track? But what if that track had one person tied to it? Would you actually throw the switch, killing one person in order to save the other three? Or would you let three people die through inaction?

The self-driving car version of this problem is simpler: what if a self-driving car has to choose between running over a pedestrian, or driving off a cliff and killing the passenger in the vehicle? Whose life is more valuable?

USA Today’s article, “Self-driving cars will decide who dies in a crash,” does a reasonable job tackling this issue in depth, from multiple angles. But the editors didn’t do the article any favors with the headline. It’s not actually self-driving cars that will decide who dies, it’s the humans who design them.

Here’s Sebastian Thrun, my boss and the former head of the Google Self-Driving Car Project, explaining why this isn’t a useful question:

I’ve heard another automotive executive call it “An impossible problem. You can’t make that decision, so how can you expect a car to solve it?”

To be honest, I think of it as an unhelpful problem because we don’t have enough data to know, at any given point, with what degree of certainty the car is going to kill anybody. Fatal accidents involving self-driving cars haven’t yet happened in any meaningful numbers, so the data needed to even work on the problem doesn’t exist.

But, I think I’ve come to a conclusion, at least about the hypothetical ethical dilemma:

The car should minimize the number of people who die; in other words, it should follow utilitarian ethics.

This raises some questions about how to value the lives of children versus adults, but I assume some government statistician in the bowels of the Department of Labor has worked that out.
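To make that rule concrete, here is a minimal sketch of what “minimize expected fatalities” could look like as a decision rule. Everything in it — the Maneuver class, the choose_maneuver function, the probabilities — is hypothetical illustration, not how any actual self-driving stack works.

```python
# A minimal sketch of the utilitarian rule described above, not any real
# self-driving system. All names and numbers here are hypothetical.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    # Estimated probability of death for each person affected by this
    # maneuver (passengers and pedestrians alike), however a planner
    # might come up with such estimates in practice.
    fatality_probabilities: list[float]


def expected_fatalities(maneuver: Maneuver) -> float:
    """Sum of per-person death probabilities = expected number of deaths."""
    return sum(maneuver.fatality_probabilities)


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the option with the lowest expected number of fatalities."""
    return min(options, key=expected_fatalities)


# Hypothetical example: staying on course risks three pedestrians,
# swerving off the road risks the one passenger.
options = [
    Maneuver("stay on course", [0.9, 0.9, 0.9]),  # three pedestrians
    Maneuver("swerve off the road", [0.8]),       # one passenger
]
print(choose_maneuver(options).name)  # -> "swerve off the road"
```

Valuing lives differently — the children-versus-adults question above — would just mean multiplying each probability by a per-person weight before summing.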

So why should self-driving cars be utilitarian? Because people want them to be.

From USA Today:

Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, co-authored a study last year that found that while respondents generally agreed that a car should, in the case of an inevitable crash, kill the fewest number of people possible regardless of whether they were passengers or people outside of the car, they were less likely to buy any car “in which they and their family member would be sacrificed for the greater good.”

I’ve seen this in a few places now. The general public thinks cars should be designed to minimize fatalities, even if that means sacrificing the passengers. But they don’t want to ride in a car that would sacrifice passengers.

If you believe, as I do, and as Sebastian does, that these scenarios are vanishingly rare, then who cares? Give the public what they want. In the exceedingly unlikely event that a car has to make this choice, have it choose the option with the fewest fatalities.

And if people don’t want to ride in those cars themselves, they can choose not to. They can drive themselves instead, although of course that is pretty dangerous, too.

I’ll choose to ride in the self-driving cars.
