How to Solve the Trolley Problem

The Trolley Problem is a favorite conundrum of armchair self-driving car ethicists.

In the original version of the problem, imagine a trolley running down the rails, about to run over five people tied to the tracks. What if you could throw a switch that would send the trolley down a different track? But what if that track had one person tied down? Would you actually throw the switch to kill one person, even if it meant saving the other five people? Or would you let five people die through inaction?

The self-driving car version of this problem is simpler: what if a self-driving car has to choose between running over a pedestrian, or driving off a cliff and killing the passenger in the vehicle? Whose life is more valuable?

USA Today’s article, “Self-driving cars will decide who dies in a crash”, does a reasonable job tackling this issue in depth, from multiple angles. But the editors didn’t do the article any favors with the headline. It’s not actually self-driving cars that will decide who dies; it’s the humans who design them.

Here’s Sebastian Thrun, my boss and the former head of the Google Self-Driving Car Project, explaining why this isn’t a useful question:

I’ve heard another automotive executive call it “An impossible problem. You can’t make that decision, so how can you expect a car to solve it?”

To be honest, I think of it as an unhelpful problem because we don’t have enough data to know, at any given moment, with what degree of certainty the car would kill anybody. Fatal accidents involving self-driving cars haven’t yet happened in meaningful numbers, so the data needed to even work on the problem doesn’t exist.

But, I think I’ve come to a conclusion, at least about the hypothetical ethical dilemma:

The car should minimize the number of people who die, by following utilitarian ethics.

This raises some questions about how to value the lives of children versus adults, but I assume some government statistician in the bowels of the Department of Labor has worked that out.

So why should self-driving cars be utilitarian? Because people want them to be.

From USA Today:

Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, co-authored a study last year that found that while respondents generally agreed that a car should, in the case of an inevitable crash, kill the fewest number of people possible regardless of whether they were passengers or people outside of the car, they were less likely to buy any car “in which they and their family member would be sacrificed for the greater good.”

I’ve seen this in a few places now. The general public thinks cars should be designed to minimize fatalities, even if that means sacrificing the passengers. But they don’t want to ride in a car that would sacrifice passengers.

If you believe, as I do, and as Sebastian does, that these scenarios are vanishingly rare, then who cares? Give the public what they want. In the exceedingly unlikely scenario that a car has to make this choice, choose the option with the fewest fatalities.

And if people don’t want to ride in those cars themselves, they can choose not to. They can drive themselves, but of course that is pretty dangerous, too.

I’ll choose to ride in the self-driving cars.

Literature Review: Apple and Baidu and Deep Neural Networks for Point Clouds

Recently, Apple made what they must have known would be a big splash by silently publishing a research paper with results from a deep neural network that two of their researchers built.

The network and the paper in question were clearly designed for autonomous driving, which Apple has been working on, more or less in secret, for years.

The network in question — VoxelNet — has been trained to perform object detection on lidar point clouds. This isn’t a huge leap from object detection on images, which has been a topic of deep learning research for several years, but it is a new frontier in deep learning for autonomous vehicles. Kudos to Apple for publishing their results.

VoxelNet (by Apple) draws heavily on two previous efforts at applying deep learning to lidar point clouds, both by Baidu-affiliated researchers. Since the three papers kind of work as a trio, I did a quick scan of them together.

3D Fully Convolutional Network for Vehicle Detection in Point Cloud

Bo Li (Baidu)

Bo Li basically applies the DenseBox fully convolutional network (FCN) architecture to a three-dimensional point cloud.

To do this, Li:

  • Divides the point cloud into voxels. So instead of running 2D pixels through a network, we’re running 3D voxels (see the sketch after this list).
  • Trains an FCN to identify features in the voxel-ized point cloud.
  • Upsamples the FCN to produce two output tensors: an objectness tensor, and a bounding box tensor.
  • The bounding box tensor is probably more interesting for perception purposes. It draws a bounding box around cars on the road.
  • Q.E.D.
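To make the voxelization idea concrete, here’s a toy numpy sketch of dividing a point cloud into voxels. The function, the voxel size, and the random points are my own illustration, not code from Li’s paper.

    import numpy as np

    def voxelize(points, voxel_size=(0.2, 0.2, 0.4)):
        """Assign each lidar point (x, y, z) to a voxel index."""
        voxel_size = np.asarray(voxel_size)
        origin = points.min(axis=0)  # corner of the voxel grid
        indices = np.floor((points - origin) / voxel_size).astype(int)

        voxels = {}  # voxel index -> list of points in that voxel
        for point, idx in zip(points, indices):
            voxels.setdefault(tuple(idx), []).append(point)
        return voxels

    # 1,000 random points standing in for a lidar sweep
    points = np.random.uniform(low=[0, -20, -2], high=[40, 20, 2], size=(1000, 3))
    print(len(voxelize(points)), "occupied voxels")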

Multi-View 3D Object Detection Network for Autonomous Driving

Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, Tian Xia (Tsinghua and Baidu)

A team of Tsinghua and Baidu researchers developed Multi-View 3D (MV3D) networks, which combine lidar and camera images in a complex neural network pipeline.

In contrast to Li’s solo work, which constructs voxels out of the lidar point cloud, MV3D simply takes two separate 2D views of the point cloud: one from the front and one from the top (bird’s-eye). MV3D also uses the 2D camera image associated with each lidar scan.

That provides three separate 2D images (lidar front view, lidar top view, camera front view).
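To make the “top view” concrete, here’s a minimal numpy sketch that projects a lidar sweep onto a bird’s-eye occupancy grid. The real MV3D input encodes height, intensity, and density channels; the grid parameters here are made up for illustration.

    import numpy as np

    def birds_eye_view(points, resolution=0.1, x_range=(0, 40), y_range=(-20, 20)):
        """Project lidar points (x, y, z) onto a top-down 2D occupancy grid."""
        x, y = points[:, 0], points[:, 1]
        keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
        x, y = x[keep], y[keep]

        cols = ((x - x_range[0]) / resolution).astype(int)
        rows = ((y - y_range[0]) / resolution).astype(int)

        height = int((y_range[1] - y_range[0]) / resolution)
        width = int((x_range[1] - x_range[0]) / resolution)
        grid = np.zeros((height, width), dtype=np.uint8)
        grid[rows, cols] = 1  # mark cells that contain at least one point
        return grid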

MV3D uses each view to create a bounding box in two dimensions. The bird’s-eye lidar view creates a bounding box parallel to the ground, whereas the front lidar view and the camera view each create a 2D bounding box perpendicular to the ground. Combining these 2D bounding boxes creates a 3D bounding box to draw around the vehicle.

At the end of the network, MV3D employs something called “deep fusion” to combine output from each of the three neural network pipelines (one associated with each view). I’ll be honest — I don’t really understand how “deep fusion” works, so leave me a note in the comments if you can follow what they’re doing.

The results are a classification of the object and a bounding box around it.

VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection

Yin Zhou, Oncel Tuzel (Apple)

That brings us to VoxelNet, from Apple, which got so much press recently.

VoxelNet has three components, in order:

  • Feature Learning Network
  • Convolutional Middle Layers
  • Region Proposal Network

The Feature Learning Network seems to be the main “contribution to knowledge”, as the scholars say.

It seems that what this network does is start with a semi-random sample of points from within “interesting” (my word, not theirs) voxels. This sample of points gets run through a fully-connected (not fully-convolutional) network. This network learns point-wise features which are relevant to the voxel from which the points came.

The network, in fact, uses these point-wise features to develop voxel-wise features that describe each of the “interesting” voxels. I’m oversimplifying wildly, but think of this as learning features that describe each voxel and are relevant to classifying the part of the vehicle that is in that voxel. So a voxel might have features like “black”, “rubber”, and “treads”, and so you could guess that the voxel captures part of a tire. Of course, the real features won’t necessarily be intelligible by humans, but that’s the idea.
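Here’s a rough numpy sketch of that idea: a shared fully-connected layer computes point-wise features, and a max over the points aggregates them into a voxel-wise feature. The real feature learning network stacks several of these layers and concatenates point-wise and aggregated features; the shapes and weights below are purely illustrative.

    import numpy as np

    def voxel_feature(points_in_voxel, weights, bias):
        """Turn a voxel's sampled points into a single voxel-wise feature vector."""
        pointwise = np.maximum(points_in_voxel @ weights + bias, 0)  # point-wise features (ReLU)
        return pointwise.max(axis=0)                                 # aggregate with a max over points

    rng = np.random.default_rng(0)
    points = rng.normal(size=(35, 4))            # 35 sampled points: (x, y, z, reflectance)
    W, b = rng.normal(size=(4, 16)), np.zeros(16)
    print(voxel_feature(points, W, b).shape)     # (16,) -- one feature vector for the voxel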

These voxel-wise features can then get pumped through the Convolutional Middle Layers and finally through the Region Proposal Network and, voila, out come bounding boxes and classifications.


One of the most impressive parts of this line of research is just how new it is. The two Baidu papers were both first published online a year ago, and only made it into conferences in the last six months. The Apple paper only just appeared online in the last couple of weeks.

It’s an exciting time to be building deep neural networks for autonomous vehicles.

Photorealism of Microsoft AirSim

Over the last year, a number of companies (including Udacity) have released self-driving car simulators powered by gaming engines.

The latest entrant is Microsoft, which has updated their open-source AirSim flight program to also support self-driving cars.

AirSim looks awesome. The big advantages of building off of a gaming engine (AirSim uses Unreal Engine, whereas the Udacity simulator uses Unity) include fully baked APIs, powerful physics engines, and incredibly realistic design and graphics.

That last item is what will ultimately make or break AirSim, or any other simulation engine.

The holy grail of autonomous vehicle simulation is the ability to train machine learning models in the simulator, and then port them to the real world. Once a simulator breaks that barrier, we should see incredibly fast improvements in our ability to build autonomous driving systems, as it’s far faster to drive “simulated” miles than “real” miles.

As photorealistic as AirSim is, it doesn’t yet look to me like it’s realistic enough to reliably move models between AirSim’s simulated environment and the real world.

That said, I doubt it’s possible to determine model portability with much confidence simply by eyeballing YouTube videos of the simulator, which is all I’ve done so far.

I look forward to people trying out AirSim models in the real world and seeing how they do.

The “MiniFlow” Lesson

Exploring how to build a Self-Driving Car, step-by-step with Udacity!

Editor’s note: David Silver (Program Lead for Udacity’s Self-Driving Car Engineer Nanodegree program) continues his mission to write a new post for each of the 67 lessons currently in the program. We check in with him today as he introduces us to Lesson 5!

The 5th lesson of the Udacity Self-Driving Car Engineer Nanodegree Program is “MiniFlow.” Over the course of this lesson, students build their own neural network library, which we call MiniFlow.

The lesson starts with a fairly basic, feedforward neural network, with just a few layers. Students learn to build the connections between the artificial neurons and implement forward propagation to move calculations through the network.

A feedforward network.

The real mind-bend comes in the “Linear Transform” concept, where we go from working with individual neurons to working with layers of neurons. Working with layers allows us to dramatically accelerate the network’s calculations, because we can represent each layer with matrix operations and their associated optimizations. Sometimes this is called vectorization, and it’s a key reason deep learning has become so successful.
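To see what vectorization buys us, here’s a toy comparison (my own sketch, not MiniFlow’s code) between computing a layer neuron by neuron and computing it with a single matrix multiply:

    import numpy as np

    rng = np.random.default_rng(42)
    X = rng.normal(size=(64, 100))   # 64 samples, 100 inputs each
    W = rng.normal(size=(100, 32))   # weights for a layer of 32 neurons
    b = np.zeros(32)

    # Neuron by neuron: one dot product per (sample, neuron) pair
    slow = np.empty((64, 32))
    for i in range(64):
        for j in range(32):
            slow[i, j] = X[i] @ W[:, j] + b[j]

    # Vectorized: the whole layer in one matrix operation
    fast = X @ W + b

    assert np.allclose(slow, fast)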

Once students implement layers in MiniFlow, they learn about a particular activation function: the sigmoid function. Activation functions define the extent to which each neuron is “on” or “off”. Sophisticated activation functions, like the sigmoid function, don’t have to be all the way “on” or “off”. They can hold a value somewhere along the activation function, between 0 and 1.

The sigmoid function.
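In code, the sigmoid is a one-liner (a generic sketch, not MiniFlow’s exact implementation):

    import numpy as np

    def sigmoid(x):
        """Squash any real value into the range (0, 1)."""
        return 1.0 / (1.0 + np.exp(-x))

    print(sigmoid(np.array([-5.0, 0.0, 5.0])))  # ~[0.0067, 0.5, 0.9933]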

The next step is to train the network to better classify our data. For example, if we want the network to recognize handwriting, we need to adjust the weight associated with each neuron in order to achieve the correct classification. Students implement an optimization technique called gradient descent to determine how to adjust the weights of the network.

Gradient descent, or finding the lowest point on the curve.
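Stripped of the neural network machinery, gradient descent is just a loop that steps downhill along the negative gradient. A tiny illustrative example (not the lesson’s code):

    def gradient_descent(gradient, start, learning_rate=0.1, steps=100):
        """Repeatedly step downhill along the negative gradient."""
        x = start
        for _ in range(steps):
            x -= learning_rate * gradient(x)
        return x

    # Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3)
    print(gradient_descent(lambda x: 2 * (x - 3), start=10.0))  # close to 3.0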

Finally, students implement backpropagation to relay those weight adjustments backwards through the network, from finish to start. If we do this thousands of times, hopefully we’ll wind up with a trained, accurate network.
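Putting forward propagation, backpropagation, and gradient descent together for a single sigmoid neuron looks roughly like this. The data and learning rate are made up, and MiniFlow wraps the same math in node classes, but the flow (forward pass, backward pass, weight update, repeat) is the same:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))           # 200 samples, 3 features
    y = (X.sum(axis=1) > 0).astype(float)   # a simple target to learn
    w, b = np.zeros(3), 0.0

    for _ in range(1000):
        pred = sigmoid(X @ w + b)           # forward propagation
        error = pred - y                    # gradient of the loss at the output
        grad_w = X.T @ error / len(y)       # backpropagate to the weights
        grad_b = error.mean()
        w -= 0.5 * grad_w                   # gradient descent update
        b -= 0.5 * grad_b

    print(((pred > 0.5) == y).mean())       # accuracy should approach 1.0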

And once students have finished this lesson, they have their own Python library they can use to build as many neural networks as they want!

If all of that sounds interesting to you, maybe you should apply to join the Udacity Self-Driving Car Engineer Nanodegree Program and learn to become a Self-Driving Car Engineer!

Roundup of Autonomous Vehicle News

I was on vacation last week and it was delightful. But despite valiant struggles, I was not able to fully stay on top of the latest news in the autonomous vehicle world.

Here’s what I missed:

Everything we learned from the Tesla Semi and Roadster event

Zac Estrada
The Verge

The Tesla Semi drew excitement from the crowd at the Hawthorne, California facility, as people eagerly waited for Musk to emerge from the big truck. But the surprise showing of the second-generation Tesla Roadster caused explosive cheers from the second its headlights switched on.


GM Challenges Tesla With Promise of Profitable Electric Cars

Paul Lienert
Reuters

Barra said GM aims to be selling 1 million electric vehicles a year by 2026, many of them in China, which has set strict production quotas on such vehicles. On Monday, GM’s China chief said the automaker and its joint-venture partners will be able to meet the country’s 2019 electric vehicle requirements without purchasing credits from other companies.


Mercedes-Benz opens tech hub in Tel Aviv to secure lead in connected cars

Shoshanna Solomon
The Times of Israel

The Mercedes-Benz team in Israel will both develop in-house technologies and scout the ecosystem for products that could be integrated into their pipeline, either through acquisitions, long-term co-operations with startups, or investments.


Jaguar Land Rover self-driving cars hit real roads for first time

Andrew Krok
CNET

Jaguar Land Rover announced Friday that it will test its self-driving vehicles on public roads in the United Kingdom. Its vehicles will amble around Coventry as its engineers assess the systems and prepare this technology for an eventual public debut — which is still years away, it should be noted.

Training Self-Driving Car Engineers in India

Udacity and Infosys partner to teach autonomous technology!

Udacity and Infosys just announced a partnership to train hundreds of Infosys’ top software engineers in autonomous vehicle development.

Quoting Infosys President Ravi Kumar:

Udacity and Infosys are uniting the elements of education and transformative technology in this one-of-a-kind program. Trainees, with the first 100 selected through a global hackathon in late November, will immerse themselves in autonomous technology courses that require hands-on training to simulate real-life scenarios. By the end of 2018, Infosys will have trained 500 employees on the spectrum of technologies that go into building self-driving vehicles, and in doing so will help to evolve the future of transportation for drivers, commuters and even mass transit systems.

And Udacity CEO Vishal Makhijani:

This program will be part of Udacity Connect, which is Udacity’s in-person, blended learning program. Infosys engineers from around the world will participate in Udacity’s online Self-Driving Car Engineer Nanodegree program, and combine one term of online studies with two terms of being physically located together at the Infosys Mysore training facility, where the program will be facilitated by an in-person Udacity session lead.

Two aspects of this partnership are particularly exciting for me. One is simply working with a top technology company like Infosys. When we started building the Nanodegree program, our objective was to “become the industry standard for training self-driving car engineers.” This partnership moves us significantly closer to that objective. We are grateful and excited for the opportunity, and thrilled for the participating engineers.

The other exciting aspect of this partnership is that it will happen in India. The Infosys engineers will fly in from all over the world, but there is something special about conducting the program in Mysore.

For many years autonomous vehicle development has happened in just a few places: Detroit, Pittsburgh, southern Germany. Recently, we’ve seen autonomous vehicle development expand to Silicon Valley, Japan, Israel, various parts of Europe, Singapore, and beyond. Training autonomous vehicle engineers in India expands the opportunities for students worldwide.

7% of students in the Udacity Self-Driving Car Engineer Nanodegree program are from India. The Infosys partnership is an important next step in building a robust pipeline of job opportunities for our students on the subcontinent.

Dominik Nuss at Mercedes-Benz

One of the world-class experts in our Self-Driving Car Engineer Nanodegree program!

Me and Andrei Vatavu and Dominik Nuss

One of the delights of teaching at Udacity is the opportunity to work with world-class experts who are excited about sharing their knowledge with our students.

We have the great fortune of working with Mercedes-Benz Research and Development North America (MBRDNA) to build the Self-Driving Car Engineer Nanodegree Program. In particular, we get to work with Dominik Nuss, principal engineer on their sensor fusion team.

In these two videos, Dominik explains how unscented Kalman filters fuse together data from multiple sensors across time:

These are just a small part of a much larger unscented Kalman filter lesson that Dominik teaches. This is an advanced, complex topic I haven’t seen covered nearly as well anywhere else.
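Dominik’s lesson covers the full filter, but as a taste of the math, here’s a minimal sketch of one ingredient: generating the sigma points that an unscented Kalman filter pushes through a nonlinear motion or measurement model. The state, covariance, and lambda value below are illustrative, not from the lesson.

    import numpy as np

    def sigma_points(x, P, lam=None):
        """Generate 2n+1 sigma points for a state with mean x and covariance P."""
        n = len(x)
        if lam is None:
            lam = 3 - n                        # a common design choice
        S = np.linalg.cholesky((n + lam) * P)  # matrix "square root" of the scaled covariance
        points = [x]
        for i in range(n):
            points.append(x + S[:, i])
            points.append(x - S[:, i])
        return np.array(points)

    x = np.array([0.0, 1.0])                   # e.g., position and velocity
    P = np.array([[0.5, 0.1],
                  [0.1, 0.3]])
    print(sigma_points(x, P))                  # 5 sigma points for this 2D state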

MBRDNA has just published a terrific profile of Dominik, along with a nifty video of him operating one of the Mercedes-Benz autonomous vehicles.

Read the whole thing and learn what it’s like to work on one of the top teams in the industry. Then, enroll in our program (if you haven’t already!), and start building your OWN future in this amazing field!

A Sunny Laboratory of Democracy

There’s a reason every company is testing self-driving cars in Arizona. It’s sunny. It’s warm. It’s flat (at least around Phoenix).

And according to The New York Times, Arizona Governor Doug Ducey is excited about making Arizona a leader in autonomous vehicle testing.

While Uber and Waymo were working through regulatory barriers testing in California, Ducey recruited them to Arizona with an “open for business” attitude.

“We responded by saying we weren’t going to hassle them,” Mr. Ducey said of Uber. “I’d be remiss if I didn’t thank my partner in growing the Arizona economy, Jerry Brown”, the Democratic governor of California.

The article closes with several anecdotes of human drivers crashing into self-driving cars, because that’s what human drivers do, and seizes on those anecdotes to suggest Arizona isn’t ready for self-driving cars.

I’m not sold.

Louis Brandeis once postulated that the beauty of American federalism is that each state is its own little laboratory of democracy, experimenting on its own, without risk to the rest of the country.

God bless Arizona for that.

Waymo Goes Driverless

With self-driving cars already being tested in cities across the United States and in several parts of the world, there have been three big questions about how quickly self-driving cars would expand:

  1. How quickly will the geofences around the (usually urban) test areas expand?
  2. When will companies open their services to the general public?
  3. How soon will companies pull the test driver from the vehicle?

Waymo just went ahead and answered #3. In a blog post and accompanying video (above), Waymo announced that they have pulled the driver out of the seat on a subset of their test vehicles in the Phoenix, Arizona, metro area.

This looks like the latest step in a campaign by Waymo to both move their self-driving efforts forward and reassure the public that everything will be okay. And it looks like everything will be okay.

To that end, Waymo has invited reporters to their previously top-secret Castle test facility, and published a 43-page online safety brochure, alongside a slew of Medium posts.

A few thoughts of my own to accompany the Waymo announcement:

  1. This is awesome, and it has the potential to be huge if Waymo continues to roll this out to the rest of their test fleet in a timely manner.
  2. Waymo doesn’t say it, but I have to believe that, for now, they have test engineers near the driverless vehicles. They might be in trailing vehicles or at some sort of central command point to which the driverless vehicles are geofenced. I wouldn’t want an accident to happen (even an accident that’s not Waymo’s fault) and have civilian passengers be the first ones to talk with police and the press.
  3. As I understand it, these rides are carrying civilian, non-Waymo employees, but they’re also pre-screened for the program. The next step for Waymo will be what Uber has already done in Pittsburgh: open the program up to anybody who downloads the app.

It’s an exciting time for self-driving cars 😀

The “Introduction to Neural Networks” Lesson

Editor’s note: On November 1st of this year, David Silver (Program Lead for Udacity’s Self-Driving Car Engineer Nanodegree program) made a pledge to write a new post for each of the 67 lessons currently in the program. We check in with him today as he introduces us to Lesson 4!

The 4th lesson of the Udacity Self-Driving Car Engineer Nanodegree Program introduces students to neural networks, a powerful machine learning tool.

This is a fast lesson that covers the basic mechanics of machine learning and how neural networks operate. We save a lot of the details for later lessons.

My colleague Luis Serrano starts with a quick overview of how regression and gradient descent work. These are foundational machine learning concepts that almost any machine learning tool builds from.

Luis is great at this stuff. I love Mt. Errorest.

Moving on from these lessons, Luis goes deeper into the distinction between linear and logistic regression and then explores how these concepts can reveal the principles behind a basic neural network.
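A quick way to see the distinction: a linear model outputs any real number (good for predicting a quantity), while logistic regression squashes that same output through a sigmoid so it can be read as a probability. A toy sketch with hand-picked numbers:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    w, b = 2.0, -1.0
    x = np.array([-2.0, 0.0, 2.0])

    linear = w * x + b              # linear regression: any real value
    logistic = sigmoid(w * x + b)   # logistic regression: a value in (0, 1)

    print(linear)    # [-5. -1.  3.]
    print(logistic)  # ~[0.007, 0.269, 0.953]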

See the slash between the red and green colors there? If you ever meet Luis in person, ask him to sing you the forward-slash-backward-slash alphabet song. It’s amazing.

From here we introduce perceptrons, which historically were the precursor to the “artificial neurons” that make up a neural network.
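A perceptron is simple enough to fit in a few lines: a weighted sum followed by a hard threshold. Here’s a toy example (weights picked by hand) that behaves like a logical AND gate:

    def perceptron(inputs, weights, bias):
        """Classic perceptron: weighted sum followed by a hard step."""
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if total > 0 else 0

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, perceptron([a, b], weights=[1, 1], bias=-1.5))  # fires only for (1, 1)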

As we string together lots of these perceptrons, or “artificial neurons”, my colleague Mat Leonard shows that we can take advantage of a process called backpropagation, which helps train the network to perform a task.

And that’s basically what a neural network is: a machine learning tool built from layers of artificial neurons, which takes an input and produces an output, trained via backpropagation.

This lesson has 23 concepts (pages), so there’s a lot more to it than the 3 videos I posted here. If some of this looks confusing, don’t worry! There’s a lot more detail in the lesson, as well as lots of quizzes to help make sure you get it.

If you find neural networks interesting in their own right, perhaps you should sign up for Udacity’s Deep Learning Nanodegree Foundation Program. And if you find them interesting for how they can help us build a self-driving car, then of course you should apply to join the Udacity Self-Driving Car Nanodegree Program!