Tesla Autopilot Lowers Insurance Premiums

A British automotive insurer has offered to reduce insurance premiums 5% for drivers who turn on Autopilot. The insurer, Direct Line, says it doesn’t yet actually know with certainty whether Autopilot makes cars safer.

Direct Line said it was too early to say whether the use of the autopilot system produced a safety record that justified lower premiums. It said it was charging less to encourage use of the system and aid research.

But I have to imagine Direct Line believes Autopilot will make cars safer, even if it doesn’t know that for sure. After all, they’re not offering 5% off to customers who drive blindfolded, on the theory that they need more research on that topic.

Although Direct Line is a UK company, the financial angle of autonomous systems ties in closely with tactics that the US government has used in the past. Famously, the federal government did not directly mandate a drinking age of 21, but rather tied federal highway funds to whether states raised their drinking age to 21.

I can imagine a future scenario in which the government doesn’t mandate the use of autonomous vehicles, but rather a combination of governmental and insurance incentives push drivers gently or not-so-gently toward taking their hands off the wheel.

Say Hello in Detroit

Next month I’ll be checking off a common bucket list item by visiting Michigan in January. Most people go for the weather, but I in fact am going for the North American International Auto Show.

I tease, of course, but I truly am excited to be heading back to the Motor City, and especially for America’s largest auto show.

On Wednesday, January 17, I’ll be speaking on a panel at Automobili-D, the tech section of the show, and I’ll be in town with some Udacity colleagues through the weekend.

Drop me a note at david.silver@udacity.com and I’d love to say hello. It’s always amazing to head to the center of the automotive world. In many ways it reminds me of how cool it was to visit Silicon Valley when I was a software engineer in Virginia, living outside the center of the software world.

We’ll be holding at least one and maybe a few events for Udacity students, potential students, and partners, and I’ll be announcing those here as we nail them down.

See you in Detroit!

Is Boston the Next Pittsburgh?

That’s gotta be a rough headline for Patriots fans 😛

For years, autonomous vehicle development in the US has happened primarily in three locations: Detroit, Silicon Valley, and Pittsburgh.

Detroit because it’s the center of the US automotive industry, Silicon Valley because it’s the center of the US technology industry, and Pittsburgh because…why?

Basically because Pittsburgh is home to the vaunted Carnegie Mellon University Robotics Institute, which counts among its alumni such robotics luminaries as Red Whittaker, Sebastian Thrun, and Chris Urmson. Researchers from the Robotics Institute were famously lured away en masse by Uber, but the academic center appears to have recovered, and the net result has been to make Pittsburgh the home of not only Uber ATG, but also other autonomous vehicle companies like Argo AI and Aptiv.

Here’s a quick readout of the job counts for “autonomous vehicle” on Indeed.com right now:

Mountain View (Silicon Valley): 446
Detroit: 226
Pittsburgh: 86
Boston: 86

So what’s up with Boston?

Partly nuTonomy, which Aptiv (formerly Delphi) purchased for a rumored $450 million. And of course MIT and their own vaunted Computer Science and Artificial Intelligence Laboratory (CSAIL).

But further inspection shows Boston potentially has a more robust autonomous vehicle industry than Pittsburgh. Indeed.com shows essentially all Pittsburgh’s autonomous vehicle jobs coming from three companies: Aptiv, Argo, and Uber.

On the other hand, Boston’s autonomous vehicle jobs come from: Square Robot, Liberty Mutual, nuTonomy, Draper, MathWorks, Aurora, Optimus Ride, Lux Research, and the list goes on. That’s a diversified and presumably robust jobs base. Plus, Aptiv just announced a new Boston-based autonomous technology center.

Keep an eye on Beantown.

How Self-Driving Cars Work

Earlier this fall I spoke about how self-driving cars work at TEDxWilmington’s Transportation Salon, which was a lot of fun.

The frame for my talk was a collection of projects students have done as part of the Udacity Self-Driving Car Engineer Nanodegree Program.

So, how do self-driving cars work?

Glad you asked!

Self-driving cars have five core components:

  1. Computer Vision
  2. Sensor Fusion
  3. Localization
  4. Path Planning
  5. Control

Computer vision is how we use cameras to see the road. Humans demonstrate the power of vision by handling a car with basically just two eyes and a brain. For a self-driving car, we can use camera images to find lane lines, or track other vehicles on the road.
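
To make that concrete, here’s a minimal lane-finding sketch in Python with OpenCV. It’s an illustration rather than the exact pipeline from the Nanodegree, and the thresholds and region coordinates are placeholder values: convert to grayscale, find edges, keep only a region in front of the car, and fit line segments.

```python
import cv2
import numpy as np

def detect_lane_lines(image_bgr):
    """Rough lane-line detection: edges -> region-of-interest mask -> Hough lines."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)  # thresholds are illustrative

    # Keep only a trapezoidal region roughly in front of the car.
    h, w = edges.shape
    roi = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w, h),
                         (w // 2 + 50, h // 2 + 50),
                         (w // 2 - 50, h // 2 + 50)]], dtype=np.int32)
    cv2.fillPoly(roi, polygon, 255)
    masked = cv2.bitwise_and(edges, roi)

    # Fit line segments to the remaining edge pixels.
    lines = cv2.HoughLinesP(masked, 2, np.pi / 180, 50,
                            minLineLength=40, maxLineGap=100)
    return lines  # each entry is (x1, y1, x2, y2)
```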

Sensor fusion is how we integrate data from other sensors, like radar and lasers, together with camera data, to build a comprehensive understanding of the vehicle’s environment. As good as cameras are, there are certain measurements, like distance and velocity, at which other sensors excel, and other sensors can work better in adverse weather, too. By combining all of our sensor data, we get a richer understanding of the world.
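
Here’s a toy version of the idea, not any production algorithm: if a camera and a radar each give a noisy estimate of the distance to the car ahead, we can fuse them by weighting each estimate by how much we trust it (the inverse of its variance), which is the intuition behind a Kalman filter measurement update. The numbers are made up.

```python
def fuse_measurements(z_camera, var_camera, z_radar, var_radar):
    """Fuse two noisy estimates of the same quantity (e.g., distance to the
    car ahead) by weighting each by the inverse of its variance."""
    w_cam = 1.0 / var_camera
    w_rad = 1.0 / var_radar
    fused = (w_cam * z_camera + w_rad * z_radar) / (w_cam + w_rad)
    fused_var = 1.0 / (w_cam + w_rad)  # the fused estimate is more certain than either input
    return fused, fused_var

# Camera is rough on distance; radar is precise.
print(fuse_measurements(z_camera=22.0, var_camera=4.0, z_radar=20.5, var_radar=0.25))
```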

Localization is how we figure out where we are in the world, which is the next step after we understand what the world looks like. We all have cellphones with GPS, so it might seem like we know where we are all the time already. But in fact, GPS is only accurate to within about 1–2 meters. Think about how big 1–2 meters is! If a car were wrong by 1–2 meters, it could be up on the sidewalk hitting things. So we have much more sophisticated mathematical algorithms that help the vehicle localize itself to within 1–2 centimeters.
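
To give a flavor of how that refinement can work, here’s a toy one-dimensional particle filter update in Python. The landmark position and noise values are invented, but the idea is real: start from a coarse, GPS-level guess, then repeatedly weight and resample candidate positions by how well they explain a precise range measurement to a known landmark on the map.

```python
import numpy as np

def localize_1d(particles, measured_dist, landmark_pos, sensor_std):
    """One measurement update of a toy 1-D particle filter: weight each candidate
    position by how well it explains the measured distance to a known landmark,
    then resample in proportion to those weights."""
    predicted = np.abs(landmark_pos - particles)
    weights = np.exp(-0.5 * ((predicted - measured_dist) / sensor_std) ** 2)
    weights /= weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Start with a coarse GPS-level guess (meters of spread), refine with a lidar range.
particles = np.random.normal(loc=100.0, scale=2.0, size=1000)
for _ in range(10):
    particles = localize_1d(particles, measured_dist=25.03,
                            landmark_pos=125.0, sensor_std=0.05)
print(particles.std())  # the spread shrinks toward the centimeter level
```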

Path planning is the next step, once we know what the world looks like, and where in it we are. In the path planning phase, we chart a trajectory through the world to get where we want to go. First, we predict what the other vehicles around us will do. Then we decide which maneuver we want to take in response to those vehicles. Finally, we build a trajectory, or path, to execute that maneuver safely and comfortably.
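
One common way to build that final trajectory, and an easy one to show in a few lines, is a jerk-minimizing quintic polynomial: pick start and end states for position, velocity, and acceleration, then solve for the polynomial that connects them smoothly. A sketch with made-up numbers:

```python
import numpy as np

def jerk_minimizing_trajectory(start, end, T):
    """Coefficients of a quintic polynomial s(t) connecting start and end states
    (position, velocity, acceleration) over T seconds. Quintics are a common
    choice because minimizing jerk keeps the ride comfortable."""
    s0, v0, a0 = start
    sT, vT, aT = end
    A = np.array([[T**3,    T**4,     T**5],
                  [3*T**2,  4*T**3,   5*T**4],
                  [6*T,     12*T**2,  20*T**3]])
    b = np.array([sT - (s0 + v0*T + 0.5*a0*T**2),
                  vT - (v0 + a0*T),
                  aT - a0])
    c3, c4, c5 = np.linalg.solve(A, b)
    return [s0, v0, 0.5*a0, c3, c4, c5]  # lowest-order coefficient first

# Example: speed up from 10 m/s to 20 m/s over 5 seconds, ending 80 m ahead.
coeffs = jerk_minimizing_trajectory(start=(0, 10, 0), end=(80, 20, 0), T=5.0)
positions = np.polyval(coeffs[::-1], np.linspace(0, 5, 50))  # sample points along the path
```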

Control is the final step in the pipeline. Once we have the trajectory from our path planning block, the vehicle needs to turn the steering wheel and hit the throttle or the brake, in order to follow that trajectory. If you’ve ever tried to execute a hard turn at a high speed, you know this can get tricky! Sometimes you have an idea of the path you want the car to follow, but actually getting the car to follow that path requires effort. Race car drivers are phenomenal at this, and computers are getting pretty good at it, too!
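
A classic way to do this, and a reasonable mental model even though real vehicles use fancier controllers, is a PID controller on the cross-track error: steer back toward the planned path in proportion to how far off it you are, how long you’ve been off it, and how fast the error is changing. A minimal sketch with illustrative gains:

```python
class PIDController:
    """Minimal PID steering controller driven by cross-track error (how far the
    car is from the planned path)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def steer(self, cross_track_error, dt):
        self.integral += cross_track_error * dt
        if self.prev_error is None:
            derivative = 0.0
        else:
            derivative = (cross_track_error - self.prev_error) / dt
        self.prev_error = cross_track_error
        # Negative sign: steer back toward the path, not further away from it.
        return -(self.kp * cross_track_error +
                 self.ki * self.integral +
                 self.kd * derivative)

# Gains are illustrative; in practice they get tuned for the specific vehicle.
controller = PIDController(kp=0.2, ki=0.001, kd=3.0)
steering_angle = controller.steer(cross_track_error=0.5, dt=0.02)
```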

The video at the beginning of this post covers similar territory, and I hope that between it and what I’ve written here, you have a better sense of how self-driving cars work.

Ready to start learning how to do it yourself? Apply for our Self-Driving Car Engineer Nanodegree program, or enroll in our Intro to Self-Driving Cars Nanodegree program, depending on your experience level, and let’s get started!

Lyft Off in Boston

It used to be there was only one place in the world where any civilian off the street could catch a self-driving car: Pittsburgh, with Uber’s autonomous vehicles.

Now there are two. Maybe.

Lyft has announced it’s running public trials with nuTonomy in Boston, although the word “select” makes me wonder if the trial really is open to anybody:

Today we’re happy to announce the first public self-driving rides available through the Lyft app, powered by nuTonomy’s technology.

This follows through on both companies’ commitment to bring nuTonomy self-driving vehicles to the Lyft network in Boston by the end of the year.

Select passengers in Boston’s Seaport District will be matched with nuTonomy self-driving vehicles when they request rides through the Lyft app.

Pretty exciting!

Tesla Produces Its Own Chips

Tesla hinted at this before, but apparently its long-term plan is to build its own autonomous vehicle chips. It is taking “vertical integration” to a whole new level.

(Interestingly, when I looked up vertical integration on Wikipedia just now, the opening paragraph of the article lists Ford as an example. The more things change, the more they stay the same.)

Elon Musk apparently announced this at an event for AI researchers in Long Beach last week, concurrent with NIPS 2017.

The event was live-tweeted by Stephen Merity, who is worth a read in his own right.

Delphi Automotive Becomes Aptiv

Reuters reporter Paul Lienert scored one of the first post-spinoff interviews with Kevin Clark, the CEO of Aptiv. Aptiv is a spinoff from Delphi, one of the world’s foremost Tier 1 automotive suppliers. The existing Delphi Technologies will retain the core business of automotive supply, whereas Aptiv will focus on autonomous technology.

In this vein, Delphi’s recent acquisition of nuTonomy will live within the Aptiv spinoff.

The split will hopefully resolve some potential tension for Delphi, as its new autonomous business seemed to be increasingly moving toward competition with the customers of its core automotive supply business. By splitting the companies, the legacy Delphi Technologies business may retain its credibility as a supplier, without carrying a side division engaged in competition with key customers.

One of the key insights to come out of the Reuters interview is Kevin Clark’s statement that the cost of autonomous technology will drop by more than an order of magnitude over the next seven or so years.

While current estimates for the cost of a self-driving hardware and software package range from $70,000 to $150,000, ā€œthe cost of that autonomous driving stack by 2025 will come down to about $5,000 because of technology developments and (higher) volume,ā€ Clark said in an interview.

Delphi is one of the leaders in the development of automotive technology, all the more so with their acquisition of nuTonomy. And their history as a Tier 1 supplier gives them greater insight than most other companies into how costs and production will scale.

So this seems like a prediction to take seriously. And if it comes to pass, that will be a game-changer. At a $5,000 marginal cost, consumers really could own their own self-driving vehicles, without relying on ride-sharing companies.

Of course, there are a host of reasons why consumers still might not want to own cars in the future: the costs of mapping, geofences, and the cratering cost of shared transportation. But $5,000 autonomy would make plausible a lot of scenarios that thus far have seemed unlikely.

Self-Driving Cars in Boston

nuTonomy announced a while ago that they would be testing self-driving cars in Boston, but then I kind of lost track of that, especially in the wake of the Delphi acquisition.

Recently WBUR reported that nuTonomy actually already completed its first pilot program in Boston. Seems like it happened in stealth mode:

Over a two-week trial in November, a select group of volunteers tested out nuTonomy’s self-driving cars in Boston. The participants hailed a ride using the company’s booking app. The trips they took looped around the Seaport District, starting at the company’s Drydock Ave. office and moving onto Summer Street into downtown Boston and back along Congress Street.

Sounds like everything went well, and in fact WBUR reports that another Boston company, Optimus Ride, is also testing there.

I used to joke that there’s a reason every self-driving car company is testing in California, Nevada, or Arizona: lots of sun and warmth.

But with Uber in Pittsburgh and these companies in Boston, we’re making small steps to all-weather support for self-driving cars.

How to Solve the Trolley Problem

The Trolley Problem is a favorite conundrum of armchair self-driving car ethicists.

In the original version of the problem, imagine a trolley running down the rails, about to run over three people tied to the tracks. What if you could throw a switch that would send the trolley down a different track? But what if that track had one person tied down? Would you actually throw the switch to kill one person, even if it meant saving the three people on the original track? Or would you let three people die through inaction?

The self-driving car version of this problem is simpler: what if a self-driving car has to choose between running over a pedestrian, or driving off a cliff and killing the passenger in the vehicle? Whose life is more valuable?

USA Today’s article, “Self-driving cars will decide who dies in a crash,” does a reasonable job tackling this issue in depth, from multiple angles. But the editors didn’t do the article any favors with the headline. It’s not actually self-driving cars that will decide who dies; it’s the humans who design them.

Sebastian Thrun, my boss and the former head of the Google Self-Driving Car Project, has explained why this isn’t a useful question.

I’ve heard another automotive executive call it “An impossible problem. You can’t make that decision, so how can you expect a car to solve it?”

To be honest, I think of it as an unhelpful problem because we don’t have enough data to know, at any given moment, how likely the car is to kill anybody. Fatal accidents involving self-driving cars haven’t yet happened in any meaningful numbers, so the data needed to even work on the problem doesn’t exist.

But, I think I’ve come to a conclusion, at least about the hypothetical ethical dilemma:

The car should minimize the number of people who die, by following utilitarian ethics.

This raises some questions about how to value the lives of children versus adults, but I assume some government statistician in the bowels of the Department of Labor has worked that out.

So why should self-driving cars be utilitarian? Because people want them to be.

From USA Today:

Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, co-authored a study last year that found that while respondents generally agreed that a car should, in the case of an inevitable crash, kill the fewest number of people possible regardless of whether they were passengers or people outside of the car, they were less likely to buy any car “in which they and their family member would be sacrificed for the greater good.”

I’ve seen this in a few places now. The general public thinks cars should be designed to minimize fatalities, even if that means sacrificing the passengers. But they don’t want to ride in a car that would sacrifice passengers.

If you believe, as I do, and as Sebastian does, that these scenarios are vanishingly rare, then who cares? Give the public what they want. In the exceedingly unlikely scenario that a car has to make this choice, choose the lowest number of fatalities.

And if people don’t want to ride in those cars themselves, they can choose not to. They can drive themselves, but of course that is pretty dangerous, too.

I’ll choose to ride in the self-driving cars.

Literature Review: Apple and Baidu and Deep Neural Networks for Point Clouds

Recently, Apple made what they must have known would be a big splash by quietly publishing a research paper with results from a deep neural network that two of their researchers built.

The network and the paper in question were clearly designed for autonomous driving, which Apple has been working on, more or less in secret, for years.

The network in question, VoxelNet, has been trained to perform object detection on lidar point clouds. This isn’t a huge leap from object detection on images, which has been a topic of deep learning research for several years, but it is a new frontier in deep learning for autonomous vehicles. Kudos to Apple for publishing their results.

VoxelNet draws heavily on two previous efforts at applying deep learning to lidar point clouds, both by Baidu-affiliated researchers. Since the three papers work more or less as a trio, I did a quick scan of them together.

3D Fully Convolutional Network for Vehicle Detection in Point Cloud

Bo Li (Baidu)

Bo Li basically applies the DenseBox fully convolutional network (FCN) architecture to a three-dimensional point cloud.

To do this, Li:

  • Divides the point cloud into voxels. So instead of running 2D pixels through a network, we’re running 3D voxels (see the sketch after this list).
  • Trains an FCN to identify features in the voxel-ized point cloud.
  • Upsamples the FCN to produce two output tensors: an objectness tensor, and a bounding box tensor.
  • The bounding box tensor is probably more interesting for perception purposes. It draws a bounding box around cars on the road.
  • Q.E.D.
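
To make the voxel idea concrete, here’s a rough NumPy sketch of what “dividing the point cloud into voxels” can look like. This is my own illustration, not Li’s actual preprocessing; the grid bounds and voxel size are placeholder values.

```python
import numpy as np

def voxelize(points, voxel_size=0.2, grid_min=(-40, -40, -3), grid_max=(40, 40, 1)):
    """Bucket an (N, 3) lidar point cloud into a coarse occupancy grid of voxels,
    the 3D analogue of an image's 2D pixels."""
    grid_min = np.asarray(grid_min, dtype=np.float32)
    grid_max = np.asarray(grid_max, dtype=np.float32)
    in_range = np.all((points >= grid_min) & (points < grid_max), axis=1)
    points = points[in_range]

    dims = np.round((grid_max - grid_min) / voxel_size).astype(np.int32)
    indices = ((points - grid_min) / voxel_size).astype(np.int32)
    indices = np.minimum(indices, dims - 1)  # guard against float round-off at the edge

    occupancy = np.zeros(dims, dtype=np.float32)
    occupancy[indices[:, 0], indices[:, 1], indices[:, 2]] = 1.0
    return occupancy  # a 3D tensor an FCN can convolve over, just like 2D pixels

# 100,000 random points standing in for a real lidar sweep.
cloud = np.random.uniform(low=[-40, -40, -3], high=[40, 40, 1], size=(100_000, 3))
print(voxelize(cloud).shape)  # (400, 400, 20)
```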

Multi-View 3D Object Detection Network for Autonomous Driving

Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, Tian Xia (Tsinghua and Baidu)

A team of Tsinghua and Baidu researchers developed Multi-View 3D (MV3D) networks, which combine lidar and camera images in a complex neural network pipeline.

In contrast to Li’s solo work, which constructs voxels out of the lidar point cloud, MV3D simply takes two separate 2D views of the point cloud: one from the front and one from the top (bird’s eye). MV3D also uses the 2D camera image associated with each lidar scan.

That provides three separate 2D images (lidar front view, lidar top view, camera front view).

MV3D uses each view to create a bounding box in two dimensions. The bird’s-eye lidar view creates a bounding box parallel to the ground, whereas the front lidar view and the camera view each create a 2D bounding box perpendicular to the ground. Combining these 2D bounding boxes yields a 3D bounding box to draw around the vehicle.
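
Here’s a rough sketch of how a bird’s-eye lidar view can be built. This is my own toy version, not MV3D’s exact preprocessing, which encodes several channels (height, intensity, density) rather than the single height map below.

```python
import numpy as np

def birds_eye_view(points, resolution=0.1, side_range=(-40.0, 40.0), fwd_range=(0.0, 70.0)):
    """Project an (N, 3) lidar point cloud onto a top-down 2D grid, keeping the
    maximum height seen in each cell as the pixel value."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = ((x >= fwd_range[0]) & (x < fwd_range[1]) &
            (y >= side_range[0]) & (y < side_range[1]))
    x, y, z = x[keep], y[keep], z[keep]

    # Metric coordinates to cell indices: forward along rows, left/right along columns.
    rows = ((x - fwd_range[0]) / resolution).astype(np.int32)
    cols = ((y - side_range[0]) / resolution).astype(np.int32)

    h = int(round((fwd_range[1] - fwd_range[0]) / resolution))
    w = int(round((side_range[1] - side_range[0]) / resolution))
    image = np.full((h, w), -np.inf, dtype=np.float32)
    np.maximum.at(image, (rows, cols), z)  # keep the tallest point in each cell
    image[np.isneginf(image)] = 0.0        # cells no point fell into
    return image

# 100,000 random points standing in for a real sweep.
cloud = np.random.uniform(low=[-40, -40, -2], high=[40, 40, 1], size=(100_000, 3))
print(birds_eye_view(cloud).shape)  # (700, 800)
```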

At the end of the network, MV3D employs something called “deep fusion” to combine output from each of the three neural network pipelines (one associated with each view). I’ll be honest: I don’t really understand how “deep fusion” works, so leave me a note in the comments if you can follow what they’re doing.

The results are a classification of the object and a bounding box around it.

VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection

Yin Zhou, Oncel Tuzel (Apple)

That brings us to VoxelNet, from Apple, which got so much press recently.

VoxelNet has three components, in order:

  • Feature Learning Network
  • Convolutional Middle Layers
  • Region Proposal Network

The Feature Learning Network seems to be the main “contribution to knowledge”, as the scholars say.

As best I can tell, the network starts with a semi-random sample of points from within “interesting” (my word, not theirs) voxels. This sample of points gets run through a fully connected (not fully convolutional) network, which learns point-wise features relevant to the voxel from which the points came.

The network then uses these point-wise features to develop voxel-wise features that describe each of the “interesting” voxels. I’m oversimplifying wildly, but think of this as learning features that describe each voxel and are relevant to classifying the part of the vehicle in that voxel. So a voxel might have features like “black”, “rubber”, and “treads”, and from those you could guess that the voxel captures part of a tire. Of course, the real features won’t necessarily be intelligible to humans, but that’s the idea.
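
Here’s a toy NumPy sketch of that idea as I understand it, with made-up weights standing in for what the real network would learn: a small fully connected layer computes point-wise features, and a max-pool across the points produces one voxel-wise feature vector.

```python
import numpy as np

def voxel_feature(points_in_voxel, weights, bias):
    """Toy version of the voxel feature encoding as I read it: run each point
    through a fully connected layer (point-wise features), then max-pool across
    the points to get one feature vector for the whole voxel. The real network
    stacks several learned layers; here the weights are just given."""
    # points_in_voxel: (n_points, 3) raw xyz; VoxelNet also appends offsets and reflectance.
    pointwise = np.maximum(points_in_voxel @ weights + bias, 0.0)  # (n_points, d), ReLU
    return pointwise.max(axis=0)                                   # (d,) voxel-wise feature

rng = np.random.default_rng(0)
pts = rng.normal(size=(35, 3))         # 35 lidar points that happened to fall in one voxel
W, b = rng.normal(size=(3, 16)), np.zeros(16)
print(voxel_feature(pts, W, b).shape)  # (16,) -- one descriptor per voxel
```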

These voxel-wise features can then get pumped through the Convolutional Middle Layers and finally through the Region Proposal Network and, voila, out come bounding boxes and classifications.


One of the most impressive parts of this line of research is just how new it is. The two Baidu papers were both first published online a year ago, and only made it into conferences in the last six months. The Apple paper only just appeared online in the last couple of weeks.

It’s an exciting time to be building deep neural networks for autonomous vehicles.