Tesla Autopilot Lowers Insurance Premiums

A British automotive insurer has offered to reduce insurance premiums by 5% for drivers who turn on Autopilot. The insurer, Direct Line, says it doesn’t yet know with certainty whether Autopilot actually makes cars safer.

Direct Line said it was too early to say whether the use of the autopilot system produced a safety record that justified lower premiums. It said it was charging less to encourage use of the system and aid research.

But I have to imagine Direct Line believes Autopilot will make cars safer, even if it doesn’t know that for sure. After all, they’re not offering 5% off to customers who drive blindfolded, on the theory that they need more research on that topic.

Although Direct Line is a UK company, the financial angle of autonomous systems ties in closely with tactics that the US government has used in the past. Famously, the federal government did not directly mandate a drinking age of 21, but rather tied federal highway funds to whether states raised their drinking age to 21.

I can imagine a future scenario in which the government doesn’t mandate the use of autonomous vehicles, but rather a combination of governmental and insurance incentives push drivers gently or not-so-gently toward taking their hands off the wheel.

Say Hello in Detroit

Next month I’ll be checking off a common bucket list item by visiting Michigan in January. Most people go for the weather, but I am in fact going for the North American International Auto Show.

I tease, of course, but I truly am excited to be heading back to the Motor City, and especially for America’s largest auto show.

On Wednesday, January 17, I’ll be speaking on a panel at Automobili-D, the tech section of the show, and I’ll be in town with some Udacity colleagues through the weekend.

Drop me a note at david.silver@udacity.com; I’d love to say hello. It’s always amazing to head to the center of the automotive world. In many ways it reminds me of how cool it was to visit Silicon Valley when I was a software engineer in Virginia, living outside the center of the software world.

We’ll be holding at least one and maybe a few events for Udacity students, potential students, and partners, and I’ll be announcing those here as we nail them down.

See you in Detroit!

Is Boston the Next Pittsburgh?

That’s gotta be a rough headline for Patriots fans 😛

For years, autonomous vehicle development in the US has happened primarily in three locations: Detroit, Silicon Valley, and Pittsburgh.

Detroit because it’s the center of the US automotive industry, Silicon Valley because it’s the center of the US technology industry, and Pittsburgh because…why?

Basically because Pittsburgh is home to the vaunted Carnegie Mellon University Robotics Institute, which counts among its alumni such robotics luminaries as Red Whittaker, Sebastian Thrun, and Chris Urmson. Researchers from the Robotics Institute were famously lured away en masse by Uber, but the academic center appears to have recovered, and the net result has been to make Pittsburgh the home of not only Uber ATG, but also other autonomous vehicle companies like Argo AI and Aptiv.

Here’s a quick readout of the job counts for “autonomous vehicle” on Indeed.com right now:

Mountain View (Silicon Valley): 446
Detroit: 226
Pittsburgh: 86
Boston: 86

So what’s up with Boston?

Partly it’s nuTonomy, which Aptiv (formerly Delphi) purchased for a rumored $450 million. And of course MIT and its vaunted Computer Science and Artificial Intelligence Laboratory (CSAIL).

But further inspection suggests Boston may have a more robust autonomous vehicle industry than Pittsburgh. Indeed.com shows essentially all of Pittsburgh’s autonomous vehicle jobs coming from three companies: Aptiv, Argo, and Uber.

On the other hand, Boston’s autonomous vehicle jobs come from: Square Robot, Liberty Mutual, nuTonomy, Draper, MathWorks, Aurora, Optimus Ride, Lux Research, and the list goes on. That’s a diversified and presumably robust jobs base. Plus, Aptiv just announced a new Boston-based autonomous technology center.

Keep an eye on Beantown.

How Self-Driving Cars Work

Earlier this fall I spoke about how self-driving cars work at TEDxWilmington’s Transportation Salon, which was a lot of fun.

The frame for my talk was a collection of projects students have done as part of the Udacity Self-Driving Car Engineer Nanodegree Program.

So, how do self-driving cars work?

Glad you asked!

Self-driving cars have five core components:

  1. Computer Vision
  2. Sensor Fusion
  3. Localization
  4. Path Planning
  5. Control

Computer vision is how we use cameras to see the road. Humans demonstrate the power of vision by handling a car with basically just two eyes and a brain. For a self-driving car, we can use camera images to find lane lines, or track other vehicles on the road.
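
For a flavor of what this looks like in code, here is a minimal lane-finding sketch with OpenCV, in the spirit of the program’s first project. The image file name and the threshold values are illustrative assumptions, not the project’s actual parameters.

    import cv2
    import numpy as np

    # Load a road image (hypothetical file name, for illustration only).
    image = cv2.imread("road.jpg")

    # Grayscale and blur to suppress noise before edge detection.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # Find edges, then keep only a triangular region ahead of the car.
    edges = cv2.Canny(blurred, 50, 150)
    height, width = edges.shape
    mask = np.zeros_like(edges)
    region = np.array([[(0, height), (width // 2, height // 2), (width, height)]],
                      dtype=np.int32)
    cv2.fillPoly(mask, region, 255)
    masked_edges = cv2.bitwise_and(edges, mask)

    # Fit line segments to the remaining edge pixels with a Hough transform.
    lines = cv2.HoughLinesP(masked_edges, rho=2, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(image, (x1, y1), (x2, y2), (0, 0, 255), 3)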

Sensor fusion is how we integrate data from other sensors, like radar and lasers—together with camera data—to build a comprehensive understanding of the vehicle’s environment. As good as cameras are, there are certain measurements — like distance or velocity — at which other sensors excel, and other sensors can work better in adverse weather, too. By combining all of our sensor data, we get a richer understanding of the world.
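
As a toy illustration of the idea (the program’s actual projects use full Kalman and extended Kalman filters), here is a one-dimensional Kalman-style update that fuses a radar range reading and a camera range estimate into a single belief. The distances and noise values are made-up assumptions.

    def kalman_update(mean, variance, measurement, measurement_variance):
        """Fuse one noisy measurement into the current belief (1-D Kalman update)."""
        new_mean = ((measurement_variance * mean + variance * measurement)
                    / (variance + measurement_variance))
        new_variance = 1.0 / (1.0 / variance + 1.0 / measurement_variance)
        return new_mean, new_variance

    # Initial, very uncertain belief about the distance to the car ahead (meters).
    mean, variance = 20.0, 100.0

    # Hypothetical readings: radar is precise on range, the camera less so.
    mean, variance = kalman_update(mean, variance, measurement=24.8, measurement_variance=0.5)
    mean, variance = kalman_update(mean, variance, measurement=23.5, measurement_variance=4.0)

    print(f"fused distance estimate: {mean:.2f} m (variance {variance:.2f})")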

Localization is how we figure out where we are in the world, which is the next step after we understand what the world looks like. We all have cellphones with GPS, so it might seem like we know where we are all the time already. But in fact, GPS is only accurate to within about 1–2 meters. Think about how big 1–2 meters is! If a car were wrong by 1–2 meters, it could be off on the sidewalk hitting things. So we have much more sophisticated mathematical algorithms that help the vehicle localize itself to within 1–2 centimeters.
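
One common technique for this is a particle filter. Here is a heavily simplified one-dimensional sketch of the weigh-and-resample step; the landmark position, noise values, and particle count are all invented for illustration.

    import numpy as np

    np.random.seed(0)

    # One landmark at a known map position; the car measures its distance to it.
    landmark = 50.0
    true_position = 42.0
    measured_distance = abs(landmark - true_position) + np.random.normal(0, 0.5)

    # Start with particles spread over the rough area GPS gives us.
    particles = np.random.uniform(35.0, 50.0, size=1000)

    # Weight each particle by how well it explains the measurement.
    predicted_distance = np.abs(landmark - particles)
    sigma = 0.5
    weights = np.exp(-0.5 * ((predicted_distance - measured_distance) / sigma) ** 2)
    weights /= weights.sum()

    # Resample: particles that explain the measurement survive, the rest die off.
    particles = np.random.choice(particles, size=1000, p=weights)
    print(f"estimated position: {particles.mean():.2f} (true position: {true_position})")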

Path planning is the next step, once we know what the world looks like, and where in it we are. In the path planning phase, we chart a trajectory through the world to get where we want to go. First, we predict what the other vehicles around us will do. Then we decide which maneuver we want to take in response to those vehicles. Finally, we build a trajectory, or path, to execute that maneuver safely and comfortably.
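
One classic way to build such a trajectory is to fit a jerk-minimizing quintic polynomial between the car’s current state and a desired end state. Here is a sketch; the boundary conditions at the bottom are arbitrary example numbers.

    import numpy as np

    def jerk_minimizing_trajectory(start, end, T):
        """Coefficients of a quintic s(t) that matches position, velocity, and
        acceleration at t=0 and t=T (the jerk-minimizing trajectory)."""
        s0, s0_dot, s0_ddot = start
        sT, sT_dot, sT_ddot = end

        # The conditions at t=0 fix the first three coefficients directly.
        a0, a1, a2 = s0, s0_dot, 0.5 * s0_ddot

        # Solve a 3x3 linear system for the remaining coefficients at t=T.
        A = np.array([[T**3, T**4, T**5],
                      [3 * T**2, 4 * T**3, 5 * T**4],
                      [6 * T, 12 * T**2, 20 * T**3]])
        b = np.array([sT - (a0 + a1 * T + a2 * T**2),
                      sT_dot - (a1 + 2 * a2 * T),
                      sT_ddot - 2 * a2])
        a3, a4, a5 = np.linalg.solve(A, b)
        return [a0, a1, a2, a3, a4, a5]

    # Example: travel from s=0 at 10 m/s to s=60 at 15 m/s over 5 seconds.
    print(jerk_minimizing_trajectory((0.0, 10.0, 0.0), (60.0, 15.0, 0.0), 5.0))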

Control is the final step in the pipeline. Once we have the trajectory from our path planning block, the vehicle needs to turn the steering wheel and hit the throttle or the brake, in order to follow that trajectory. If you’ve ever tried to execute a hard turn at a high speed, you know this can get tricky! Sometimes you have an idea of the path you want the car to follow, but actually getting the car to follow that path requires effort. Race car drivers are phenomenal at this, and computers are getting pretty good at it, too!
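
One classic approach here is a PID controller, which steers based on the cross-track error between the car and the planned path. Here is a minimal sketch; the gains and the cross-track errors below are invented for illustration.

    class PIDController:
        """Steer so as to drive the cross-track error (the car's distance
        from the desired trajectory) toward zero."""

        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.prev_error = 0.0
            self.integral = 0.0

        def steering(self, cte, dt):
            self.integral += cte * dt
            derivative = (cte - self.prev_error) / dt
            self.prev_error = cte
            # Steering command: negative weighted sum of the P, I, and D terms.
            return -(self.kp * cte + self.ki * self.integral + self.kd * derivative)

    # Made-up gains and cross-track errors, just to show the update loop.
    pid = PIDController(kp=0.2, ki=0.004, kd=3.0)
    for cte in [0.8, 0.6, 0.3, 0.1, -0.05]:
        print(f"steering: {pid.steering(cte, dt=0.1):+.3f}")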

The video at the beginning of this post covers similar territory, and I hope that between it and what I’ve written here, you have a better sense of how self-driving cars work.

Ready to start learning how to do it yourself? Apply for our Self-Driving Car Engineer Nanodegree program, or enroll in our Intro to Self-Driving Cars Nanodegree program, depending on your experience level, and let’s get started!

Lyft Off in Boston

It used to be that there was only one place in the world where any civilian off the street could catch a self-driving car: Pittsburgh, with Uber’s autonomous vehicles.

Now there are two. Maybe.

Lyft has announced it’s running public trials with nuTonomy in Boston, although the word “select” makes me wonder if the trial really is open to anybody:

Today we’re happy to announce the first public self-driving rides available through the Lyft app, powered by nuTonomy’s technology.

This follows through on both companies’ commitment to bring nuTonomy self-driving vehicles to the Lyft network in Boston by the end of the year.

Select passengers in Boston’s Seaport District will be matched with nuTonomy self-driving vehicles when they request rides through the Lyft app.

Pretty exciting!

Tesla Produces Its Own Chips

Tesla has hinted at this before, but apparently its long-term plan is to build its own autonomous vehicle chips. The company is taking “vertical integration” to a whole new level.

(Interestingly, when I looked up vertical integration on Wikipedia just now, the opening paragraph of the article lists Ford as an example. The more things change, the more they stay the same.)

Elon Musk apparently announced this at an event for AI researchers in Long Beach last week, concurrent with NIPS 2017.

The event was live-tweeted by Stephen Merity, who is worth a read in his own right.

Delphi Automotive Becomes Aptiv

Reuters reporter Paul Lienert scored one of the first post-spinoff interviews with Kevin Clark, the CEO of Aptiv. Aptiv is a spinoff from Delphi, one of the world’s foremost Tier 1 automotive suppliers. The existing Delphi Technologies will retain the core business of automotive supply, whereas Aptiv will focus on autonomous technology.

In this vein, Delphi’s recent acquisition of nuTonomy will live within the Aptiv spinoff.

The split will hopefully resolve some potential tension for Delphi, as its new autonomous business seemed to be increasingly moving toward competition with the customers of its core automotive supply business. By splitting the companies, the legacy Delphi Technologies business may retain its credibility as a supplier, without carrying a side division engaged in competition with key customers.

One of the key insights to come out of the Reuters interview is Kevin Clark’s statement that the cost of autonomous technology will drop by more than an order of magnitude over the next seven or so years.

While current estimates for the cost of a self-driving hardware and software package range from $70,000 to $150,000, “the cost of that autonomous driving stack by 2025 will come down to about $5,000 because of technology developments and (higher) volume,” Clark said in an interview.

Delphi is one of the leaders in the development of automotive technology, all the more so with their acquisition of nuTonomy. And their history as a Tier 1 supplier gives them greater insight than most other companies into how costs and production will scale.

So this seems like a prediction to take seriously. And if it comes to pass, that will be a game-changer. At a $5,000 marginal cost, consumers really could own their own self-driving vehicles, without relying on ride-sharing companies.

Of course, there are a host of reasons why consumers still might not want to own cars in the future: the cost of mapping, geofences, and the cratering cost of shared transportation. But $5,000 autonomy would make plausible a lot of scenarios that have thus far seemed unlikely.

The “Deep Neural Networks” Lesson

Lesson 7 of the Udacity Self-Driving Car Engineer Nanodegree Program is “Deep Neural Networks.”

I am continuing on my quest to write a post detailing every one of the 67 projects that currently comprise our Self-Driving Car Engineer Nanodegree program curriculum, and today, we look at the “Deep Neural Networks” lesson!

Students actually start learning about deep neural networks prior to this lesson, but this is the lesson where students begin to implement deep neural networks in TensorFlow, Google’s deep learning framework.

In the previous lesson, “Introduction to TensorFlow,” students learned to use TensorFlow to build linear models, like linear or logistic regression. In the “Deep Neural Networks” lesson, students learn new techniques in TensorFlow, to build up these models into neural networks.

Some of the most important building blocks of neural networks are demonstrated in TensorFlow:

  • Activation functions help neural networks represent non-linear models
  • Backpropagation trains neural networks from real data quickly and efficiently
  • Dropout removes neurons randomly during training to prevent overfitting the training data, which makes the model more accurate on new data

Students also learn some practical skills, like how to save and restore models in TensorFlow.
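
To make those pieces concrete, here is a minimal sketch of the same ideas using the tf.keras API with random toy data. Treat it purely as an illustration of the concepts (activations, dropout, training by backpropagation, and saving/restoring a model), not as the course’s code; the data shapes, layer sizes, and file name are arbitrary assumptions.

    import numpy as np
    import tensorflow as tf

    # Toy data: 1,000 examples with 10 features and 3 classes (random, for illustration).
    x_train = np.random.rand(1000, 10).astype("float32")
    y_train = np.random.randint(0, 3, size=(1000,))

    # ReLU activations make the network non-linear; dropout randomly removes
    # units during training to reduce overfitting.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])

    # Backpropagation happens inside fit(): the gradient of the loss is pushed
    # back through the layers to update the weights.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)

    # Save the trained model to disk and restore it later.
    model.save("toy_model.h5")
    restored = tf.keras.models.load_model("toy_model.h5")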

Future lessons take these basic skills and help students apply them to important problems for autonomous vehicles, like how to recognize traffic signs.

Self-Driving Cars in Boston

nuTonomy announced a while ago that they would be testing self-driving cars in Boston, but then I kind of lost track of that, especially in the wake of the Delphi acquisition.

Recently WBUR reported that nuTonomy actually already completed its first pilot program in Boston. Seems like it happened under stealth:

Over a two-week trial in November, a select group of volunteers tested out nuTonomy’s self-driving cars in Boston. The participants hailed a ride using the company’s booking app. The trips they took looped around the Seaport District, starting at the company’s Drydock Ave. office and moving onto Summer Street into downtown Boston and back along Congress Street.

Sounds like everything went well, and in fact WBUR reports that another Boston company, Optimus Ride, is also testing there.

I used to joke that there’s a reason every self-driving car company is testing in California, Nevada, or Arizona — lots of sun and warmth.

But with Uber in Pittsburgh and these companies in Boston, we’re taking small steps toward all-weather support for self-driving cars.

The “Introduction to TensorFlow” Lesson

The sixth lesson of the Udacity Self-Driving Car Engineer Nanodegree Program is “Introduction to TensorFlow.”

TensorFlow is Google’s library for deep learning, and one of the most popular tools for building and training deep neural networks. In the previous lesson, “MiniFlow,” students build their own miniature version of a deep learning library. But for real deep learning work, an industry-standard library like TensorFlow is essential.

This lesson combines videos from Vincent Vanhoucke’s free Udacity Deep Learning course with new material we have added to support installing and working with TensorFlow.

Students learn the differences between regression and classification problems. Then they build a logistic classifier in TensorFlow. Finally, students use fundamental techniques like activation functions, one-hot encoding, and cross-entropy loss to train feedforward networks.
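
To make those terms concrete, here is a tiny numpy illustration of one-hot encoding, softmax, and cross-entropy for a single example. The scores are arbitrary numbers; the lesson itself implements these pieces in TensorFlow.

    import numpy as np

    # Hypothetical scores (logits) for one image over three classes,
    # and its label encoded one-hot (class 0 is the correct answer).
    logits = np.array([2.0, 1.0, 0.1])
    one_hot_label = np.array([1.0, 0.0, 0.0])

    # Softmax turns raw scores into probabilities that sum to 1.
    exp_scores = np.exp(logits - logits.max())  # subtract the max for numerical stability
    probabilities = exp_scores / exp_scores.sum()

    # Cross-entropy compares the predicted probabilities to the one-hot label;
    # the loss is small when the correct class gets high probability.
    cross_entropy = -np.sum(one_hot_label * np.log(probabilities))

    print(probabilities, cross_entropy)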

Most of these topics are already familiar to students from the previous “Introduction to Neural Networks” and “MiniFlow” lessons, but implementing them in TensorFlow is a whole new animal. This lesson provides lots of quizzes and solutions demonstrating how to do that.

Towards the end of the lesson, students walk through a quick tutorial on using GPU-enabled AWS EC2 instances to train deep neural networks. Thank you to our friends at AWS Educate for providing free credits to Udacity students to use for training neural networks!

Deep learning has been around for a long time, but it has only really taken off in the last five years because of the ability to use GPUs to dramatically accelerate the training of neural networks. Students who have their own high-performance GPUs are able to experience this acceleration locally. But many students do not own their own GPUs, and AWS EC2 instances are a cloud tool for achieving the same results from anywhere.

The lesson closes with a lab in which students use TensorFlow to perform the classic deep learning exercise of classifying characters: ‘A’, ‘B’, ‘C’ and so on.