The Trough of Disillusionment

Wired alerts the world that self-driving cars have a long way to go:

OK, so you won’t get a fully autonomous car in your driveway anytime soon. Here’s what you can expect, in the next decade or so: Self-driving cars probably won’t operate where you live, unless you’re the denizen of a very particular neighborhood in a big city like San Francisco, New York, or Phoenix. These cars will stick to specific, meticulously mapped areas. If, by luck, you stumble on an autonomous taxi, it will probably force you to meet it somewhere it can safely and legally pull over, instead of working to track you down and assuming hazard lights grant it immunity wherever it stops. You might share that ride with another person or three, à la UberPool.

More precisely, the article is titled, “After Peak Hype, Self-Driving Cars Enter the Trough of Disillusionment”, a reference to the Gartner Hype Cycle.

Color me unconvinced. The Hype Cycle is conceptually interesting, but has been subject to numerous criticisms — most comically, that it is not actually a cycle, and most importantly, that it’s not really accurate.

Maybe self-driving cars will go through a trough of disillusionment, but that hardly seems guaranteed.

My guess: in the next decade, a lot of cities will start to get self-driving cars. They will probably stick to specific, meticulously mapped areas, but those geofences will expand over time as fleets of vehicles share sensor data.

That progress seems exciting, not disillusioning.

Mobile Phone Wars 2.0

Somebody (supposedly Mark Twain) once said, “History does not repeat itself, but it often rhymes”.

A headline I read today about LG breaking into the self-driving car ecosystem made me think about how much the 2017 self-driving car world looks like the 2007 mobile phone world.

Both were on the cusp of huge breakthroughs, but not quite there yet. And a lot of the players are the same.


There are certainly lots of important companies in both ecosystems that are not on this list, but these are just the ones that popped into my head at the moment.

Some of the pure automotive manufacturers and suppliers were mostly absent from the mobile phone wars. Conversely, some of the pure telecommunications companies don’t have much of a presence in the self-driving car race. But in between, a striking number of players remain the same.

The Best ADAS Product on the Market

According to Matthew DeBord from Business Insider, it’s Cadillac Super Cruise, which he rates as better than Tesla Autopilot or Mercedes-Benz Drive Pilot.

Super Cruise was superb, in my limited time with the tech, and when it was willing to operate. It’s a hyper-conservative approach to Level 2 autonomy — the level at which the driver must monitor the system, but can consider taking his or her hands off the wheel while being prepared to resume control when prompted.

Read the whole thing.

The “Convolutional Neural Networks” Lesson

The 8th lesson of the Udacity Self-Driving Car Engineer Nanodegree program is “Convolutional Neural Networks.” This is where students learn to apply deep learning to camera images!

Convolutional neural networks (CNNs) are a special category of deep neural networks that are specifically designed to work with images. CNNs have multiple layers, with each layer connected to the next by “convolutions.”

In practice, what this means is that we slide a patch-like “filter” over the input layer, and the filter applies weights to the small patch of artificial neurons it currently covers. The filter connects that patch to a single artificial neuron in the output layer, so each neuron in the output layer is connected to a small set of neurons from the input layer.
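
To see the sliding-filter mechanic in code, here is a minimal sketch in plain NumPy. The convolve2d helper is my own illustration, and the image and filter values are made up; a real CNN learns its filter weights during training.

```python
import numpy as np

def convolve2d(image, kernel, stride=1):
    """Slide `kernel` over `image`, producing one output value per patch."""
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    output = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            output[i, j] = np.sum(patch * kernel)  # one output neuron per patch
    return output

image = np.random.rand(28, 28)   # toy grayscale image
kernel = np.random.rand(3, 3)    # 3x3 filter; the same weights are shared across all patches
feature_map = convolve2d(image, kernel)
print(feature_map.shape)         # (26, 26)
```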

To make this more concrete, imagine a photograph of a dog. When we run the photograph through a CNN, we slide a filter over the image.

This filter will, broadly speaking, identify basic “features.” It might identify one patch as a curve, and another as a hole.

The next layer in the CNN would pass a different filter over a stack of these basic features, and identify more sophisticated features, like a nose.

The final layer of the CNN is responsible for classifying these increasingly sophisticated features as a dog.

This is of course simplified for the sake of explanation, but hopefully it helps to make the process clear.

One of the more vexing aspects of deep learning is that the actual “features” that a network identifies are not necessarily anything humans would think of as a “curve” or a “nose.” The network learns whatever it needs to learn in order to identify the dog most effectively, but that may not be anything humans can really describe well. Nonetheless, this description gets at the broad strokes of how a CNN works.

Once students learn about CNNs generally, it’s time to practice building and training them with TensorFlow. As Udacity founder Sebastian Thrun says, “You don’t lose weight by watching other people exercise.” You have to write the code yourself!

The back half of the lesson covers some deep learning topics applicable to CNNs, like dropout and other regularization techniques.
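
As a taste of that material, here is how dropout might look in code. This is my own minimal sketch using the tf.keras API, not the lesson's code:

```python
import tensorflow as tf

# Dropout randomly zeroes a fraction of activations during training,
# which discourages the network from relying on any single feature.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.5),  # drop 50% of activations while training
    tf.keras.layers.Dense(10, activation='softmax'),
])
```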

The lesson ends with a lab in which students build and train LeNet, the famous network by Yann LeCun, to identify characters. This is a classic exercise for learning convolutional neural networks, and a great way to learn the fundamentals.
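
For reference, a LeNet-5-style architecture looks roughly like this in the tf.keras API. The layer sizes follow LeCun's original design, but the lab uses its own starter code, so treat this as a sketch:

```python
import tensorflow as tf

# LeNet-5: two convolutional layers, each followed by pooling,
# then three fully connected layers ending in a 10-way classifier.
lenet = tf.keras.Sequential([
    tf.keras.layers.Conv2D(6, kernel_size=5, activation='tanh',
                           input_shape=(32, 32, 1)),   # 32x32 grayscale input
    tf.keras.layers.AveragePooling2D(pool_size=2),
    tf.keras.layers.Conv2D(16, kernel_size=5, activation='tanh'),
    tf.keras.layers.AveragePooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(120, activation='tanh'),
    tf.keras.layers.Dense(84, activation='tanh'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
lenet.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```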

Ready to start learning how to build self-driving cars yourself? Great! If you have some experience already, you can apply to our Self-Driving Car Engineer Nanodegree program here, and if you’re just getting started, then we encourage you to enroll in our Intro to Self-Driving Cars Nanodegree program here!

~

Thanks to my former colleague, Dhruv Parthasarathy, who built out this intuitive explanation in even greater detail as part of this lesson!

We’re also grateful to Vincent Vanhoucke, Principal Scientist at Google Brain, who teaches the free Udacity Deep Learning course, from which we drew for this lesson.

Tesla Autopilot Lowers Insurance Premiums

A British automotive insurer has offered to reduce insurance premiums by 5% for drivers who turn on Autopilot. The insurer, Direct Line, says it doesn’t yet know for certain whether Autopilot actually makes cars safer.

Direct Line said it was too early to say whether the use of the autopilot system produced a safety record that justified lower premiums. It said it was charging less to encourage use of the system and aid research.

But I have to imagine Direct Line believes Autopilot will make cars safer, even if it doesn’t know that for sure. After all, they’re not offering 5% off to customers who drive blindfolded, on the theory that they need more research on that topic.

Although Direct Line is a UK company, the financial angle on autonomous systems ties in closely with tactics the US government has used in the past. Famously, the federal government did not directly mandate a drinking age of 21, but rather tied federal highway funds to whether states raised their drinking age to 21.

I can imagine a future scenario in which the government doesn’t mandate the use of autonomous vehicles, but rather a combination of governmental and insurance incentives push drivers gently or not-so-gently toward taking their hands off the wheel.

Say Hello in Detroit

Next month I’ll be checking off a common bucket list item by visiting Michigan in January. Most people go for the weather, but I in fact am going for the North American International Auto Show.

I tease, of course, but I truly am excited to be heading back to the Motor City, and especially for America’s largest auto show.

On Wednesday, January 17, I’ll be speaking on a panel at Automobili-D, the tech section of the show, and I’ll be in town with some Udacity colleagues through the weekend.

Drop me a note at david.silver@udacity.com; I’d love to say hello. It’s always amazing to head to the center of the automotive world. In many ways it reminds me of how cool it was to visit Silicon Valley when I was a software engineer in Virginia, living outside the center of the software world.

We’ll be holding at least one and maybe a few events for Udacity students, potential students, and partners, and I’ll be announcing those here as we nail them down.

See you in Detroit!

Is Boston the Next Pittsburgh?

That’s gotta be a rough headline for Patriots fans 😛

For years, autonomous vehicle development in the US has happened primarily in three locations: Detroit, Silicon Valley, and Pittsburgh.

Detroit because it’s the center of the US automotive industry, Silicon Valley because it’s the center of the US technology industry, and Pittsburgh because…why?

Basically because Pittsburgh is home to the vaunted Carnegie Mellon University Robotics Institute, which counts among its alumni such robotics luminaries as Red Whittaker, Sebastian Thrun, and Chris Urmson. Researchers from the Robotics Institute were famously lured away en masse by Uber, but the academic center appears to have recovered, and the net result has been to make Pittsburgh the home of not only Uber ATG, but also other autonomous vehicle companies like Argo AI and Aptiv.

Here’s a quick readout of the job counts for “autonomous vehicle” on Indeed.com right now:

Mountain View (Silicon Valley): 446
Detroit: 226
Pittsburgh: 86
Boston: 86

So what’s up with Boston?

Partly nuTonomy, which Aptiv (formerly Delphi) purchased for a rumored $450 million. And of course MIT and its own vaunted Computer Science and Artificial Intelligence Laboratory (CSAIL).

But further inspection shows Boston potentially has a more robust autonomous vehicle industry than Pittsburgh. Indeed.com shows essentially all of Pittsburgh’s autonomous vehicle jobs coming from three companies: Aptiv, Argo, and Uber.

On the other hand, Boston’s autonomous vehicle jobs come from: Square Robot, Liberty Mutual, nuTonomy, Draper, MathWorks, Aurora, Optimus Ride, Lux Research, and the list goes on. That’s a diversified and presumably robust jobs base. Plus, Aptiv just announced a new Boston-based autonomous technology center.

Keep an eye on Beantown.

How Self-Driving Cars Work

Earlier this fall I spoke about how self-driving cars work at TEDxWilmington’s Transportation Salon, which was a lot of fun.

The frame for my talk was a collection of projects students have done as part of the Udacity Self-Driving Car Engineer Nanodegree Program.

So, how do self-driving cars work?

Glad you asked!

Self-driving cars have five core components:

  1. Computer Vision
  2. Sensor Fusion
  3. Localization
  4. Path Planning
  5. Control

Computer vision is how we use cameras to see the road. Humans demonstrate the power of vision by handling a car with basically just two eyes and a brain. For a self-driving car, we can use camera images to find lane lines, or track other vehicles on the road.
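
As a small taste of what that looks like in practice, here is a bare-bones lane-line sketch using OpenCV. The thresholds and region are illustrative values, not tuned parameters from any real project:

```python
import cv2
import numpy as np

def find_lane_lines(image):
    """Very simplified lane finding: edges -> region mask -> line detection."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)  # find strong brightness gradients

    # Keep only a triangular region in front of the car.
    h, w = edges.shape
    mask = np.zeros_like(edges)
    region = np.array([[(0, h), (w // 2, h // 2), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, region, 255)
    masked = cv2.bitwise_and(edges, mask)

    # Fit line segments to the remaining edge pixels.
    return cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=20)
```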

Sensor fusion is how we integrate data from other sensors, like radar and lasers—together with camera data—to build a comprehensive understanding of the vehicle’s environment. As good as cameras are, there are certain measurements — like distance or velocity — at which other sensors excel, and other sensors can work better in adverse weather, too. By combining all of our sensor data, we get a richer understanding of the world.
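
A toy example of the underlying idea: a one-dimensional Kalman-style update that blends two distance estimates, weighting each by its uncertainty. The numbers here are invented for illustration:

```python
def fuse(estimate, est_var, measurement, meas_var):
    """Combine a prior estimate with a new measurement, weighted by variance."""
    gain = est_var / (est_var + meas_var)  # trust the less-noisy source more
    fused = estimate + gain * (measurement - estimate)
    fused_var = (1 - gain) * est_var
    return fused, fused_var

# Radar excels at distance; suppose the camera's estimate is noisier.
distance, variance = fuse(estimate=24.0, est_var=4.0,        # camera: 24 m, noisy
                          measurement=25.5, meas_var=1.0)    # radar: 25.5 m, precise
print(distance, variance)  # fused estimate lands closer to the radar reading
```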

Localization is how we figure out where we are in the world, which is the next step after we understand what the world looks like. We all have cellphones with GPS, so it might seem like we know where we are all the time already. But in fact, GPS is only accurate to within about 1–2 meters. Think about how big 1–2 meters is! If a car were wrong by 1–2 meters, it could be off on the sidewalk hitting things. So we have much more sophisticated mathematical algorithms that help the vehicle localize itself to within 1–2 centimeters.
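
One family of such algorithms is the particle filter. As a purely illustrative sketch (one dimension, one landmark, invented numbers), the core step weights candidate positions by how well they explain a sensor measurement:

```python
import numpy as np

def update_weights(particles, measured_dist, landmark, sigma=0.1):
    """Weight each candidate position by how well it explains the measurement."""
    predicted = np.abs(landmark - particles)  # distance each particle would expect
    weights = np.exp(-0.5 * ((predicted - measured_dist) / sigma) ** 2)
    return weights / weights.sum()

particles = np.random.uniform(0, 5, size=1000)  # candidate car positions (meters)
weights = update_weights(particles, measured_dist=3.0, landmark=7.0)
estimate = np.sum(particles * weights)
print(estimate)  # ~4.0, the position that best explains the measurement
```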

Path planning is the next step, once we know what the world looks like, and where in it we are. In the path planning phase, we chart a trajectory through the world to get where we want to go. First, we predict what the other vehicles around us will do. Then we decide which maneuver we want to take in response to those vehicles. Finally, we build a trajectory, or path, to execute that maneuver safely and comfortably.
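
To make the trajectory step concrete, one common technique is to connect a start state and an end state with a jerk-minimizing quintic polynomial, which reduces to a small linear solve. This is a sketch of that one sub-step, not a full planner:

```python
import numpy as np

def jerk_minimizing_trajectory(start, end, T):
    """Coefficients of a quintic connecting two (pos, vel, accel) states over T seconds."""
    a0, a1, a2 = start[0], start[1], start[2] / 2.0
    # Solve for the remaining coefficients from the end-state constraints.
    A = np.array([[T**3,    T**4,     T**5],
                  [3*T**2,  4*T**3,   5*T**4],
                  [6*T,     12*T**2,  20*T**3]])
    b = np.array([end[0] - (a0 + a1*T + a2*T**2),
                  end[1] - (a1 + 2*a2*T),
                  end[2] - 2*a2])
    a3, a4, a5 = np.linalg.solve(A, b)
    return [a0, a1, a2, a3, a4, a5]

# Go from 2 m/s to a point 10 m ahead at 5 m/s, over 4 seconds.
coeffs = jerk_minimizing_trajectory(start=[0, 2, 0], end=[10, 5, 0], T=4.0)
```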

Control is the final step in the pipeline. Once we have the trajectory from our path planning block, the vehicle needs to turn the steering wheel and hit the throttle or the brake, in order to follow that trajectory. If you’ve ever tried to execute a hard turn at a high speed, you know this can get tricky! Sometimes you have an idea of the path you want the car to follow, but actually getting the car to follow that path requires effort. Race car drivers are phenomenal at this, and computers are getting pretty good at it, too!
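
The classic starting point here is a PID controller, which steers against the car's cross-track error. A minimal sketch, with illustrative gains:

```python
class PID:
    """Proportional-integral-derivative controller for tracking a target path."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def control(self, cte, dt):
        """cte: cross-track error, the car's lateral distance from the desired path."""
        self.integral += cte * dt
        derivative = (cte - self.prev_error) / dt
        self.prev_error = cte
        # Steer against the error, damp oscillation, and cancel steady drift.
        return -(self.kp * cte + self.kd * derivative + self.ki * self.integral)

steering = PID(kp=0.2, ki=0.004, kd=3.0)
angle = steering.control(cte=0.5, dt=0.02)  # steer to reduce a 0.5 m error
```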

The video at the beginning of this post covers similar territory, and I hope that between it and what I’ve written here, you have a better sense of how self-driving cars work.

Ready to start learning how to do it yourself? Apply for our Self-Driving Car Engineer Nanodegree program, or enroll in our Intro to Self-Driving Cars Nanodegree program, depending on your experience level, and let’s get started!

Lyft Off in Boston

It used to be there was only one place in the world where any civilian off the street could catch a self-driving car: Pittsburgh, with Uber’s autonomous vehicles.

Now there are two. Maybe.

Lyft has announced it’s running public trials with nuTonomy in Boston, although the word “select” makes me wonder if the trial really is open to anybody:

Today we’re happy to announce the first public self-driving rides available through the Lyft app, powered by nuTonomy’s technology.

This follows through on both companies’ commitment to bring nuTonomy self-driving vehicles to the Lyft network in Boston by the end of the year.

Select passengers in Boston’s Seaport District will be matched with nuTonomy self-driving vehicles when they request rides through the Lyft app.

Pretty exciting!

Tesla Produces Its Own Chips

Tesla hinted at this before, but apparently its long-term plan is to build its own autonomous vehicle chips. It’s taking “vertical integration” to a whole new level.

(Interestingly, when I looked up vertical integration on Wikipedia just now, the opening paragraph of the article lists Ford as an example. The more things change, the more they stay the same.)

Elon Musk apparently announced this at an event for AI researchers in Long Beach last week, concurrent with NIPS 2017.

The event was live-tweeted by Stephen Merity, who is worth a read in his own right.