Udacity and Baidu’s New Course: Introduction to Apollo

At CES, Udacity and Baidu jointly announced a free intro course for the Apollo self-driving platform!

Big news from CES! Udacity is going to produce a one-month free course on developing self-driving car software with Baidu Apollo! Sebastian Thrun, Udacity’s founder (and the father of the self-driving car), announced this together with Baidu COO Qi Lu at CES today.

Apollo

Baidu has open-sourced their self-driving car software stack, Apollo, with the goal of creating the “Android of the autonomous driving industry”.

Apollo has four layers:

  • Cloud Service
  • Apollo Open Software Stack
  • Reference Hardware Platform
  • Reference Vehicle Platform

Udacity’s upcoming “Intro to Apollo” course will focus on the top two layers: Cloud Service and Apollo Open Software Stack.

The Course

Apollo is an incredibly exciting platform in the autonomous vehicle industry. We are thrilled to work with the Apollo team to teach students and engineers around the world how to build self-driving car software quickly using the Apollo stack.

I am especially delighted that this will be a free course, open to anyone with the desire to enter this amazing field. There is a huge demand for knowledge about how self-driving cars work, and this course will help educate the world on this topic. Our Self-Driving Car Engineer Nanodegree Program is an intense nine-month journey to becoming a self-driving car engineer, and it offers an amazing learning experience, but it is for advanced engineers. Our Intro to Self-Driving Cars Nanodegree program, meanwhile, is an excellent point of entry for learners newer to the field, and it offers an equally immersive experience. This course adds something new and important to the range of learning options.

China

This is a special opportunity for us to collaborate with Baidu, one of China's leading companies. China is a leader in the autonomous vehicle industry, and Chinese students currently make up 5% of enrollment in Udacity's Self-Driving Car Engineer Nanodegree Program, and 20% of enrollments in all Udacity programs. A major focus for our Self-Driving Car Program in 2018 is to reach even more students in China.

The course will be developed jointly by Baidu’s Apollo team, the Udacity Self-Driving Car team in Mountain View, and the Udacity China team. The course will be in English, but this is a new experiment for us in developing course material in one of our offices outside of the US. I’m excited.

Did I mention I’m excited about this course? Because I’m excited!

Miles Driven

Uber ATG recently announced that its fleet has driven 2 million miles autonomously.

In November, Waymo announced that its fleet had logged 4 million miles.

Cruise has been a little more circumspect, but GM CEO Mary Barra has stated “hundreds of thousands of complex urban miles”.

If you put Tesla Autopilot into the mix, Tesla appears to be in the hundreds of millions of miles, if not billions by now. It's not exactly a fair comparison, since Autopilot is SAE Level 2, whereas Waymo, Uber ATG, and Cruise are all testing Level 4 systems. But it does show the data-collection benefit of getting a real production fleet out on the road.

So, in an industry where miles driven are the name of the game, we have:

Tesla: ~1 billion
Waymo: 4 million
Uber: 2 million
Cruise: ~0.5 million

It’s not clear who’s next — maybe nuTonomy? — but it seems like those are the pacesetters.

Aurora Signs Contracts

Since Chris Urmson departed his perch as the head of Google's Self-Driving Car Project in August 2016, he's flown pretty well below the radar. Urmson strikes me as an understated guy to begin with, so perhaps that's not too surprising.

But his startup, Aurora Innovation, just announced partnerships to power self-driving cars for both Volkswagen and Hyundai:

“Both the VW Group (the parent company to VW, Audi, and many others) and Hyundai are working with Aurora to build the startup’s self-driving technology into some of their vehicles, with the ultimate goal of creating fleets that will be available on demand in cities. Each company says it plans to make Aurora-powered vehicles commercially available by 2021, and both will start testing prototypes later this year.”

Of particular note to me at Udacity, his LinkedIn profile says “we’re hiring” and The Verge reports:

“The war for talent right now is incredible,” Urmson says. “But my experience is that if you treat people well with respect and empower them to solve interesting problems, and you give them a good mission to work at, [keeping the talent] takes care of itself.”

Self-Driving Car Predictions for 2018

Here’s what I think the coming year has in store for self-driving cars.

100% Certain

No Level 5 self-driving cars will be deployed anywhere in the world.
No GPS or DGPS system will reliably exceed 10cm localization accuracy on all public roads in the US.

90% Certain

Level 4 self-driving cars will be available to the general public, on public roads, somewhere in the world.
Deep learning will remain the dominant tool for image classification.
No US road will have a speed limit for autonomous vehicles that is faster than the speed limit for human-driven vehicles.

80% Certain

All Level 4 vehicles available to the general public will use lidar.
Somebody will die in a crash due to a failure of Tesla Autopilot.
Waymo will still have driven more autonomous miles than any other company.
Level 4 self-driving cars will be available to the general public somewhere other than Pittsburgh.
A company will be acquired primarily for its autonomous vehicle capabilities with a valuation above $100M USD.
No dominant technique will emerge for urban motion planning.

70% Certain

Level 4 self-driving cars will be available to the general public in Pittsburgh.
Level 4 self-driving cars will be available to the general public somewhere in China.
Tesla will sell the most advanced self-driving system available to the general public.
Deep learning will not be the dominant tool for object classification from point clouds.

60% Certain

2,000 students will have graduated from the Udacity Self-Driving Car Engineer Nanodegree Program.
Level 4 self-driving cars will be available to the general public somewhere in Europe.
Waymo will have exceeded 10 million miles driven.
Tesla will produce 5,000 Model 3 vehicles in a single calendar week.
No member of the general public will die in a Level 4 autonomous vehicle.

50% Certain

Cruise Automation will open its Level 4 fleet to the general public.
Level 3 self-driving cars will be available for purchase by the general public.
A company will be acquired primarily for its autonomous vehicle capabilities with a valuation above $1B USD.
1,000 Udacity students will have jobs in the autonomous vehicle industry.
Self-driving cars will be legal for public use somewhere in India.


In a year, I’ll score these and see how I did!

Take a look at my 2017 predictions and scoring.

How Udacity Students Build Path Planners for Highway Driving

Many students describe the Path Planning Project as the most challenging project in the entire Udacity Self-Driving Car Engineer Nanodegree program. This is understandable. Path planning is hard! But it’s not too hard, and I’m going to tell you a bit about the project—and about path planning in general—in this post.

There are three core components to path planning: 1) Predicting what other vehicles on the road will do next, 2) Deciding on a maneuver to execute, in response to our own goals, and to our predictions about other vehicles, and 3) Building a trajectory to execute the maneuver we decide on.
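The three components can be sketched as a skeletal pipeline. All class and method names below are hypothetical, purely to illustrate the structure (real implementations for this project are in C++; Python keeps the sketch short):

```python
# A minimal, illustrative path-planning pipeline mirroring the three
# components above: predict, decide, build trajectory.

class PathPlanner:
    def __init__(self):
        self.lane = 1  # current lane index

    def predict(self, sensor_fusion):
        """1) Predict where each nearby vehicle will be next."""
        horizon = 1.0  # seconds to look ahead
        return [
            {"id": v["id"], "s": v["s"] + v["speed"] * horizon, "lane": v["lane"]}
            for v in sensor_fusion
        ]

    def decide(self, predictions):
        """2) Decide on a maneuver, based on our goals and the predictions."""
        blocked = any(
            p["lane"] == self.lane and 0 < p["s"] < 30 for p in predictions
        )
        return "change_lane" if blocked else "keep_lane"

    def build_trajectory(self, maneuver):
        """3) Build a coarse trajectory (waypoints) for the chosen maneuver."""
        target_lane = self.lane + 1 if maneuver == "change_lane" else self.lane
        return [(s, target_lane) for s in range(0, 50, 10)]

planner = PathPlanner()
cars = [{"id": 0, "s": 10.0, "speed": 5.0, "lane": 1}]  # a slow car ahead
maneuver = planner.decide(planner.predict(cars))
path = planner.build_trajectory(maneuver)
```

In a real planner each stage is far richer, but the data flow — sensor fusion in, predictions to a behavior layer, behavior to a trajectory generator — follows this shape.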

This is a project that provides students a lot of freedom in how to implement their solution. Here are five approaches from our amazing students!

Reflections on Designing a Virtual Highway Path Planner (Part 1/3)

Mithi

I love Mithi’s series of posts on the Path Planning Project. Her first post covers the project outline and her solution design process. The second post covers her data structures and pipeline. The third and final post dives into the mechanics and math required to actually produce a path. This is a great series of posts for anybody thinking about building a path planner.

The goal is to create a path planning pipeline that would smartly, safely, and comfortably navigate a virtual car around a virtual highway with other traffic. We are given a map of the highway, as well as sensor fusion and localization data about our car and nearby cars. We are supposed to give back a set of points (x, y) in a map that a perfect controller will execute every 0.02 seconds. Navigating safely and comfortably means we don't bump into other cars, and we don't exceed the maximum speed, acceleration, and jerk requirements. Navigating smartly means we change lanes when the car in front of us is too slow.
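Those constraints are easy to check mechanically: with one waypoint consumed every 0.02 seconds, finite differences of the (x, y) points give speed, acceleration, and jerk. A rough sketch (the limit values here are illustrative, not the simulator's exact thresholds):

```python
import numpy as np

DT = 0.02  # seconds between waypoints, per the simulator

def check_limits(xs, ys, max_speed=22.0, max_accel=10.0, max_jerk=10.0):
    """Check a candidate path against speed, acceleration, and jerk
    limits using finite differences of the waypoints."""
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    vx, vy = np.diff(xs) / DT, np.diff(ys) / DT
    speed = np.hypot(vx, vy)            # m/s at each step
    accel = np.abs(np.diff(speed)) / DT  # m/s^2
    jerk = np.abs(np.diff(accel)) / DT   # m/s^3
    return bool(speed.max() <= max_speed
                and accel.max() <= max_accel
                and jerk.max() <= max_jerk)

# A car cruising at a steady 20 m/s in a straight line passes all checks:
xs = [20.0 * DT * i for i in range(50)]
ys = [0.0] * 50
ok = check_limits(xs, ys)
```

The same check run on a 25 m/s path would fail the (illustrative) 22 m/s speed limit.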

Path Planning in Highways for an Autonomous Vehicle

Mohan Karthik

In contrast to Mithi’s articles, which take you through her process of building a path planner, Mohan’s writeup does a great job of describing the final result. In particular, I was interested to read about the voting system he used for deciding on lane changes.

Instead of quoting Mohan, I’ll share the flowchart he built:

Self-Driving Car Engineer Diary — 11

Andrew Wilkie

This installment of Andrew’s long-running “diary” covers the Path Planning Project at a high level, and details how it fits into the third term of the Nanodegree program. Like his fellow classmates, Andrew also found this to be a challenging project.

I found the path planning project challenging, in large part due to the fact that we are implementing SAE Level 4 functionality in C++, and the complexity that comes with the interactions required between the various modules.

Udacity Self Driving Car — Project 11 Path planning

Shyam Jagannathan

Shyam’s post contains a particularly concise 6-point walkthrough of trajectory generation, which is both fundamental to building a path planner, and surprisingly challenging.

The trajectory generation part, which is the most difficult, is covered as part of the project walk-through by Aaron Brown and David Silver. LINK. They recommend using the open-source C++ tk::spline method to generate a 5th degree polynomial which helps minimize jerk while accelerating, decelerating and changing lanes.
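The jerk-minimizing 5th degree polynomial mentioned in the walkthrough can be derived by solving a small linear system: fix position, velocity, and acceleration at the start and end of the maneuver, and solve for the quintic's six coefficients. Here is a minimal numpy sketch (the function name is mine, not from Shyam's code):

```python
import numpy as np

def jerk_minimizing_trajectory(start, end, T):
    """Return coefficients [a0..a5] of the quintic polynomial
    s(t) = a0 + a1*t + ... + a5*t^5 connecting two states.

    start, end: [position, velocity, acceleration] at t=0 and t=T.
    """
    # The boundary conditions at t=0 fix the first three coefficients.
    a0, a1, a2 = start[0], start[1], start[2] / 2.0
    # The boundary conditions at t=T give a 3x3 linear system in a3..a5.
    A = np.array([
        [T**3,       T**4,      T**5],
        [3 * T**2,   4 * T**3,  5 * T**4],
        [6 * T,     12 * T**2, 20 * T**3],
    ])
    b = np.array([
        end[0] - (a0 + a1 * T + a2 * T**2),  # position residual
        end[1] - (a1 + 2 * a2 * T),          # velocity residual
        end[2] - 2 * a2,                     # acceleration residual
    ])
    a3, a4, a5 = np.linalg.solve(A, b)
    return [a0, a1, a2, a3, a4, a5]

# Accelerate from rest at s=0 to s=100 m, reaching 20 m/s, over 8 seconds:
coeffs = jerk_minimizing_trajectory([0.0, 0.0, 0.0], [100.0, 20.0, 0.0], T=8.0)
```

Evaluating the resulting polynomial (and its derivative) at t=T recovers the target position and velocity exactly, which is a handy sanity check.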

Highway Path Planning

Alena Kastsiukavets

Alena touches on several interesting points with her post. She focuses on cost functions, which she identifies as the most important part of the project. The post describes her finite state machine and the associated cost functions in detail, and describes how the car decides when to shift lanes. She also touches on how she merged the two branches of her path planner — one for the Nanodegree project, and one for the Bosch Challenge — to create a more generalized planner.

The most important part of the project is to define cost functions. Let's say we are on a highway with no other cars around. In such a situation, we should stay in the same lane. Constant lane changing is extremely uncomfortable, so my first cost function is change_lane_cost. We penalize our trajectory if we want to change lanes. Honestly, I did a small trick for the Bosch Challenge: I did not penalize the trajectory if I want to move into the middle lane. It gives me more freedom with maneuvers. Otherwise, I can be stuck in the left-most or right-most lane when my lane and the middle lane are busy.
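Alena's change_lane_cost idea, including the middle-lane exception, can be sketched in a few lines. The weights, the speed_cost companion function, and all names here are illustrative, not taken from her code:

```python
MIDDLE_LANE = 1  # lanes indexed 0 (left), 1 (middle), 2 (right)

def change_lane_cost(current_lane, target_lane):
    """Penalize lane changes -- except moves into the middle lane,
    which keep more maneuvers available later (the 'small trick')."""
    if target_lane == current_lane:
        return 0.0
    if target_lane == MIDDLE_LANE:
        return 0.0  # moving to the middle lane is free
    return 1.0

def speed_cost(lane_speed, target_speed=22.0):
    """Penalize lanes whose traffic moves below our target speed."""
    return max(0.0, (target_speed - lane_speed) / target_speed)

def total_cost(current_lane, target_lane, lane_speed):
    # Weight comfort (lane changes) more heavily than speed.
    return 10.0 * change_lane_cost(current_lane, target_lane) + speed_cost(lane_speed)

# From the right lane (2), behind slow traffic, with a free-flowing middle lane:
stay = total_cost(2, 2, lane_speed=10.0)
move = total_cost(2, 1, lane_speed=22.0)
# move < stay, so the planner shifts into the middle lane.
```

The behavior layer simply evaluates every candidate maneuver this way and picks the cheapest one.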

Seeing our students working through these challenges, experiencing their solutions, and learning about their processes fills me with so much excitement about the future of this field—these students represent the next generation of self-driving car engineers, and based on the work they’re already doing, I am certain they’re going to be making incredible contributions. I am especially moved by their generosity in taking the time to share in such detail the work they’re engaged in, and it’s a real pleasure to share their articles with you.

~

Ready to start working on self-driving cars yourself? Apply for our Udacity Self-Driving Car Engineer Nanodegree program today!

Velodyne Lidar Price Reduction

CNET has breaking news that Velodyne is halving the price of its popular VLP-16 lidar sensor, from $8,000 to $4,000.

I haven’t seen this reported anywhere else, and even if it’s true it’s not a huge deal on its own. But it would be a sign of potentially huge changes ahead.

Why?

  1. Lidar is a critical sensor for every major autonomous vehicle company (except Tesla).
  2. Lidar is by far the most expensive sensor on the vehicle. The VLP-16 is “only” $8,000, but Velodyne’s top-end HDL-64E retails for about $100,000. And even at that price, historically there has been a multi-month backlog!
  3. Velodyne, the industry’s leading lidar manufacturer, has been ramping up a “megafactory” in San Jose.
  4. It’s possible that the VLP price drop is an early sign of a huge increase in the supply of lidar sensors onto the market.
  5. If that’s true, then the price of all lidar sensors might drop and keep dropping.
  6. Lower lidar prices could lead to safer cars, as it becomes affordable to put more sensors on the vehicle.
  7. Lower lidar prices could lead to lower self-driving car prices, challenging the notion that only ridesharing companies can afford these vehicles.

That’s a long string of events, and it starts with an unconfirmed report from CNET. But it’s fun to think about.

As a disclaimer, Udacity uses Velodyne sensors, including the VLP-16, on Carla, Udacity’s self-driving car.

Scoring My 2017 Predictions

Exactly one year ago, I laid out a number of predictions for the self-driving car world in 2017, along with associated percentages (I lifted the percentages idea from Scott Alexander). Probably I should’ve written this post yesterday, on the last day of 2017, but better late than never.

Here are my predictions from one year ago, scored. Bold font means the prediction was correct. Plain font means the prediction was incorrect.

  1. 1 Udacity Self-Driving Car Engineer Nanodegree Program student will start a new, permanent, full-time autonomous vehicle job: 99%
  2. Level 4 self-driving cars will be available for ridesharing on public roads somewhere in the world: 90%
  3. Level 4 self-driving cars will be available for ridesharing on public roads somewhere in the United States: 90%
  4. Ford will not push back its 2021 target launch for Level 4 vehicles: 90%
    [UPDATE: Ford CEO Jim Hackett has walked this back a little, but officially Ford is still committed to 2021.]
  5. 100 Udacity Self-Driving Car Engineer Nanodegree Program students will start new, permanent, full-time autonomous vehicle jobs: 90%
  6. No US highway will have a speed limit for autonomous vehicles that is faster than the speed limit for human-driven vehicles: 90%
  7. 1000 Udacity Self-Driving Car Engineer Nanodegree Program students will start new autonomous vehicle jobs (possibly contract roles or internships): 80%
    [UPDATE: A year ago I did not appreciate how hard it would be for us to measure this. If you are a student, at some point Udacity will email you a survey about your job search. Please respond! In the meantime, we are looking into more automatic ways to measure students-in-jobs.]
  8. Level 4 self-driving cars will be available for ridesharing on public roads in Singapore: 80%
    [UPDATE: As far as I can tell, nuTonomy is continuing to beta test their service in Singapore, with a target public launch of Q2 2018. Although the prediction was not precise, I am not going to count the current service, because, as far as I know, it’s not open to the general public.]
  9. A company will be acquired primarily for its autonomous vehicle capabilities with a valuation above $100 MM USD: 80%
  10. No company will sell a vehicle with autonomous technology that exceeds what Tesla offers: 80%
    [UPDATE: Although Business Insider just ranked GM Super Cruise ahead of Tesla Autopilot, it seems close enough and subjective enough that I score them as essentially tied.]
  11. At least 1 company that has not done so before will complete a continuous US coast-to-coast demonstration trip in a fully autonomous vehicle: 80%
    [UPDATE: Torc Robotics.]
  12. Level 4 self-driving cars will be available for ridesharing on public roads in California: 70%
  13. Conditional on Level 3 self-driving cars being legal somewhere, Tesla will enable Level 3 autonomy in that location: 70%
    [UPDATE: Many (most?) US states have not specifically regulated self-driving cars, leading some lawyers to believe that there are no restrictions on self-driving cars in those jurisdictions. So I count this one as wrong, because Level 3 is presumably legal somewhere, but Tesla has not enabled it.]
  14. Google will offer rides to the public in its self-driving cars: 70%
    [UPDATE: Same logic as Singapore, above.]
  15. No member of the general public will die in a Level 4 vehicle: 70%
  16. 2500 Udacity Self-Driving Car Engineer Nanodegree Program students start new autonomous vehicle jobs (possibly contract roles or internships): 60%
  17. Level 4 self-driving cars will be available for ridesharing on public roads somewhere in Europe: 60%
  18. Level 4 self-driving cars will not be available for ridesharing on public roads somewhere in China: 60%
  19. No company will be acquired primarily for its autonomous vehicle capabilities with a valuation above $500 MM USD: 60%
    [UPDATE: Mobileye. Oops.]
  20. A new autonomous vehicle startup will form and raise money at a valuation above $75 MM USD: 60%
    [UPDATE: Argo AI.]
  21. Ford will commit to launching Level 4 vehicles before 2021: 50%
  22. Level 4 self-driving cars will be available for ridesharing on public roads somewhere in the world, without a safety driver: 50%
    [UPDATE: Same logic as Singapore, above.]
  23. Level 3 self-driving cars will be available for private ownership somewhere in the United States: 50%
  24. Somebody will die in a Tesla Autopilot crash: 50%
  25. Level 4 autonomous vehicles will be available for public ridesharing when snow is on the ground: 50%

99% confidence: 1 / 1 (100% correct)

90% confidence: 5 / 5 (100% correct)

80% confidence: 3 / 5 (60% correct)

70% confidence: 1 / 4 (25% correct)

60% confidence: 2 / 5 (40% correct)

50% confidence: 0 / 5 (0% correct)

Statistically, based on my confidence assessments, about 18 of the predictions should have been correct. In fact, only 12 were correct. So I was overconfident in my ability to predict the future.
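The calibration arithmetic is simple to reproduce — sum the stated confidences to get the expected number of correct predictions, and compare with the actual tally per bucket:

```python
# Reproducing the calibration math: expected correct = sum of the stated
# confidences; actual correct = the per-bucket scores listed above.
buckets = {
    0.99: (1, 1),  # confidence: (correct, total)
    0.90: (5, 5),
    0.80: (3, 5),
    0.70: (1, 4),
    0.60: (2, 5),
    0.50: (0, 5),
}

expected = sum(conf * total for conf, (_, total) in buckets.items())
actual = sum(correct for correct, _ in buckets.values())

print(round(expected, 2), actual)  # about 17.79 expected vs. 12 actual
```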

Mostly, I appear to have been too optimistic, although in some cases (i.e. fatalities) I was too pessimistic. A few of my predictions were also not precise enough.

Some important stories from 2017 do not show up in my predictions:

  • The Waymo v. Uber lawsuit.
  • The success of Cruise Automation in building a Level 4 fleet.
  • Tesla’s manufacturing difficulties.
  • The 10,000+ students who have enrolled in the Nanodegree Program, many of whom will get jobs in the industry in 2018.
  • Voyage spinning out from Udacity.
  • The growing importance of lidar.
  • Investments by Didi, Baidu, and Chinese companies generally.
  • NVIDIA’s dominance of the self-driving car chip market.

2018 predictions coming tomorrow.

The Trough of Disillusionment

Wired alerts the world that self-driving cars have a long way to go:

OK, so you won’t get a fully autonomous car in your driveway anytime soon. Here’s what you can expect, in the next decade or so: Self-driving cars probably won’t operate where you live, unless you’re the denizen of a very particular neighborhood in a big city like San Francisco, New York, or Phoenix. These cars will stick to specific, meticulously mapped areas. If, by luck, you stumble on an autonomous taxi, it will probably force you to meet it somewhere it can safely and legally pull over, instead of working to track you down and assuming hazard lights grant it immunity wherever it stops. You might share that ride with another person or three, à la UberPool.

More precisely, the article is titled, “After Peak Hype, Self-Driving Cars Enter the Trough of Disillusionment”, a reference to the Gartner Hype Cycle.

Color me unconvinced. The Hype Cycle is conceptually interesting, but has been subject to numerous criticisms — most comically, that it is not actually a cycle, and most importantly, that it’s not really accurate.

Maybe self-driving cars will go through a trough of disillusionment, but that hardly seems guaranteed.

My guess: in the next decade, a lot of cities will start to get self-driving cars. They probably will stick to specific, meticulously mapped areas, but those geofences will expand over time, as fleets of vehicles share sensor data.

That progress seems exciting, not disillusioning.

The Best ADAS Product on the Market

According to Matthew DeBord from Business Insider, it’s Cadillac Super Cruise, which he rates as better than Tesla Autopilot or Mercedes-Benz Drive Pilot.

Super Cruise was superb, in my limited time with the tech, and when it was willing to operate. It’s a hyper-conservative approach to Level 2 autonomy — the level at which the driver must monitor the system, but can consider taking his or her hands off the wheel while being prepared to resume control when prompted.

Read the whole thing.

The “Convolutional Neural Networks” Lesson

The 8th lesson of the Udacity Self-Driving Car Engineer Nanodegree program is “Convolutional Neural Networks.” This is where students learn to apply deep learning to camera images!

Convolutional neural networks (CNNs) are a special category of deep neural networks that are specifically designed to work with images. CNNs have multiple layers, with each layer connected to the next by “convolutions.”

In practice, what this means is that we slide a patch-like “filter” over the input layer, and the filter applies weights to each artificial neuron in the input layer. The filter connects to a single artificial neuron in the output layer, thereby connecting each neuron in the output layer to a small set of neurons from the input layer.
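That sliding-filter operation is just a few lines of numpy. This sketch implements the "valid", stride-1 case (deep learning frameworks actually compute this cross-correlation when they say "convolution"):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide `kernel` over `image` (valid region, stride 1) and return
    the output feature map. Each output neuron is a weighted sum of the
    small input patch currently under the filter."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# A 3x3 filter that responds to vertical edges:
edge_filter = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

# A toy image with a bright left half and a dark right half:
image = np.zeros((5, 5))
image[:, :2] = 1.0

feature_map = convolve2d(image, edge_filter)
# The feature map is strongly activated where the bright-to-dark edge
# falls under the filter, and zero elsewhere.
```

In a trained CNN the filter weights are learned rather than hand-picked, but the sliding arithmetic is exactly this.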

To make this more concrete, consider this photograph of a dog:

When we run this photograph through a CNN, we’ll slide a filter over the image:

This filter will, broadly speaking, identify basic “features.” It might identify one frame as a curve, and another as a hole:

Curve
Hole

The next layer in the CNN would pass a different filter over a stack of these basic features, and identify more sophisticated features, like a nose:

Nose

The final layer of the CNN is responsible for classifying these increasingly sophisticated features as a dog.

This is of course simplified for the sake of explanation, but hopefully it helps to make the process clear.

One of the more vexing aspects of deep learning is that the actual “features” that a network identifies are not necessarily anything humans would think of as a “curve” or a “nose.” The network learns whatever it needs to learn in order to identify the dog most effectively, but that may not be anything humans can really describe well. Nonetheless, this description gets at the broad scope of how a CNN works.

Once students learn about CNNs generally, it’s time to practice building and training them with TensorFlow. As Udacity founder Sebastian Thrun says, “You don’t lose weight by watching other people exercise.” You have to write the code yourself!

The back half of the lesson covers some deep learning topics applicable to CNNs, like dropout and regularization.

The lesson ends with a lab in which students build and train LeNet, the famous network by Yann LeCun, to identify characters. This is a classic exercise for learning convolutional neural networks, and a great way to learn the fundamentals.

Ready to start learning how to build self-driving cars yourself? Great! If you have some experience already, you can apply to our Self-Driving Car Engineer Nanodegree program here, and if you’re just getting started, then we encourage you to enroll in our Intro to Self-Driving Cars Nanodegree program here!

~

Thanks to my former colleague, Dhruv Parthasarathy, who built out this intuitive explanation in even greater detail as part of this lesson!

We’re also grateful to Vincent Vanhoucke, Principal Scientist at Google Brain, who teaches the free Udacity Deep Learning course, from which we drew for this lesson.