Last night Udacity announced a Deep Learning Nanodegree Foundation Program, in partnership with Siraj Raval, who has been teaching deep learning on YouTube to a huge audience.
We’re really excited about this program, which is a little bit of an experiment for Udacity.
If you’re interested in joining the Udacity Self-Driving Car program, you might want to consider the new Deep Learning Nanodegree Foundation Program as a warm-up.
Any student who completes the Deep Learning program is guaranteed admission into the Self-Driving Car program, along with a $100 credit.
A few days ago George Sung, who is in the first cohort of students of the Udacity Self-Driving Car Nanodegree Program, gave a presentation about his experience in the program to the Boston Self-Driving Cars Meetup.
It’s a thorough overview of the program so far. If you’re interested in signing up for the Nanodegree program, or if you’re already a student and interested in how another student has experienced it, it’s worth a watch.
George’s presentation starts at about 18:40 on the video.
The team at Udacity — the whole company, really — has been hard at work on preparing the modules on Introduction, Deep Learning, and Computer Vision. We are eager to hear what students think!
If you are both a reader of this blog and a student in the course, leave a comment and let me know!
We are excited about what we have built, but we also know it needs improvement to get it to where we would like it to be. So we’ll stay on it, and look out for improvements.
A huge focus for our program is helping students secure jobs working on autonomous vehicles. In addition to the instructional content and projects, we are spending a lot of time building up that part of the program.
And if you haven’t applied to join the program yet, please do!
We are working hard to make this the world’s best training program for self-driving car engineers. The entire curriculum will consist of three terms over nine months. Here’s what’s in the program:
Term 1
Introduction
Meet the instructors — Sebastian Thrun, Ryan Keenan, and me. Learn about the systems that comprise a self-driving car, and the structure of the program.
Project: Detect Lane Lines Detect highway lane lines from a video stream. Use OpenCV image analysis techniques to identify lines, including Hough transforms and Canny edge detection.
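For a sense of what this first project involves, here is a minimal sketch of a Canny-plus-Hough pipeline in OpenCV. The file names and threshold values are illustrative placeholders, not the project's required settings.

```python
# A minimal lane-finding sketch: Canny edge detection followed by a Hough transform.
import cv2
import numpy as np

image = cv2.imread("highway_frame.jpg")            # hypothetical input frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # smooth before edge detection
edges = cv2.Canny(blurred, 50, 150)                # Canny edge detection

# Probabilistic Hough transform returns line segments as (x1, y1, x2, y2)
lines = cv2.HoughLinesP(edges, rho=2, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)

if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        cv2.line(image, (x1, y1), (x2, y2), (0, 0, 255), 3)  # draw detected segments

cv2.imwrite("lane_lines.jpg", image)
```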
Deep Learning
Machine Learning: Review fundamentals of machine learning, including regression and classification.
Neural Networks: Learn about perceptrons, activation functions, and basic neural networks. Implement your own neural network in Python.
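To make the idea concrete, here is a toy forward pass for a one-hidden-layer network in plain NumPy. The layer sizes and random data are purely illustrative; the project in this lesson has its own specification.

```python
# Forward pass of a tiny one-hidden-layer network on made-up data.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))          # 4 samples, 3 features (toy data)
W1 = rng.normal(size=(3, 5))         # weights: input -> hidden
W2 = rng.normal(size=(5, 1))         # weights: hidden -> output

hidden = sigmoid(X @ W1)             # hidden-layer activations
output = sigmoid(hidden @ W2)        # network prediction in (0, 1)
print(output.shape)                  # (4, 1)
```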
Logistic Classifier: Study how to train a logistic classifier, using machine learning. Implement a logistic classifier in TensorFlow.
Optimization: Investigate techniques for optimizing classifier performance, including validation and test sets, gradient descent, momentum, and learning rates.
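As a small illustration of the update rule behind gradient descent with momentum, here is a toy example that minimizes a one-dimensional quadratic loss. The learning rate and momentum values are arbitrary choices for the sketch.

```python
# Gradient descent with momentum on the loss (w - 3)^2, which is minimized at w = 3.
learning_rate = 0.1
momentum = 0.9

def gradient(w):
    return 2.0 * (w - 3.0)     # derivative of (w - 3)^2

w, velocity = 0.0, 0.0
for step in range(200):
    velocity = momentum * velocity - learning_rate * gradient(w)
    w += velocity              # the velocity term carries past gradients forward

print(round(w, 3))             # approaches 3.0
```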
Rectified Linear Units: Evaluate activation functions and how they affect performance.
Regularization: Learn techniques, including dropout, to avoid overfitting a network to the training data.
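Here is a minimal sketch of "inverted" dropout applied to a layer's activations in NumPy, just to show the mechanism; in practice TensorFlow and Keras provide dropout layers for you.

```python
# Randomly zero out activations during training and rescale the survivors.
import numpy as np

def dropout(activations, keep_prob=0.5, training=True):
    if not training:
        return activations                     # no dropout at inference time
    mask = np.random.rand(*activations.shape) < keep_prob
    return activations * mask / keep_prob      # scale so the expected value is unchanged

hidden = np.ones((2, 4))
print(dropout(hidden, keep_prob=0.5))
```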
Convolutional Neural Networks: Study the building blocks of convolutional neural networks, including filters, stride, and pooling.
Project: Traffic Sign Classification Implement and train a convolutional neural network to classify traffic signs. Use validation sets, pooling, and dropout to choose a network architecture and improve performance.
Keras: Build a multi-layer convolutional network in Keras. Compare the simplicity of Keras to the flexibility of TensorFlow.
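As a rough illustration of how compact a Keras model definition can be, here is a small convolutional network sketch. The layer sizes, the 32x32x3 input, and the 43-class output are assumptions for illustration, not the project's required architecture.

```python
# A small convolutional network defined in Keras.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    Conv2D(16, (3, 3), activation="relu", input_shape=(32, 32, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(32, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(43, activation="softmax"),   # e.g. 43 traffic-sign classes (illustrative)
])

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```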
Transfer Learning: Fine-tune pre-trained networks to solve your own problems. Study canonical networks such as AlexNet, VGG, GoogLeNet, and ResNet.
Project: Behavioral Cloning Architect and train a deep neural network to drive a car in a simulator. Collect your own training data and use it to clone your own driving behavior on a test track.
Computer Vision
Cameras: Learn the physics of cameras, and how to calibrate, undistort, and transform image perspectives.
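Here is a minimal sketch of the standard OpenCV chessboard-calibration recipe this lesson covers. The 9x6 corner pattern and the image paths are assumptions for illustration.

```python
# Calibrate a camera from chessboard images, then undistort a test image.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners per chessboard row/column
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []
for fname in glob.glob("calibration_images/*.jpg"):   # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern, None)
    if found:
        objpoints.append(objp)                        # known 3D corner positions
        imgpoints.append(corners)                     # detected 2D corner positions

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)

undistorted = cv2.undistort(cv2.imread("test_image.jpg"), mtx, dist)
```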
Lane Finding: Study advanced techniques for lane detection with curved roads, adverse weather, and varied lighting.
Project: Advanced Lane Detection Detect lane lines in a variety of conditions, including changing road surfaces, curved roads, and variable lighting. Use OpenCV to implement camera calibration and transforms, as well as filters, polynomial fits, and splines.
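One piece of that project pipeline, fitting a second-order polynomial to detected lane pixels, might look roughly like this; the pixel coordinates below are made up.

```python
# Fit a curve to pixels believed to belong to one lane line.
import numpy as np

# y (row) and x (column) coordinates of candidate lane pixels (toy values)
lane_y = np.array([100, 200, 300, 400, 500, 600], dtype=float)
lane_x = np.array([310, 318, 330, 346, 366, 390], dtype=float)

# Fit x = A*y^2 + B*y + C; fitting x as a function of y handles near-vertical lines
coeffs = np.polyfit(lane_y, lane_x, 2)

# Evaluate the fit to draw a smooth lane line down the image
plot_y = np.linspace(0, 700, 71)
plot_x = np.polyval(coeffs, plot_y)
```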
Support Vector Machines: Implement support vector machines and apply them to image classification.
Decision Trees: Implement decision trees and apply them to image classification.
Histogram of Oriented Gradients: Implement histogram of oriented gradients and apply it to image classification.
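Putting the last few lessons together, a HOG-plus-linear-SVM classifier sketch might look like this, using scikit-image and scikit-learn. The random arrays stand in for real vehicle and non-vehicle image patches.

```python
# Extract HOG features and train a linear SVM on placeholder data.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def extract_features(gray_image):
    return hog(gray_image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

rng = np.random.default_rng(0)
images = rng.random((20, 64, 64))                  # placeholder 64x64 grayscale patches
labels = np.array([1] * 10 + [0] * 10)             # 1 = vehicle, 0 = not a vehicle

features = np.array([extract_features(img) for img in images])
classifier = LinearSVC()
classifier.fit(features, labels)
print(classifier.predict(features[:5]))
```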
Deep Neural Networks: Compare the classification performance of support vector machines, decision trees, histogram of oriented gradients, and deep neural networks.
Vehicle Tracking: Review how to apply image classification techniques to vehicle tracking, along with basic filters to integrate vehicle position over time.
Project: Vehicle Tracking Track vehicles in camera images using image classifiers such as SVMs, decision trees, HOG, and DNNs. Apply filters to fuse position data.
Term 2
Sensor Fusion
Our terms are broken out into modules, which in turn are made up of a series of focused lessons. This Sensor Fusion module is built with our partners at Mercedes-Benz. The team at Mercedes-Benz is amazing. They are world-class automotive engineers applying autonomous vehicle techniques to some of the finest vehicles in the world. They are also Udacity hiring partners, which means the curriculum we’re developing together is expressly designed to nurture and advance the kind of talent they would like to hire!
Lidar Point Cloud
Below please find descriptions of each of the lessons that together comprise our Sensor Fusion module:
Sensors The first lesson of the Sensor Fusion module covers the physics of two of the most important sensors on an autonomous vehicle — radar and lidar.
Kalman Filters Kalman filters are the key mathematical tool for fusing together data. Implement these filters in Python to combine measurements from a single sensor over time.
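For intuition, here is a toy one-dimensional Kalman filter in Python, tracking a single scalar state; the module's real filters work with multi-dimensional state vectors and covariance matrices.

```python
# One-dimensional Kalman filter: alternate measurement updates and motion predictions.
def kalman_update(x, p, measurement, measurement_variance):
    """Incorporate one noisy measurement into the state estimate."""
    k = p / (p + measurement_variance)       # Kalman gain
    x = x + k * (measurement - x)            # corrected estimate
    p = (1 - k) * p                          # reduced uncertainty
    return x, p

def kalman_predict(x, p, motion, motion_variance):
    """Propagate the estimate forward by a known motion, adding uncertainty."""
    return x + motion, p + motion_variance

x, p = 0.0, 1000.0                            # initial estimate and (large) uncertainty
for z in [5.0, 6.0, 7.0, 9.0, 10.0]:          # made-up measurements
    x, p = kalman_update(x, p, z, measurement_variance=4.0)
    x, p = kalman_predict(x, p, motion=1.0, motion_variance=2.0)
print(x, p)
```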
C++ Primer Review the key C++ concepts for implementing the Term 2 projects.
Project: Extended Kalman Filters in C++ Extended Kalman filters let autonomous vehicle engineers combine measurements from multiple sensors even when the motion and measurement models are non-linear. Building an EKF is an impressive skill to show an employer.
Unscented Kalman Filter The unscented Kalman filter is a mathematically sophisticated approach for combining sensor data. The UKF performs better than the EKF in many situations. This is the type of project sensor fusion engineers have to build for real self-driving cars.
Project: Pedestrian Tracking Fuse noisy lidar and radar data together to track a pedestrian.
Localization
This module is also built with our partners at Mercedes-Benz, who employ cutting-edge localization techniques in their own autonomous vehicles. Together we show students how to implement and use foundational algorithms that every localization engineer needs to know.
Particle Filter
Here are the lessons in our Localization module:
Motion Study how motion and probability affect your belief about where you are in the world.
Markov Localization Use a Bayesian filter to localize the vehicle in a simplified environment.
Egomotion Learn basic models for vehicle movements, including the bicycle model. Estimate the position of the car over time given different sensor data.
Particle Filter Use a probabilistic sampling technique known as a particle filter to localize the vehicle in a complex environment.
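To show the structure of the algorithm, here is a toy one-dimensional particle-filter cycle in Python: predict, weight, resample. Real localization works in two or three dimensions against map landmarks, but the loop is the same.

```python
# One cycle of a 1-D particle filter on made-up motion and measurement values.
import numpy as np

rng = np.random.default_rng(0)
num_particles = 1000
particles = rng.uniform(0.0, 100.0, num_particles)   # initial guesses of position

motion, measurement, sensor_sigma = 1.0, 43.0, 2.0

# 1. Prediction: move every particle, adding motion noise
particles += motion + rng.normal(0.0, 0.5, num_particles)

# 2. Update: weight each particle by the likelihood of the measurement
weights = np.exp(-0.5 * ((particles - measurement) / sensor_sigma) ** 2)
weights /= weights.sum()

# 3. Resample: draw a new particle set in proportion to the weights
indices = rng.choice(num_particles, size=num_particles, p=weights)
particles = particles[indices]

print(particles.mean())     # the estimate is now concentrated near the measurement
```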
High-Performance Particle Filter Implement a particle filter in C++.
Project: Kidnapped Vehicle Implement a particle filter to take real-world data and localize a lost vehicle.
Control
This module is built with our partners at Uber Advanced Technologies Group. Uber is one of the fastest-moving companies in the autonomous vehicle space. They are already testing their self-driving cars in multiple locations in the US, and they’re excited to introduce students to the core control algorithms that autonomous vehicles use. Uber ATG is also a Udacity hiring partner, so pay attention to their lessons if you want to work there!
Here are the lessons:
Control Learn how control systems actuate a vehicle to move it on a path.
PID Control Implement the classic closed-loop controller — a proportional-integral-derivative control system.
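A PID steering controller can be sketched in a few lines; the gains below are made-up values, and the real project tunes them against the simulator. The Term 2 projects themselves are in C++; Python is used here just to show the idea.

```python
# A basic PID controller that turns cross-track error into a steering command.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.previous_error = 0.0

    def control(self, cte, dt=1.0):
        """Return a steering command from the cross-track error (cte)."""
        self.integral += cte * dt
        derivative = (cte - self.previous_error) / dt
        self.previous_error = cte
        return -(self.kp * cte + self.ki * self.integral + self.kd * derivative)

controller = PID(kp=0.2, ki=0.004, kd=3.0)   # illustrative gains
steering = controller.control(cte=0.5)
print(steering)
```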
Linear Quadratic Regulator Implement a more sophisticated control algorithm for stabilizing the vehicle in a noisy environment.
Project: Lane-Keeping Implement a controller to keep a simulated vehicle in its lane. For an extra challenge, use computer vision techniques to identify the lane lines and estimate the cross-track error.
Term 3
Path Planning
Elective
Systems
Term 2 and Term 3 are under construction and we’ll share more details on those as we finalize the curriculum and projects.
All of this, including Term 1, is subject to change as we update the curriculum over time, because part of building a great course is taking feedback and making improvements!
If you’ve been accepted into the course, congratulations! We are excited to teach you.
If we suggested you brush up on a few topics and take a self-assessment before joining the course, please do! We are excited to teach you and want to make sure you have a great experience.
And if you haven’t yet applied, please do! We are taking applications for the 2017 cohorts and would love to have you in the class.
We are so excited to have over 10,000 students apply to join the program, and we hope to teach all of them.
We’re limiting the initial cohort to 500 students to make sure we have everything ready to go to scale up the program over time, but the goal is to be able to teach everyone who wants to learn.
Here is a tentative (subject to change) overview of the first term:
Introduction: You’ll learn about the program, the student support available, and, most importantly, the ways we’ll help you land a job in autonomous vehicles. Within hours of starting, you’ll be writing code to find lane lines on the road.
Deep Learning: You’ll learn about deep neural networks and deep learning frameworks. In the final project you’ll build a deep neural network for end-to-end driving of a vehicle in a simulator.
Computer Vision: You’ll learn about how computers and cameras work together to see the world. In the final project you’ll use OpenCV and deep learning to identify vehicles on the highway.
I am super-excited about this program and I hope you are, too. Please join us!
At 4:30pm PDT today (Tuesday, September 20) we’ll be hosting an online open house for the Udacity Self-Driving Car Engineer Nanodegree program.
We’re collecting questions from students, and our Director of Learning, Dhruv Parthasarathy, will be asking me to answer them live. Hopefully I won’t trip up live on the Internet.
If you have any questions you’d especially like us to answer, please leave them in the comments here.
I was at TechCrunch Disrupt yesterday, where Udacity’s founder and chairman, Sebastian Thrun, announced the opening of applications for our Self-Driving Car Nanodegree program.
The biggest company-wide emphasis at Udacity since I joined has been “Only at Udacity”, a focus on launching programs and courses and experiences that only Udacity provides.
A self-driving car program that is available to students everywhere in the world is, on its own, an Only at Udacity program.
Helping students around the world take their code and put it on an actual car takes us to something even beyond that.
So come build a crowd-sourced self-driving car with us. Sign up here!
Udacity will have a big presence at TechCrunch Disrupt in San Francisco this coming week, with a big emphasis on our Self-Driving Car Nanodegree program.
I’ll be at the conference all day on Tuesday, so please come say hello if you’re there. I’ll be standing next to the car wrapped in the Udacity logo, with lidars and radars on it 🙂
Also, Sebastian Thrun goes on-stage on Tuesday at 9:25am to speak about self-driving cars and online education, and he’s more interesting than me, so be sure to catch that.
A month and a half ago I joined Udacity to build the Self-Driving Car Engineer Nanodegree program. Since then, we’ve been hard at work, and we’ll launch by the end of the year.
In the meantime, we quietly floated an announcement and asked about interest, just to make sure there would be somebody there to take the course when we built it.
True story: right before we launched, Dhruv told me that we might cancel the program if we didn’t get 500 interested students, and I kind of freaked out a little.
Fortunately, we did a lot better than 500.
20,000 students from all over the globe have signed up to learn more about the program — Ushuaia to Hawaii to Tokyo to Tehran to Cabo Verde.
We even have somebody from my birthplace — Juneau, Alaska. If you’re reading this, hello, Juneau!
We are so excited to work with students from around the world.
I am particularly excited to see what students do with the tools we provide. I’m guessing there will be things to come out of this that we couldn’t have dreamed up.
So, thank you for signing up.
If you haven’t signed up yet, please sign up here! It helps keep me employed.
We’ll have the Self-Driving Car Engineer Nanodegree program out to you soon 🙂
One of the first questions we got was, “Are there enough jobs to make this worthwhile? It’s just Google and Tesla, right?”
There are so many jobs!
Transportation-as-a-Service
Uber is building out their own autonomous vehicle division in Pittsburgh.
GM just announced that Lyft will be running GM’s first production run of self-driving cars.
Tech Companies
Google is the most famous tech company working on self-driving cars, but Baidu is working on this as well. Lots of rumors indicate Apple might be working on this.
OEMs
Every OEM has a team, or multiple teams, dedicated to self-driving cars. Tesla, Ford, GM, Toyota, Mercedes, BMW, Audi, Mazda, Subaru, Kia, Volvo, and the list goes on.
Startups
Startups like Otto, Comma.ai, and Zoox are hiring as fast as they can.
Tier 1 Suppliers
Companies like Delphi, Bosch, and Continental are known as Tier 1 suppliers. They sell automotive-grade hardware in bulk to OEMs, and they badly want to win these contracts.
Tier 2 Suppliers
Tier 2 suppliers span a range of specialties, and typically sell their components to Tier 1 suppliers, who in turn package them up for OEMs.
This is a huge category!
In computer vision there is Mobileye, in mapping there is HERE, in processors there is NVIDIA and Intel, in lidar there is Velodyne. So many suppliers!
This is just a sample list of who is hiring. There are a lot more companies, and right now demand for talent is far outstripping supply. It is a great time to go to work on self-driving cars!