Visiting Udacity’s Self-Driving Car Training Center in China

A pioneering training center, developed jointly with Tier IV and PIX, provides an opportunity for students to work in teams on their own self-driving car.

Last week I had the privilege of visiting Udacity’s Self-Driving Car Training Center in Guiyang, China.

This is a facility that our colleagues on Udacity’s China team have developed jointly with Tier IV (the creators of Autoware) and PIX (an autonomous vehicle startup in Guiyang). The center provides an opportunity for students from all over China to come together and work in teams for a week on their own self-driving car. Over the course of the program, they install all of their own software and get the car driving itself around a test track, stopping at traffic lights and stop lines.

This is such an amazing program!

Bringing teams of students together to work in person on a self-driving car is a tremendous experience, and it proved exceptionally valuable for the students at the Training Center.

Watching and working alongside students building self-driving cars in western China, I was reminded of how huge the worldwide talent pool of people who want to build autonomous vehicles really is.

Check out this video of one of the teams getting their car to drive!

As we continue to move forward into the future of autonomous transportation, opportunities to experience firsthand the migration from theory to practice—and from online to on the street—will become ever more valuable to the engineers engaged in making this future a reality. I am excited that Udacity’s China team is helping to make this kind of experience possible for aspiring autonomous engineers, and I am grateful to have had the opportunity to watch it all in action.

Are you interested in joining the next generation of talent building the future of autonomous transportation? Visit our School of Autonomous Systems today!

KPIT Sponsors 500 Self-Driving Car Scholarships For India

Aspiring Self-Driving Car Engineers in India, apply today for the opportunity to work on autonomous vehicles, regardless of your financial situation!

KPIT, one of India’s leading automotive software suppliers, just announced they are sponsoring 500 scholarships for Indian students to take Udacity’s Self-Driving Car Engineer Nanodegree program!

Apply now for the scholarship!

Since we launched the Nanodegree program two years ago, we have seen tremendous interest from students in India who want to learn about autonomous vehicles. Many of the Indian students who have enrolled in the Nanodegree program now work on autonomous vehicles at great Indian companies like KPIT, Ola, and Hi-Tech Robotics.

The KPIT Scholarships will provide the opportunity for any student in India to work on self-driving cars, regardless of their financial situation.

KPIT is making a tremendous investment in Indian software engineers. We are delighted to be able to work with Kishor Patil and the KPIT team to make this possible!

Experience Self-Driving Trucks in India with Flux Auto

See the prototype in action—10km of autonomous highway driving!

One of the great joys of teaching self-driving cars at Udacity is watching the amazing things our students build. I’m based in Silicon Valley, but our students come from all over the world, and I have the opportunity on a daily basis to experience the truly global nature of this transportation revolution.

Case in point: Flux Auto.

One of our Udacity self-driving car graduates, Shilpaj Bhalerao, is the tech lead for a small team building self-driving trucks in India. This is no small feat: India poses an even greater challenge for self-driving cars than many other parts of the world, due to the complexity of its traffic norms and the quality of its infrastructure. So it’s super-exciting to watch their 10km autonomous trucking run in action.

Check it out.

No matter where you are in the world, if your dream is to contribute to the future of autonomous transportation, then you can acquire the skills you need with Udacity. Check out our School of Autonomous Systems today.

Join Me On Thursday For Our Online Open House For Udacity’s School Of Autonomous Systems!

This Thursday, August 23rd, at 9am Pacific Time, I will be hosting an online open house for Udacity’s School of Autonomous Systems. RSVP now to join me, my Udacity colleagues, our alumni, and other potential students, and learn about our many exciting programs!

The School of Autonomous Systems is the home for Udacity’s Self-Driving Car, Flying Car, Robotics, and Intro to Self-Driving Cars Nanodegree Programs.

At the Open House, I’ll share an overview of each program, compare them, and describe the careers for which each option prepares you.

We’ll finish with a live question-and-answer session, co-hosted by myself and several of my Udacity instructional colleagues, to answer as many of your questions as we can.

RSVP now to join us on Thursday! And don’t worry if you can’t make it! RSVP via the link, and we’ll send you the recording of the open house so you don’t miss out!

Self-Driving Fundamentals: Lesson 1

Discover why our free “Self-Driving Fundamentals: Featuring Apollo” course is such an ideal starting point for anyone new to the field of autonomous systems.

Self-Driving Fundamentals: Featuring Apollo is a terrific, free introduction to how self-driving cars work, through the lens of the Apollo open-source self-driving car project.

It’s perfect for beginners who are in the exploratory phase of their autonomous systems journey, and it does a great job demonstrating how existing basic programming and data skills can be applied to this field.

Upon successfully completing the program, you’ll be ready to combine your newly-acquired self-driving car fundamentals with your existing programming skills to enroll in our Intro to Self-Driving Cars Nanodegree program.

The course begins with an overview of self-driving cars, and how they work. From there, we dive into Apollo and its multi-layer architecture: reference vehicle, reference hardware, open-source software, and cloud services.

Self-Driving Fundamentals: Featuring Apollo is led by the leaders of the Apollo project, Udacity founder Sebastian Thrun, and me!

This is a great opportunity to learn the key parts of self-driving cars, and get to know the Apollo architecture. You’ll utilize Apollo HD maps, localization, perception, prediction, planning and control, and start learning the fundamentals of how to build self-driving cars.

And it’s free! You should learn about self-driving cars with us 🚗

Intro to Self-Driving Cars Webinar

Get the inside scoop on how self-driving cars work, and learn about career opportunities in the autonomous vehicle industry!

A few days ago I hosted a webinar with my colleague Jane Sheppard about how self-driving cars work, and about career opportunities in the autonomous vehicle industry. If you’re interested in self-driving cars, you should watch 😉

I talk about business models in the self-driving car space, run down who’s focused on hardware vs. software, and describe all the players in the arena, from the smallest startups to the biggest global organizations (you wouldn’t believe how much money some of the big companies are putting into this technology!). I specifically discuss Tesla’s consumer sales model, and the impact and involvement of ride-sharing companies—and that’s all before I even get to the core technologies that make self-driving cars possible 🚗

If you want to get the low-down on perception, localization, and planning, just hit play below!

“If sensor fusion and computer vision tell you what the world looks like, localization tells you where you are in that world.”

And if you would like to learn about self-driving cars, I recommend the Udacity Intro to Self-Driving Cars Nanodegree Program (for new programmers) or the Self-Driving Car Engineer Nanodegree Program (for intermediate programmers).

See you in the classroom!

Perception Projects from the Self-Driving Car Nanodegree Program

In these projects, students showcase initiative, creativity, and work ethic, as they build projects focused on topics like perception, deep learning, and computer vision.

Students always tell us that perception, deep learning, and computer vision are some of their favorite topics in the Udacity Self-Driving Car Engineer Nanodegree Program.

Their intense interest in these topics translates directly to the high quality of their work. Today, I’d like to share three especially impressive student projects with you that cover these areas!

Implementing YOLO using ResNet as Feature extractor

Mohammad Atif Khan

I love this project! Mohammad did it on his own, and he went way beyond the requirements of the Nanodegree program to do so. That’s going to serve him well in the long run, because employers love it when talented students get so deep into particular subjects that they start building their own projects to further flesh out their ideas and test their skills.

“In this project I have used a pre-trained ResNet50 network, removed its classifier layers so it becomes a feature extractor, and then added the YOLO classifier layer instead (randomly initialized). I then trained the network on Udacity’s CrowdAI dataset to detect cars in video frames.”
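
If you’re curious what that looks like in code, here is a minimal Keras sketch of the approach Mohammad describes: a pre-trained ResNet50 backbone with its classifier removed, plus a randomly initialized YOLO-style detection head. The input size, box count, and head layers are illustrative placeholders, not his actual architecture or training pipeline.

```python
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications import ResNet50

BOXES = 2    # boxes predicted per grid cell (illustrative)
CLASSES = 1  # a single "car" class, as in the CrowdAI task

# 1. Load ResNet50 without its classifier layers so it acts as a feature extractor.
backbone = ResNet50(weights="imagenet", include_top=False, input_shape=(416, 416, 3))
backbone.trainable = False  # optionally freeze the pre-trained weights at first

# 2. Add a randomly initialized YOLO-style head: each grid cell predicts
#    BOXES * (x, y, w, h, confidence) values plus CLASSES class scores.
x = layers.Conv2D(512, 3, padding="same", activation="relu")(backbone.output)
predictions = layers.Conv2D(BOXES * 5 + CLASSES, 1, padding="same")(x)

model = tf.keras.Model(backbone.input, predictions)
model.summary()  # output shape: (None, 13, 13, 11) for a 416x416 input
```

Training the head on labeled frames (for example, the CrowdAI dataset he mentions) would also require a YOLO-style loss function, which is where most of the real work lives.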

Semantic Segmentation

Enrique Garcia

Enrique used VGG-16 to create a semantic segmentation network for the Advanced Deep Learning project in the Nanodegree program. He trained that network using the KITTI dataset, and then applied the network to scenes he recorded driving in Mexico. Check out his YouTube videos!

“The original FCN-8s was trained in stages. The authors later uploaded a version that was trained all at once to their GitHub repo. The version in the GitHub repo has one important difference: The outputs of pooling layers 3 and 4 are scaled before they are fed into the 1×1 convolutions. As a result, some students have found that the model learns much better with the scaling layers included. The model may not converge substantially faster, but may reach a higher IoU and accuracy.”
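
To make that detail concrete, here is a minimal Keras sketch of FCN-8s-style skip connections on a VGG-16 backbone, with the pool3 and pool4 outputs scaled before their 1×1 convolutions. The input size and class count are illustrative, the 0.0001/0.01 scale factors follow the values commonly used in the “at-once” FCN training, and this is not Enrique’s actual code.

```python
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 2  # e.g. road / not-road for the KITTI task

vgg = VGG16(weights="imagenet", include_top=False, input_shape=(160, 576, 3))
pool3 = vgg.get_layer("block3_pool").output   # 1/8 of the input resolution
pool4 = vgg.get_layer("block4_pool").output   # 1/16
pool5 = vgg.get_layer("block5_pool").output   # 1/32

# Scale the skip connections before their 1x1 convolutions (the detail quoted above).
pool3_scaled = layers.Lambda(lambda t: t * 0.0001)(pool3)
pool4_scaled = layers.Lambda(lambda t: t * 0.01)(pool4)
score3 = layers.Conv2D(NUM_CLASSES, 1)(pool3_scaled)
score4 = layers.Conv2D(NUM_CLASSES, 1)(pool4_scaled)
score5 = layers.Conv2D(NUM_CLASSES, 1)(pool5)

# Upsample and fuse, FCN-8s style, back to the input resolution.
up5 = layers.Conv2DTranspose(NUM_CLASSES, 4, strides=2, padding="same")(score5)
fused4 = layers.Add()([up5, score4])
up4 = layers.Conv2DTranspose(NUM_CLASSES, 4, strides=2, padding="same")(fused4)
fused3 = layers.Add()([up4, score3])
logits = layers.Conv2DTranspose(NUM_CLASSES, 16, strides=8, padding="same")(fused3)

model = tf.keras.Model(vgg.input, logits)
```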

Machine Learning for Vehicle Detection

Moataz Elmasry

Moataz built a vehicle detection pipeline combining histogram of oriented gradients, support vector machines, and sliding window search. I particularly like the heatmap he employs to reduce false positives in vehicle detection. This is a great example of going beyond the steps outlined in the Nanodegree program, to build a truly standout project.

“Now given the simplicity of the SVM model, we expect some detections to be false positives. In order to filter out these incorrect detections, one approach is to threshold our positive windows such that we only pick areas where more than one window overlap. In essence we are generating a heatmap of the positive windows.”
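
Here is a minimal sketch of the heatmap filtering Moataz describes, assuming NumPy and SciPy; the window coordinates and threshold below are illustrative, and this is not his actual code.

```python
import numpy as np
from scipy.ndimage import label

def heatmap_filter(image_shape, hot_windows, threshold=1):
    """Keep only regions where more than `threshold` positive windows overlap."""
    heat = np.zeros(image_shape[:2], dtype=np.float32)

    # Each positive window ((x1, y1), (x2, y2)) adds heat to the pixels it covers.
    for (x1, y1), (x2, y2) in hot_windows:
        heat[y1:y2, x1:x2] += 1.0

    # Reject pixels covered by too few windows -- likely false positives.
    heat[heat <= threshold] = 0

    # Group the surviving pixels into connected components, one per detected vehicle.
    labeled, n_vehicles = label(heat)
    return labeled, n_vehicles

# Two overlapping detections and one isolated (false-positive) detection:
windows = [((100, 400), (200, 480)), ((120, 410), (220, 490)), ((600, 400), (680, 460))]
labeled, n_vehicles = heatmap_filter((720, 1280, 3), windows)
print(n_vehicles)  # -> 1: only the overlapping pair survives the threshold
```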


Udacity practices project-based learning, which means all of our students in all of our Nanodegree programs build projects like these. This approach enables you to learn practical skills, and to build a dynamic portfolio populated with completed projects that clearly showcase your new skills and experience.

If you’re interested in building amazing projects around perception, deep learning, or any other project related to self-driving cars, you should join our Intro to Self-Driving Cars Nanodegree Program, or our Self-Driving Car Engineer Nanodegree Program!

See you in the classroom, and I can’t wait to see what you build!

Udacity Grads in Self-Driving Car Jobs and Beyond

Since we launched our Self-Driving Car Engineer Nanodegree program in late 2016, nearly 2,000 students have completed the program, and more new graduates are joining them every month.

Not all students enroll in the program specifically to find a new job, but many do, and it’s exciting to see a new generation of talent entering this field. With that in mind, we’d like to introduce you to some of these alums!

One of our early graduates, Robert Ioffe, transitioned within his company, Intel, to the role of Senior Self-Driving Car Software Engineer shortly after enrolling in our Nanodegree Program. Since then, Intel has announced its plan to begin testing 100 self-driving cars in Jerusalem, and eventually in the U.S.

“The coolest thing is that everything I learn in the class is immediately applicable to my current job, which is building a self-driving Range Rover with Intel technology inside. It is very rare where you can learn things in class one day, and the next day you can apply it in your work!”

Megha Maheshwari, who immigrated to the U.S. from India, began her career as a software engineer, but ultimately landed a position at Volvo Cars as an Autonomous Driving, Deep Learning, and Computer Vision Engineer. Volvo is investing heavily in fully electric and self-driving vehicles, which many industry analysts believe is part of its plan to go public.

“When I was ending Term One, I felt I had enough knowledge about classical computer vision and deep learning. That’s when I started looking for jobs and not long after looking and applying, I got hired.”

Udacity grads are also launching startups to capitalize on the growing opportunities in the autonomous vehicle market. After earning their Nanodegree credentials, alums David Hayes and Duncan Iglesias formed the Autonomous Vehicle Organization, or AVO, to increase safety and security by focusing on the vehicle-to-pedestrian (V2P) segment.

Using skills developed in the course, David and Duncan built a semi-autonomous Honda Civic in just 11 days!

Fellow alum Patrick Kern also co-founded a startup, with a different focus. Brighter AI was launched to develop “Deep Natural Anonymization” to help companies comply with new General Data Protection Regulations (GDPR) in Europe.

And just last month, we profiled Han Bin Lee, who teamed up with fellow students he met while working on the Udacity-Didi challenge to start Seoul Robotics. Expanding on the lidar perception software they began building during the challenge, the company is already looking for a 3D Vision Researcher and a C++ Software Developer to join their crew in South Korea!

“We’re looking for people with the willingness and ability to learn new concepts and algorithms from the latest research. And we’re a startup, so we need team players, who are able to work effectively within a fast-growing, diverse group of people — we have Korean, Vietnamese, American, and Irish people working with us so far. We are really short of people right now, and we would love to work with fellow Udacity alumni to build this company together!”

Because this is still an emerging field, many of our early students didn’t start out wanting to become Self-Driving Car Engineers. That’s changing rapidly as people are realizing how much opportunity there is in this space, but for lifelong learners like Kyle Martin, the decision to enter this field was the culmination of a really interesting journey:

“I started looking for an industry role while I was still in the program. A lead robotics engineer role appeared with a company that was beginning to work on an autonomous shuttle. I jumped at it and got an interview! They were interested in all the areas I’d been working on — things like computer vision and systems architecture. And they were really impressed I’d kept learning and adding to my skills in the program. When they made me an offer, I said “yes” immediately — it sounded like I’d have the opportunity to work on really groundbreaking projects.”

These are just a few examples of how alumni of our autonomous transportation programs are having an impact on this incredible field. As our alumni network continues to grow, we’re excited to help more students find positions in the industry. Stay tuned!

The New Udacity Self-Driving Car Engineer Nanodegree Program Syllabus

A focus on fundamental skills in each core area of the self-driving car stack.

Over 12,000 students have enrolled in Udacity’s Self-Driving Car Engineer Nanodegree Program, and many of them are now working in the autonomous vehicle industry.

These successes have taught us a great deal about what you need to know in order to accomplish your goals, and to advance your career. In particular, we’ve learned that by narrowing the breadth of the program, and expanding opportunities to go deep in specific areas, we can better offer a path that is expressly tailored to support your career journey.

To that end, we’re updating the curriculum for the program to focus on fundamental skills in each core area of the self-driving car stack. I’d like to share some details with you about this important update, and about the changes we’ve made.

Term 1

Introduction

  1. Welcome
    In our introduction, you’ll begin by meeting your instructors — Sebastian Thrun, Ryan Keenan, and myself. You’ll learn about the systems that comprise a self-driving car, and the structure of the program as a whole.
  2. Workspaces
    Udacity’s new in-browser programming editor moves you straight to programming, and past any challenges related to installing and configuring dependencies.

Computer Vision

  1. Computer Vision Fundamentals
    Here, you’ll use OpenCV image analysis techniques to identify lines, including Hough transforms and Canny edge detection. (A short code sketch of this kind of pipeline appears at the end of this module.)
  2. Project: Detect Lane Lines
    This is really exciting—you’ll detect highway lane lines from a video stream in your very first week in the program!
  3. Advanced Computer Vision
    This is where you’ll explore the physics of cameras, and learn how to calibrate, undistort, and transform images. You’ll study advanced techniques for lane detection with curved roads, adverse weather, and varied lighting.
  4. Project: Advanced Lane Detection

In this project, you’ll detect lane lines in a variety of conditions, including changing road surfaces, curved roads, and variable lighting. You’ll use OpenCV to implement camera calibration and image transforms, as well as apply filters, polynomial fits, and splines.
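
To give you a taste of what these first lessons involve, here is a minimal OpenCV sketch of the basic pipeline (grayscale, Canny edge detection, a region-of-interest mask, then a probabilistic Hough transform). The threshold and geometry values are illustrative, not the course’s reference solution.

```python
import cv2
import numpy as np

def detect_lane_lines(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only a trapezoidal region in front of the car.
    h, w = edges.shape
    mask = np.zeros_like(edges)
    region = np.array([[(0, h), (w // 2 - 50, h // 2 + 60),
                        (w // 2 + 50, h // 2 + 60), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, region, 255)
    masked = cv2.bitwise_and(edges, mask)

    # The probabilistic Hough transform returns candidate line segments.
    lines = cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=100)
    output = bgr_image.copy()
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            cv2.line(output, (x1, y1), (x2, y2), (0, 0, 255), 5)
    return output
```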

Deep Learning

  1. Neural Networks
    Here, you’ll survey the basics of neural networks, including regression, classification, perceptrons, and backpropagation.
  2. TensorFlow
    Next up, you’ll train a logistic classifier using TensorFlow. And, you’ll implement related techniques, such as softmax probabilities and regularization.
  3. Deep Neural Networks
    This is where you’ll combine activation functions, backpropagation, and regularization, all using TensorFlow.
  4. Convolutional Neural Networks
    Next, you’ll study the building blocks of convolutional neural networks, which are especially well-suited to extracting data from camera images. In particular, you’ll learn about filters, stride, and pooling.
  5. Project: Traffic Sign Classifier

For this project, you’ll implement and train a convolutional neural network to classify traffic signs. You’ll use validation sets, pooling, and dropout to design a network architecture and improve performance.

  6. Keras
    This will be your opportunity to build a multi-layer convolutional network in Keras. And, you’ll compare the simplicity of Keras to the flexibility of TensorFlow. (A short Keras sketch appears at the end of this module.)
  7. Transfer Learning
    Here, you’ll fine-tune pre-trained networks to apply them to your own problems. You’ll study canonical networks such as AlexNet, VGG, GoogLeNet, and ResNet.
  8. Project: Behavioral Cloning

For this project, you’ll architect and train a deep neural network to drive a car in a simulator. You’ll collect your own training data, and use it to clone your own driving behavior on a test track.
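
To give a flavor of the Keras lessons and the deep learning projects, here is a minimal sketch of a small convolutional classifier in Keras. The layer sizes are illustrative, and the 43-class output simply mirrors the German Traffic Sign dataset used in the Traffic Sign Classifier project; this is not the course’s reference model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(num_classes=43):
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),  # dropout, as covered in the Traffic Sign project
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier()
model.summary()
```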

Career Development

  1. GitHub
    For this career-focused project, you’ll get support and guidance on how to polish your portfolio of GitHub repositories. Hiring managers and recruiters will often explore your GitHub portfolio before an interview. So it’s important to create a professional appearance, make it easy to navigate, and ensure it showcases the full measure of your skills and experience.

Sensor Fusion

Our terms are broken out into modules, each of which comprises a series of focused lessons. This Sensor Fusion module is built with our partners at Mercedes-Benz. The team at Mercedes-Benz is amazing. They are world-class automotive engineers applying autonomous vehicle techniques to some of the finest vehicles in the world. They are also Udacity hiring partners, which means the curriculum we’ve developed is expressly designed to nurture and advance the kind of talent they’re eager to hire!

  1. Sensors
    The first lesson of the Sensor Fusion Module covers the physics of two of the most important sensors on an autonomous vehicle — radar and lidar.
  2. Kalman Filters
    Kalman filters are a key mathematical tool for fusing together data. You’ll implement these filters in Python to combine measurements from a single sensor over time. (A short one-dimensional sketch appears at the end of this module.)
  3. C++ Checkpoint
    This is a chance to test your C++ knowledge and evaluate your readiness for the upcoming projects.
  4. Geometry and Trigonometry
    Before advancing further, you’ll refresh your knowledge of the fundamental geometric and trigonometric functions that are necessary to model vehicular motion.
  5. Extended Kalman Filters
    Extended Kalman Filters (EKFs) are used by autonomous vehicle engineers to combine measurements from multiple sensors into a non-linear model. First, you’ll learn the physics and mathematics behind vehicular motion. Then, you’ll combine that knowledge with an extended Kalman filter to estimate the positions of other vehicles on the road.
  6. Project: Extended Kalman Filters in C++

For this project, you’ll use data from multiple sensors to track a vehicle’s motion, and estimate its location with precision. Building an EKF is an impressive skill to show an employer.
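
As promised above, here is a minimal one-dimensional Kalman filter sketch in Python: alternate a measurement update with a motion prediction for a single sensor. The measurement and noise values are illustrative, not the lesson’s exercise data.

```python
def measurement_update(mean, var, z, z_var):
    """Fuse a new measurement z (with variance z_var) into the belief."""
    new_mean = (z_var * mean + var * z) / (var + z_var)
    new_var = 1.0 / (1.0 / var + 1.0 / z_var)
    return new_mean, new_var

def motion_prediction(mean, var, u, u_var):
    """Shift the belief by motion u, adding motion uncertainty u_var."""
    return mean + u, var + u_var

# Track a 1-D position from noisy measurements and known motions.
measurements = [5.0, 6.1, 7.0, 8.2, 9.0]
motions = [1.0, 1.0, 1.0, 1.0, 1.0]
mean, var = 0.0, 1000.0  # start almost completely uncertain

for z, u in zip(measurements, motions):
    mean, var = measurement_update(mean, var, z, z_var=4.0)
    mean, var = motion_prediction(mean, var, u, u_var=2.0)
    print(f"belief: mean={mean:.2f}, var={var:.2f}")
```

The extended Kalman filter in the project applies the same update/predict cycle, but with a non-linear motion model and multiple sensors.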

Term 2

Localization

This module is also built with our partners at Mercedes-Benz, who employ cutting-edge localization techniques in their own autonomous vehicles. Together we show students how to implement and use foundational algorithms that every localization engineer needs to know.

  1. Introduction to Localization
    In this intro, you’ll study how motion and probability affect your understanding of where you are in the world.
  2. Markov Localization
    Here, you’ll use a Bayesian filter to localize the vehicle in a simplified environment.
  3. Motion Models
    Next, you’ll learn basic models for vehicle movements, including the bicycle model. You’ll estimate the position of the car over time given different sensor data.
  4. Particle Filter
    Next, you’ll use a probabilistic sampling technique known as a particle filter to localize the vehicle in a complex environment.
  5. Implementation of a Particle Filter
    To prepare for your project, you’ll implement a particle filter in C++. (A Python sketch of the core loop appears at the end of this module.)
  6. Project: Kidnapped Vehicle

For your actual project, you’ll implement a particle filter to take real-world data and localize a lost vehicle.
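
As noted above, here is a Python sketch of the core predict / weight / resample loop of a particle filter. The Kidnapped Vehicle project itself is written in C++ against real map data; the single landmark and noise values below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
landmark = np.array([50.0, 50.0])  # one known landmark (x, y)

# 1. Initialize particles roughly around a GPS-like first estimate.
particles = rng.normal(loc=[10.0, 10.0], scale=5.0, size=(N, 2))

def step(particles, control, range_measurement):
    # 2. Predict: move every particle by the control, plus process noise.
    particles = particles + control + rng.normal(scale=0.5, size=particles.shape)

    # 3. Weight: particles whose predicted range to the landmark matches the
    #    measurement get higher weight (Gaussian measurement model).
    predicted_range = np.linalg.norm(particles - landmark, axis=1)
    weights = np.exp(-0.5 * ((predicted_range - range_measurement) / 2.0) ** 2)
    weights /= weights.sum()

    # 4. Resample: draw particles in proportion to their weights.
    idx = rng.choice(N, size=N, p=weights)
    return particles[idx]

particles = step(particles, control=np.array([1.0, 0.5]), range_measurement=55.0)
print(particles.mean(axis=0))  # the filter's position estimate
```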

Planning

  1. Search
    First, you’ll learn to search the environment for paths to navigate the vehicle to its goal.
  2. Prediction
    Then, you’ll estimate where other vehicles on the road will be in the future, utilizing both models and data.
  3. Behavior Planning
    Next, you’ll model your vehicle’s behavior choices using a finite state machine. You’ll construct a cost function to determine which state to move to next. (A short sketch of this pattern appears at the end of this module.)
  4. Trajectory Generation
    Here, you’ll sample the motion space, and optimize a trajectory for the vehicle to execute its behavior.
  5. Project: Highway Driving

For your project, you’ll program a planner to navigate your vehicle through traffic on a highway. Pro tip: Make sure you adhere to the speed, acceleration, and jerk constraints!
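
Here is a minimal sketch of the finite-state-machine-plus-cost-function pattern from the Behavior Planning lesson: enumerate the legal successor states, score each with a cost function, and transition to the cheapest one. The states, weights, and cost terms are illustrative, not the course’s reference implementation.

```python
SUCCESSORS = {
    "KEEP_LANE":              ["KEEP_LANE", "PREP_LANE_CHANGE_LEFT", "PREP_LANE_CHANGE_RIGHT"],
    "PREP_LANE_CHANGE_LEFT":  ["PREP_LANE_CHANGE_LEFT", "LANE_CHANGE_LEFT", "KEEP_LANE"],
    "PREP_LANE_CHANGE_RIGHT": ["PREP_LANE_CHANGE_RIGHT", "LANE_CHANGE_RIGHT", "KEEP_LANE"],
    "LANE_CHANGE_LEFT":       ["KEEP_LANE"],
    "LANE_CHANGE_RIGHT":      ["KEEP_LANE"],
}

def cost(state, lane_speeds, current_lane, target_speed):
    """Penalize slow lanes and unnecessary lane changes."""
    lane = current_lane
    if "LEFT" in state:
        lane = max(0, current_lane - 1)
    elif "RIGHT" in state:
        lane = min(len(lane_speeds) - 1, current_lane + 1)
    speed_cost = (target_speed - lane_speeds[lane]) / target_speed
    change_cost = 0.1 if state != "KEEP_LANE" else 0.0
    return speed_cost + change_cost

def next_state(current_state, lane_speeds, current_lane, target_speed=50.0):
    candidates = SUCCESSORS[current_state]
    return min(candidates, key=lambda s: cost(s, lane_speeds, current_lane, target_speed))

# Example: the ego lane is slow (40 mph) and the left lane is free-flowing.
print(next_state("KEEP_LANE", lane_speeds=[50.0, 40.0, 45.0], current_lane=1))
# -> "PREP_LANE_CHANGE_LEFT"
```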

Control

  1. Control
    You’ll begin by building control systems to actuate a vehicle to move it along a path. (A short PID sketch appears after this module’s project description.)
  2. Project: PID Control

Then, you’ll implement the classic closed-loop controller — a proportional-integral-derivative control system.
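
And here is a minimal Python sketch of a PID controller of the kind the project asks for (the project itself is in C++). The gains below are illustrative; tuning them is most of the fun.

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.prev_error = 0.0
        self.integral = 0.0

    def update(self, cte, dt):
        """Return a steering command from the cross-track error (cte)."""
        self.integral += cte * dt
        derivative = (cte - self.prev_error) / dt
        self.prev_error = cte
        # Combine the proportional, integral, and derivative terms.
        return -(self.kp * cte + self.ki * self.integral + self.kd * derivative)

# Example: respond to a constant 0.5 m cross-track error over three time steps.
controller = PID(kp=0.2, ki=0.004, kd=3.0)
for _ in range(3):
    steering = controller.update(cte=0.5, dt=0.1)
    print(round(steering, 3))
```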

Career Development

  1. Build Your Online Presence
    Here, you’ll continue to develop your professional brand, with the goal of making it easy for employers to understand why you are the best candidate for their job.

System Integration

  1. Autonomous Vehicle Architecture
    Get ready! It’s time to learn the system architecture of Carla, Udacity’s own self-driving car!
  2. Introduction to ROS
    Here, you’ll navigate Robot Operating System (ROS) to send and receive messages, and perform basic commands.
  3. Packages & Catkin Workspaces
    Next, you’ll create and prepare a ROS package so that you are ready to deploy code on Carla.
  4. Writing ROS Nodes
    Then, you’ll develop ROS nodes to perform specific vehicle functions, like image classification or motion control. (A minimal node sketch appears at the end of this module.)
  5. Project: Program an Autonomous Vehicle

Finally, for your last project, you’ll deploy your team’s code to Carla, a real self-driving car, and see how well it drives around the test track!

  6. Graduation
    Congratulations! You did it!
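
For readers wondering what a ROS node actually looks like, here is a minimal rospy sketch of the publish/subscribe pattern covered in the Writing ROS Nodes lesson. The topic names and the toy throttle rule are illustrative placeholders, not Carla’s real interfaces.

```python
#!/usr/bin/env python
# Minimal rospy publish/subscribe sketch; topics and the "control law" are
# illustrative placeholders, not Carla's real interfaces.
import rospy
from std_msgs.msg import Float64

class ThrottleRelay(object):
    """Subscribe to a target speed and publish a (toy) throttle command."""
    def __init__(self):
        rospy.init_node("throttle_relay")
        self.pub = rospy.Publisher("/vehicle/throttle_cmd", Float64, queue_size=1)
        rospy.Subscriber("/vehicle/target_speed", Float64, self.on_target_speed)

    def on_target_speed(self, msg):
        # A placeholder control law: throttle proportional to the target speed.
        throttle = min(1.0, max(0.0, msg.data / 40.0))
        self.pub.publish(Float64(throttle))

if __name__ == "__main__":
    ThrottleRelay()
    rospy.spin()  # keep the node alive, processing callbacks
```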

By structuring our curriculum in this way, we’re able to offer you the opportunity to master critical skills in each core area of the self-driving car stack. You’ll establish the core foundations necessary to launch or advance your career, while simultaneously preparing yourself for more specialized and advanced study.

Ready? Let’s drive!

Comparing Udacity’s Self-Driving Car Programs

Which of Udacity’s two Self-Driving Car programs is right for you? This post will show you how to make the right choice.

Udacity has two excellent Nanodegree Programs for aspiring self-driving car engineers: the Self-Driving Car Engineer Nanodegree program, and the Intro to Self-Driving Cars Nanodegree program.

Which one is right for you?

To try and answer this question, I’ll begin with a story. In October of 2016, Udacity welcomed the first class of students into our Self-Driving Car Engineer Nanodegree program. Since that historic debut, we have been delighted to enroll over 11,000 students around the world in this program!

Along the way, we learned that while people across the globe were thrilled at the prospect of being able to work on autonomous vehicles, not all of them were equipped to do so—many of them needed additional training to get ready for the rigors and challenges of our curriculum.

In order to provide a viable point-of-entry for these eager learners, we built the Intro to Self-Driving Cars Nanodegree Program, and welcomed the first class of students at the end of 2017. This “Intro” program prepares students with the fundamentals in Python, C++, calculus, linear algebra, statistics, and physics that are necessary to become a Self-Driving Car Engineer.

Both Nanodegree programs are paths to a career in the self-driving car field, but the goals of each program are distinct, as are the skills one learns.

The Self-Driving Car Engineer (SDC) Nanodegree program is an advanced program in which students write programs in Python and C++, and learn new frameworks like ROS and TensorFlow. Students entering SDC should be able to write programs from scratch, and should be comfortable with both calculus and linear algebra. SDC does not require solving differential equations by hand, but does require that students be comfortable interpreting mathematical notation and translating it into code.

The Intro to Self-Driving Cars (iSDC) Nanodegree program is an intermediate program that requires entering students to have only minimal programming and math knowledge. Students entering iSDC should be comfortable reading and modifying code in at least one language (Python helps, since that is the first language the program uses). Entering students should also be comfortable with high-school algebra. From there, iSDC teaches the trigonometry, calculus, linear algebra, statistics, and physics that are necessary to succeed in the advanced SDC program.

iSDC does not require an application to enroll, and everybody is welcome. However, students with no programming experience at all might consider starting their journey with Udacity’s Intro to Programming Nanodegree program, and then proceeding on to Intro to Self-Driving Cars. A slightly more mathematical (and more challenging) alternative first step would be Udacity’s Data Analyst Nanodegree Program.

Whether you are ready for the Self-Driving Car Engineer Nanodegree program today or feel like you should cover the topics in Intro to Self-Driving Cars first, Udacity is the place to start on the road to becoming a Self-Driving Car Engineer. See you in the classroom!