Self-Driving Cars the World Over

As self-driving cars move closer and closer to reality, we’re seeing more and more places around the world where people are working on them.

Some of these efforts are big. Some are small but growing. More will come.

It’s an exciting time to be in the business.

Level 3: The Audi A8

Audi has announced Level 3 autonomous driving functionality in the upcoming 2018 A8 model. This would make Audi the first car manufacturer ever to release a Level 3 vehicle.

As a brief recap, the Society of Automotive Engineers defines six levels of driving automation, from Level 0 (no automation) through Level 5. The five automated levels are:

Level 1 — Driver Assistance: The driving mode-specific execution by a driver assistance system of either steering or acceleration/deceleration using information about the driving environment and with the expectation that the human driver performs all remaining aspects of the dynamic driving task

Level 2 — Partial Automation: The driving mode-specific execution by one or more driver assistance systems of both steering and acceleration/deceleration using information about the driving environment and with the expectation that the human driver performs all remaining aspects of the dynamic driving task

Level 3 — Conditional Automation: The driving mode-specific performance by an Automated Driving System of all aspects of the dynamic driving task with the expectation that the human driver will respond appropriately to a request to intervene

Level 4 — High Automation: The driving mode-specific performance by an Automated Driving System of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene

Level 5 — Full Automation: The full-time performance by an Automated Driving System of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver

The controversial phrase in the Level 3 definition is:

“with the expectation that the human driver will respond appropriately to a request to intervene”

Some companies — most notably Google and Ford — contend that it’s not realistic to tell human drivers that they can divert their attention and then expect them to intervene quickly enough to avert an accident.

Audi seems more confident about human drivers, although they are rolling their system out slowly, presumably in an effort to better test and verify the car and the drivers.

Full Level 3 autonomous driving will be limited to divided-highway scenarios at under 60 km/h (~35 mph). Basically, traffic jam driving. Which is the worst type of driving, so I look forward to the day when the computer takes that over in my own car.

The 2018 Audi A8 isn’t actually on the market yet, although it should be soon, and its price will start at €90,600 (~US$103,000). Definitely a luxury vehicle, and an exciting one.

SynCity Simulator

My colleague Aaron pointed me toward a YouTube video for what looks like a pretty awesome photorealistic simulator called SynCity. It’s built by the artificial intelligence company CVEDIA, in Holland.

The photorealism of the simulator takes it to a whole new level beyond comparable simulators I’ve seen previously. At least, judging by what’s shown in the YouTube video 😉

This is the dream of autonomous vehicle simulators — that we’ll be able to take data derived from the simulator and transfer it to the real world. Particularly for computer vision, the closer the simulator looks to reality, the more likely that transfer is to succeed.

CVEDIA appears to be previewing the simulator now, and I’m not sure when it will hit production release. Keep an eye out.

NDT Matching

In the final project of the Udacity Self-Driving Car Nanodegree Program students build code to drive Udacity’s very own self-driving car.

As with almost any type of computer programming, however, we’re not starting from scratch. There are existing operating systems, middleware, and libraries that students get to build on to drive the car.

One of these libraries is Autoware, which is an open-source self-driving car library maintained by Tier IV. We use Autoware particularly for its localization functions, which use our lidar data and a high-definition lidar map to figure out where our vehicle is in the world.

The specific localization algorithm that Autoware uses is called normal distributions transform (NDT) matching, which was originally developed by Peter Biber at the University of Tübingen. NDT is a little different from the particle filter localization we’ve worked with previously, so I’ve spent time over the last few days reviewing how it works.

Localization

In order to figure out where we are in the world, we’ll probably use a map. There’s a whole branch of localization called simultaneous localization and mapping (SLAM), in which we build the map and localize within it at the same time, but that’s hard. It’s easier just to have a map, so we’ll assume we have one.

This is a lidar point cloud map of the Udacity parking lot, tilted at an angle.

In order to figure out where we are in the world, we take our own lidar scan and compare what we see to this map. You can basically imagine that we line up points and try to figure out: given what our current laser scan shows, where are we in this map?

One problem: our points will probably be a little off from the map. Measurement errors will cause points to be slightly misaligned, plus the world might change a little between when we record the map and when we make our new scan.

NDT matching provides a solution for these minor errors. Instead of trying to match points from our current scan to points on the map, we try to match points from our current scan to a grid of probability functions created from the map.

A probability density function.

We break the point cloud map into three-dimensional boxes and essentially assign a probability distribution to each box. The image above is actually a 2D probability function, but we can make a 3D function following the same principles.

This way, if we detect a point a few millimeters away from where the map thinks a point should be, instead of being completely unable to match those two points, our NDT matching function connects our detected point to the probability function on the map. There’s a kind of “near match”.
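To make that concrete, here is a rough Python sketch of the scoring idea, under the assumption that we bucket the map into one-meter voxels and fit a Gaussian to each. This is the general NDT idea, not Autoware’s actual code:

```python
# A minimal sketch of NDT scoring (not Autoware's implementation).
# Map points are bucketed into voxels; each voxel stores a mean and covariance,
# and scan points are scored against the Gaussian of the voxel they land in.
import numpy as np

VOXEL_SIZE = 1.0  # meters; hypothetical cell size

def build_ndt_grid(map_points):
    """map_points: Nx3 array. Fit a Gaussian (mean, covariance) per voxel."""
    buckets = {}
    for p in map_points:
        key = tuple(np.floor(p / VOXEL_SIZE).astype(int))
        buckets.setdefault(key, []).append(p)
    ndt = {}
    for key, pts in buckets.items():
        pts = np.array(pts)
        if len(pts) >= 5:  # need enough points for a stable covariance
            ndt[key] = (pts.mean(axis=0), np.cov(pts.T) + 1e-6 * np.eye(3))
    return ndt

def score_pose(scan_points, ndt, rotation, translation):
    """Higher score = the transformed scan agrees better with the map's Gaussians."""
    score = 0.0
    for p in scan_points:
        q = rotation @ p + translation  # candidate pose applied to a scan point
        key = tuple(np.floor(q / VOXEL_SIZE).astype(int))
        if key in ndt:
            mean, cov = ndt[key]
            d = q - mean
            score += np.exp(-0.5 * d @ np.linalg.solve(cov, d))
    return score
```

A real NDT matcher would then optimize the pose, typically with Newton’s method, to maximize this score rather than just evaluating a single candidate transform.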

For anybody who’s taken Udacity’s lessons on particle filters, or studied them elsewhere, there is a whole separate issue of Monte Carlo randomization that particle filters use. It seems like that could be applied to NDT matching in pretty much the same fashion, and indeed there is a paper called “Normal distributions transform Monte-Carlo localization (NDT-MCL)” by Saarinen, et al. that seems to work out the details, although I haven’t gone through it thoroughly.

Five Different Udacity Student Controllers

The sixth month of the Udacity Self-Driving Car Engineer Nanodegree Program teaches students about control. Control is how we actually turn the steering wheel or press the pedals to get the car to follow a trajectory, and the algorithms that perform this work are called “controllers”.

Two of the most common controllers for automotive applications are proportional-integral-derivative (PID) controllers and model predictive controllers (MPC). These are the two controllers we teach in the Udacity program.

Here are five different approaches Udacity students have taken to build controllers that drive the Udacity self-driving car around our simulator!

PID controller, self driving car

Andrey Glushko

Andrey’s YouTube video simply mentions that his PID controller automatically learns hyperparameters. It looks like Andrey ran his car around the track multiple times and used some version of coordinate descent (the formal name for Sebastian’s Twiddle algorithm) to automatically tune the parameters.

“Implemented PID controller with automatically learned hyperparameters in C++ which allows the car learns to drive itself from scratch in the simulator.”
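For reference, here’s a rough Python sketch of what a Twiddle-style tuning loop looks like. This isn’t Andrey’s code; run_track is a hypothetical function that drives a lap with a given set of gains and returns the accumulated error:

```python
# A minimal sketch of Twiddle (coordinate descent) for tuning PID gains.
# run_track(params) is a hypothetical helper: drive a lap with [Kp, Ki, Kd]
# and return the accumulated error over that lap.
def twiddle(run_track, tolerance=0.001):
    params = [0.0, 0.0, 0.0]          # [Kp, Ki, Kd]
    deltas = [1.0, 1.0, 1.0]          # how far to nudge each gain
    best_error = run_track(params)
    while sum(deltas) > tolerance:
        for i in range(len(params)):
            params[i] += deltas[i]
            error = run_track(params)
            if error < best_error:    # improvement: keep it and grow the step
                best_error = error
                deltas[i] *= 1.1
            else:                     # try the other direction
                params[i] -= 2 * deltas[i]
                error = run_track(params)
                if error < best_error:
                    best_error = error
                    deltas[i] *= 1.1
                else:                 # neither direction helped: shrink the step
                    params[i] += deltas[i]
                    deltas[i] *= 0.9
    return params
```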

Autonomous Driving using Predictive Control Model

Anupriya Chhabra

Anupriya talks about a number of the complications she encountered in developing her model predictive controller, and how she overcame them. Many of these are similar to what autonomous vehicle engineers find when deploying controllers to actual vehicles.

“This project also factors in real world latency that can occur while applying actuator inputs. To simulate this the project’s main thread sleeps for 100ms before sending the actuations to simulator. To account for this while returning the actuations to simulator I use 2 set of actuations — the real actuations for next step and the next predicted actuation after dt which is 0.1 second(100 ms) in my case. Sending the sum of these 2 actuations makes the model proactively apply the next actuation and hence handles the 100ms latency.”

Steering Control for self-driving car

Priya Dwivedi

Priya provides a thorough walkthrough of how she built her model predictive controller. If you’re interested in a line-by-line breakdown of how MPC works, this is a great read.

“To estimate the ideal steering angle and throttle, we estimate the error of our new state from our ideal state — actual trajectory we want to follow and the velocity and orientation we want to maintain and use a Ipopt Solver to minimize this error. This helps select the steering and throttle that minimizes the error with the desired trajectory.”

Self-Driving Car Engineer Diary — 10

Andrew Wilkie

Andrew compares and contrasts PID and MPC, along with a brief review of Term 2 of the Nanodegree Program. He does a nice job of summarizing the different levels of fidelity at which you can build a controller.

“PID controller enables the car (robot) to follow some trajectory (x-axis reference line) while proportionally (how hard to correct steering), differentially (how gradually to return to the reference) and integrally (allow for wheel misalignment) applying the cross track error (CTE). It is simple to code, inexpensive to run (computationally) and takes little tuning effort to get something working. The down-side is that the car moves erratically and cannot accurately handle actuation latency (delay between command send and physical activation) .”
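As a quick illustration of the mechanics Andrew describes, here’s a bare-bones PID steering update in Python. The gains are placeholders, not values from his project:

```python
# A bare-bones PID steering update driven by cross track error (CTE).
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.prev_cte = 0.0
        self.int_cte = 0.0

    def steer(self, cte, dt):
        diff_cte = (cte - self.prev_cte) / dt   # how fast the error is changing
        self.int_cte += cte * dt                # accumulated error (e.g., wheel misalignment)
        self.prev_cte = cte
        # Negative gains push the car back toward the reference line.
        return -self.kp * cte - self.kd * diff_cte - self.ki * self.int_cte

controller = PID(kp=0.2, ki=0.004, kd=3.0)      # illustrative values only
steering_angle = controller.steer(cte=0.5, dt=0.02)
```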

Udacity Self-Driving Car Nanodegree Project 10 — Model Predictive Control

Jeremy Shannon

I enjoy the musical selections Jeremy picks to underscore his project submissions, and this one is a lot of fun. The blog post also provides a nice perspective of what it’s like to complete the Udacity MPC project as a student.

“After some debugging and tuning the cost function, my car was making its way around the track. It was time to tear it all down by adding the latency — and that’s just what happened. My approach to dealing with it was twofold (not counting simply limiting the speed): the original kinematic equations depend upon the actuations from the previous time step, but with a delay of 100ms (which happened to be my time step interval) the actuations are applied another time step later, so I altered the equations to account for this.”
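For readers who want to see the shape of this kind of latency handling, here’s a small Python sketch of one common approach: propagate the current state forward by the latency with a kinematic bicycle model before handing it to the MPC solver. This isn’t Jeremy’s exact method, and the constants are illustrative:

```python
# Handle actuation latency by predicting where the car will be when the
# actuations actually take effect, then solving the MPC from that state.
import math

LF = 2.67        # distance from front axle to center of gravity (illustrative)
LATENCY = 0.1    # seconds of actuation delay

def predict_state(x, y, psi, v, steering, throttle, dt=LATENCY):
    """Advance the state by dt, assuming the current actuations stay constant."""
    x_new = x + v * math.cos(psi) * dt
    y_new = y + v * math.sin(psi) * dt
    psi_new = psi - v / LF * steering * dt   # steering sign depends on the simulator's convention
    v_new = v + throttle * dt
    return x_new, y_new, psi_new, v_new

# Example: 20 m/s with a slight steer held constant over the 100 ms delay.
print(predict_state(x=0.0, y=0.0, psi=0.0, v=20.0, steering=0.05, throttle=0.3))
```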

Self-Driving Police Cars

A company called Otsaw has signed a partnership with Dubai to deploy self-driving police robots this year.

The robots are “the size of a child’s toy car” and come equipped with a drone that can track suspects in areas where the car can’t drive.

It’s unclear how real this is, as I haven’t heard of Singapore-based Otsaw before and their website is currently down.

The technology honestly seems a little out there right now, but it might be perfectly plausible in a year or two or three. Even today, robots surveil shopping malls.

To me, the most interesting aspect of this is how self-driving cars affect the balance of power between citizens, criminals, and the police.

I know less about Dubai, but in the US, police often use pretextual traffic stops to investigate more serious crimes. Self-driving cars might take away the pretext for a traffic stop, tipping the balance of power away from the police. Self-driving police cars with self-flying drones might tip that balance back.

Startup Watch: Torc Robotics

I was raised in Virginia, so I have a strong interest in anything Virginia-based.

Torc Robotics is a Virginia-based startup with extensive roots in autonomous vehicle research. The team is a spin-out from Virginia Tech that placed third in the 2007 DARPA Urban Challenge. Like many Virginia-based companies, Torc has done extensive defense contract work, but with the self-driving car boom, Torc is returning to its autonomous vehicle roots.

Udacity students often ask if it’s possible to work on self-driving cars outside of Michigan, Germany, and the Bay Area. While those are the current centers of autonomous vehicle development, there are lots of companies in unexpected places around the world working on self-driving cars.

If you’d like to work on self-driving cars and live in Blacksburg, Virginia (which sounds pretty nice to me), Torc is there for you.

A Comparison of Self-Driving Sensors

About six months ago, when we were working with Mercedes-Benz on the Sensor Fusion Module of the Udacity Self-Driving Car Nanodegree Program, I was looking online for a concise and comprehensive comparison of sensor types.

I couldn’t find what I was looking for, so I sat down and sketched out a table myself.

I came across that table recently when I was cleaning up my desk, so I threw it into Google Slides and here it is.

I’m not positive I got every cell in this table correct, as I never ran it by the Mercedes experts. So if you see something wrong here, let me know.

But maybe this will be useful to somebody looking for that same comparison table that I never found. And if you know where a better version of that table is, please mention it in the comments.

Term 3: In-Depth on Udacity’s Self-Driving Car Curriculum

Update: Udacity has a new self-driving car curriculum! The post below is now out-of-date, but you can see the new syllabus here.

In just a few days, we’re going to begin releasing Term 3 of the Udacity Self-Driving Car Engineer Nanodegree Program, and we could not be more excited! This is the final term of a nine-month Nanodegree program that covers the entire autonomous vehicle technology stack, and as such, it’s the culmination of an educational journey unlike any other in the world.

When you complete Term 3 and graduate from this program, you will emerge with an amazing portfolio of projects that will enable you to launch a career in the autonomous vehicle industry, and you will have gained experience and skills that are virtually impossible to acquire anywhere else. Some of our earliest students, like George Sung, Robert Ioffe, and Patrick Kern, have already started their careers in self-driving cars, and we’re going to help you do the same!

Term 3

This term is three months long, and features a different module each month.

The first month focuses on path planning, which is basically the brains of a self-driving car. This is how the vehicle decides where to go and how to get there.

The second month presents an opportunity to specialize with an elective; this is your chance to delve deeply into a particular topic, and emerge with a unique degree of expertise that could prove to be a key competitive differentiator when you enter the job market. We want your profile to stand out to prospective employers, and specialization is a great way to achieve this.

The final month is truly an Only At Udacity experience. In this System Integration Module, you will get to put your code on Udacity’s very own self-driving car! You’ll get to work with a team of students to test out your skills in the real world. We know firsthand from our hiring partners in the autonomous vehicle space that this is one of the things they value most in Udacity candidates: the combination of software skills and real-world experience.


Month 1: Path Planning

Path planning is the brains of a self-driving car. It’s how a vehicle decides how to get where it’s going, both at the macro and micro levels. You’ll learn about three core components of path planning: environmental prediction, behavioral planning, and trajectory generation.

Best of all, this module is taught by our partners at Mercedes-Benz Research & Development North America. Their participation ensures that the module focuses specifically on the material that job candidates in this field need to know.

Path Planning Lesson 1: Environmental Prediction

In the Prediction Lesson, you’ll use model-based, data-driven, and hybrid approaches to predict what other vehicles around you will do next. Model-based approaches decide which of several distinct maneuvers a vehicle might be undertaking. Data-driven approaches use training data to map a vehicle’s behavior to what we’ve seen other vehicles do in the past. Hybrid approaches combine models and data to predict where other vehicles will go next. All of this is crucial for making our own decisions about how to move.
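To give a flavor of the data-driven side, here’s a toy Gaussian Naive Bayes predictor in Python. It isn’t the lesson’s reference implementation; the feature choices (lane-relative position and lateral speed) are just an example:

```python
# A toy data-driven maneuver predictor: fit one Gaussian per feature per maneuver
# from logged driving data, then pick the maneuver that best explains a new observation.
import numpy as np

class NaiveBayesPredictor:
    def fit(self, features, labels):
        """features: NxD array of observations, labels: N maneuver names."""
        self.classes = sorted(set(labels))
        self.stats = {}
        for c in self.classes:
            rows = features[np.array(labels) == c]
            self.stats[c] = (rows.mean(axis=0), rows.std(axis=0) + 1e-6)

    def predict(self, x):
        """Return the maneuver whose per-feature Gaussians best explain x."""
        best, best_logp = None, -np.inf
        for c in self.classes:
            mean, std = self.stats[c]
            logp = np.sum(-0.5 * ((x - mean) / std) ** 2 - np.log(std))
            if logp > best_logp:
                best, best_logp = c, logp
        return best
```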

Path Planning Lesson 2: Behavior Planning

At each step in time, the path planner must choose a maneuver to perform. In the Behavior Lesson, you’ll build finite-state machines to represent all of the different possible maneuvers your vehicle could choose. Your FSMs might include accelerate, decelerate, shift left, shift right, and continue straight. You’ll then construct a cost function that assigns a cost to each maneuver, and choose the lowest-cost option.
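As a rough sketch of how an FSM and a cost function fit together, here’s a minimal Python example. The transitions, lane speeds, and weights are made-up placeholders, not the lesson’s values:

```python
# Cost-based behavior selection over a small set of maneuvers.
TRANSITIONS = {
    "keep_lane":    ["keep_lane", "change_left", "change_right"],
    "change_left":  ["keep_lane", "change_left"],
    "change_right": ["keep_lane", "change_right"],
}

def maneuver_cost(maneuver, lane_speeds, current_lane, target_speed):
    """Weighted penalties: slow lanes cost a lot, lane changes cost a little."""
    lane = current_lane
    if maneuver == "change_left":
        lane -= 1
    elif maneuver == "change_right":
        lane += 1
    if lane < 0 or lane >= len(lane_speeds):
        return float("inf")                        # can't leave the road
    speed_cost = target_speed - lane_speeds[lane]  # slower lane = higher cost
    change_cost = 0.0 if maneuver == "keep_lane" else 1.0
    return 10.0 * speed_cost + 1.0 * change_cost

def choose_maneuver(state, lane_speeds, current_lane, target_speed):
    candidates = TRANSITIONS[state]
    return min(candidates,
               key=lambda m: maneuver_cost(m, lane_speeds, current_lane, target_speed))

# Example: three lanes moving at 20, 15, and 22 m/s; we're in the middle lane.
print(choose_maneuver("keep_lane", [20.0, 15.0, 22.0], current_lane=1, target_speed=22.0))
```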

Path Planning Lesson 3: Trajectory Generation

Trajectory Generation is taught by Emmanuel Boidot, from Mercedes-Benz’s Vehicle Intelligence team.

In the Trajectory Lesson, you’ll use C++ and the Eigen linear algebra library to build candidate trajectories for the vehicle to follow. Some of these trajectories might be unsafe, others might simply be uncomfortable. Your cost function will guide you to the best available trajectory for the vehicle to execute.
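One widely used trajectory generation technique is the jerk-minimizing quintic polynomial. Here’s a NumPy sketch of the one-dimensional version; the course projects use C++ and Eigen, but the small linear system is the same:

```python
# Jerk-minimizing quintic polynomial between a start and end state in one dimension.
import numpy as np

def jerk_minimizing_trajectory(start, end, T):
    """start/end = [position, velocity, acceleration]; returns six polynomial coefficients."""
    a0, a1, a2 = start[0], start[1], start[2] / 2.0
    A = np.array([
        [T**3,     T**4,      T**5],
        [3 * T**2, 4 * T**3,  5 * T**4],
        [6 * T,    12 * T**2, 20 * T**3],
    ])
    b = np.array([
        end[0] - (a0 + a1 * T + a2 * T**2),
        end[1] - (a1 + 2 * a2 * T),
        end[2] - 2 * a2,
    ])
    a3, a4, a5 = np.linalg.solve(A, b)
    return [a0, a1, a2, a3, a4, a5]

# Example: go from s=0 at 10 m/s to s=45 m at 12 m/s over 4 seconds.
coeffs = jerk_minimizing_trajectory([0, 10, 0], [45, 12, 0], T=4.0)
```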

Path Planning Project: Highway Path Planner

Using the newest release of the Udacity simulator, you’ll build your very own path planner and put it to the test on the highway. Tie together your prediction, behavior, and trajectory engines from the previous lessons to create an end-to-end path planner that drives the car in traffic!

Month 2: Electives

Term 3 will launch with two electives: Advanced Deep Learning, and Functional Safety. We’ve selected these based on feedback from our hiring partners, and we’re very excited to give students the opportunity to gain deep knowledge in these topics.

Month 2 Elective: Advanced Deep Learning

Udacity has partnered with the NVIDIA Deep Learning Institute to build an advanced course on deep learning.

This module covers semantic segmentation and inference optimization. Both of these topics are active areas of deep learning research.

Semantic segmentation identifies free space on the road at pixel-level granularity, which improves decision-making ability. Inference optimizations accelerate the speed at which neural networks run, which is crucial for computationally intensive models like the semantic segmentation networks you’ll study in this module.

Advanced Deep Learning Lesson 1: Fully Convolutional Networks

In this lesson, you’ll build and train fully convolutional networks that output an entire image, instead of just a classification. You’ll implement the three special techniques that FCNs use (1×1 convolutions, upsampling, and skip layers) to train your own FCN models.
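Here’s a small tf.keras sketch showing how those three pieces fit together in a decoder. The shapes and layer sizes are illustrative only, not the course’s reference solution:

```python
# A tiny fully convolutional decoder: 1x1 convolutions, transposed-convolution
# upsampling, and a skip connection. Assumes deep_features is at 1/16 of the
# input resolution and skip_features is at 1/8.
import tensorflow as tf

num_classes = 2  # e.g., road vs. not-road

def fcn_decoder(deep_features, skip_features):
    # 1x1 convolutions squeeze both feature maps down to per-class score maps.
    deep_scores = tf.keras.layers.Conv2D(num_classes, 1, padding='same')(deep_features)
    skip_scores = tf.keras.layers.Conv2D(num_classes, 1, padding='same')(skip_features)

    # Transposed convolution upsamples the deep scores to the skip layer's resolution.
    upsampled = tf.keras.layers.Conv2DTranspose(
        num_classes, kernel_size=4, strides=2, padding='same')(deep_scores)

    # Skip connection: add the higher-resolution scores to recover spatial detail.
    combined = tf.keras.layers.Add()([upsampled, skip_scores])

    # Upsample the rest of the way back to the input image resolution.
    return tf.keras.layers.Conv2DTranspose(
        num_classes, kernel_size=16, strides=8, padding='same')(combined)

# Example wiring with illustrative shapes (full image would be 160x576 here).
deep = tf.keras.Input(shape=(10, 36, 512))
skip = tf.keras.Input(shape=(20, 72, 256))
model = tf.keras.Model([deep, skip], fcn_decoder(deep, skip))
```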

Advanced Deep Learning Lesson 2: Scene Understanding

In this lesson, you’ll learn the strengths and weaknesses of bounding box networks, like YOLO and Single Shot Detectors. Then you’ll go a step beyond bounding box networks and build your own semantic segmentation networks. You’ll start with canonical models like VGG and ResNet. After removing their final, fully-connected layers, you can add the three special techniques you’ve already practiced: 1×1 convolutions, upsampling, and skip layers. Your result will be an FCN that classifies each road pixel in the image!

Advanced Deep Learning Lesson 3: Inference Optimizations

One of the challenges of semantic segmentation is that it requires a lot of computational power. In this lesson, you’ll learn how to accelerate network performance in production, using techniques such as fusion, quantization, and reduced precision.
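As a toy illustration of what quantization means at the arithmetic level (reduced precision works similarly with float16), here’s a NumPy example. Real inference toolchains do this per-tensor or per-channel with calibration data; this just shows the core idea:

```python
# Store weights as int8 plus a scale factor, and dequantize on the fly.
import numpy as np

weights = np.random.randn(256, 256).astype(np.float32)  # stand-in for a layer's weights

scale = np.abs(weights).max() / 127.0                    # map the float range onto int8
quantized = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

dequantized = quantized.astype(np.float32) * scale       # what inference actually uses
print("max abs error:", np.abs(weights - dequantized).max())
```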

Advanced Deep Learning Project: Semantic Segmentation

In the project at the end of the Advanced Deep Learning Module, you’ll build a semantic segmentation network to identify free space on the road. You’ll apply your knowledge of fully convolutional networks and their special techniques to create a semantic segmentation model that classifies each pixel of free space on the road. You’ll accelerate the network’s performance using inference optimizations like fusion, quantization, and reduced precision. You’ll be studying and implementing approaches used by top performers in the KITTI Road Detection Competition!

Month 2 Elective: Functional Safety

Together with Elektrobit, we’ve built a fun and comprehensive Functional Safety Module.

You’ll learn functional safety frameworks to ensure that vehicles are safe, at both the system and component levels.

Functional Safety Lesson 1: Introduction

You’ll build a functional safety case with Dheeraj, Stephanie, and Benjamin from Elektrobit.

In this lesson, Elektrobit’s experts will guide you through the high-level steps that the ISO 26262 standard requires for building a functional safety case. ISO 26262 is the world-recognized standard for automotive functional safety. Understanding the requirements of this standard gets you started on mastering a crucial field of autonomous vehicle development.

Functional Safety Lesson 2: Safety Plan

In this lesson, you’ll build a safety plan for a lane-keeping assistance feature. You’ll start with the same template that Elektrobit functional safety managers use, and add the information specific to your feature.

Functional Safety Lesson 3: Hazard Analysis and Risk Assessment

You’ll complete a hazard analysis and risk assessment for the lane-keeping assistance feature. As part of the HARA, you’ll brainstorm how the system might fail, including the operational mode, environmental details, and item usage of each hypothetical scenario. Your HARA will record the issues to monitor in your functional safety analysis.

Functional Safety Lesson 4: Functional Safety Concept

For each issue identified in the HARA, you’ll develop a functional safety concept that describes high-level performance requirements.

Functional Safety Lesson 5: Technical Safety Concept

You’ll translate high-level functional safety concept requirements into technical safety concept requirements that dictate specific performance parameters. At this point you’ll have concrete constraints for the system.

Functional Safety Lesson 6: Software and Hardware

Functional safety includes specific rules on how to implement hardware and software. In this lesson, you’ll learn about spatial, temporal, and communication interference, and how to guard against them. You’ll also review MISRA C++, the most common set of rules for writing C++ for automotive systems.

Functional Safety Project: Safety Case

You’ll use the guidance from your lessons to construct an end-to-end safety case for a lane departure warning feature. You’ll begin with the hazard analysis and risk assessment, and create further documentation for functional and technical safety concepts, and finally software and hardware requirements. Analyzing and documenting system safety is critical for autonomous vehicle development. These are skills that often only experienced automotive engineers possess!

System Integration

System integration is the final module of the Nanodegree program, and it’s the month where you actually get to put your code on the Udacity Self-Driving Car!

You’ll learn about the software stack that runs on “Carla,” our self-driving vehicle. Over the course of the final month of the program, you will work in teams to integrate software components, and get the car to drive itself around the Udacity test track.

Vehicle Subsystems

This lesson walks you through Carla’s key subsystems: sensors, perception, planning, and control. Eventually you’ll need to integrate software modules with these systems so that Carla can navigate the test track.

ROS and Autoware

Carla runs on two popular open-source frameworks: ROS and Autoware. In this lesson you’ll practice implementing ROS nodes and Autoware modules.
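If you haven’t written a ROS node before, the basic shape is small. Here’s a minimal Python (rospy) sketch; the topic names and the toy control law are hypothetical, not Carla’s actual interface:

```python
#!/usr/bin/env python
# A minimal ROS node: subscribe to one topic, publish on another at a fixed rate.
import rospy
from std_msgs.msg import Float64

class ThrottleRelay(object):
    def __init__(self):
        rospy.init_node('throttle_relay')
        self.target = 0.0
        rospy.Subscriber('/target_speed', Float64, self.on_target)  # hypothetical topic
        self.pub = rospy.Publisher('/throttle_cmd', Float64, queue_size=1)

    def on_target(self, msg):
        self.target = msg.data

    def spin(self):
        rate = rospy.Rate(50)  # publish at 50 Hz
        while not rospy.is_shutdown():
            self.pub.publish(Float64(min(self.target * 0.1, 1.0)))  # toy control law
            rate.sleep()

if __name__ == '__main__':
    ThrottleRelay().spin()
```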

System Integration

During the final lesson of the program, you’ll integrate ROS nodes and Autoware modules with Carla’s software development environment. You’ll also learn how to transfer the code to the vehicle, and resolve issues that arise on real hardware, such as latency, dropped messages, and process crashes.

Project: Carla

This is the capstone project of the Nanodegree program! You will work with a team of students to integrate the skills you’ve developed over the last nine months. The goal is to build Carla’s software environment to successfully navigate Udacity’s test track.


When you complete Term 3, you will graduate from the program, and earn your Udacity Self-Driving Car Engineer Nanodegree credential. You will be ready to work on an autonomous vehicle team developing groundbreaking self-driving technology, and you will join a rarefied community of professionals who are committed to a world made better through this transformational technology.

See you in class!