Self-Driving Path Planning, Brought to You by Udacity Students

Term 3 of the Udacity Self-Driving Car Engineer Nanodegree Program starts with path planning. This is one of the deepest and hardest problems for a self-driving car.

Here are three Udacity student approaches that show the complexity and beauty of path planning.

Reflections on Designing a Virtual Highway Path Planner (Part 1/3)

Mithi

Mithi published a three-part series about what she calls “the most difficult project yet” of the Nanodegree Program. In Part 1, she outlines the goals and constraints of the project, and decides on how to approach the solution. Part 2 covers the architecture of the solution, including the classes Mithi developed and the math for trajectory generation. Part 3 covers implementation, behavior planning, cost functions, and some extra considerations that could be added to improve the planner. This is a great series to review if you’re just starting the project.

“I decided that I should start with a simple model with many simple assumptions and work from there. If the assumption does not work, then I will make my model more complex. I should keep it simple (stupid!).

A programmer should not add functionality until deemed necessary. Always implement things when you actually need them, never when you just foresee that you need them. A famous programmer said that somewhere.

My design principle is: make everything simple if you can get away with it.”

Path Planning in Highways for an Autonomous Vehicle

Mohan Karthik

Mohan takes a different approach to path planning, combining a cost function with a feasibility checklist. He builds a cost function and ranks each lane by its score, then decides whether moving to the best-scoring lane is feasible. A minimal sketch of this two-stage idea follows the quote below.

“This comes down to two things (and I’m going to be specific to the highway scenario).

Estimating a score for each lane, to determine the best lane for us to be in (efficiency)

Evaluating the feasibility of moving to that lane in the immediate future (safety & comfort)”
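
Here is a minimal C++ sketch of that two-stage idea. Everything in it, from the lane struct to the cost weights and gap thresholds, is an illustrative assumption rather than Mohan’s actual code:

```cpp
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

// Illustrative per-lane observations (not from Mohan's implementation).
struct LaneInfo {
  double front_gap;    // distance to nearest vehicle ahead (m)
  double rear_gap;     // distance to nearest vehicle behind (m)
  double front_speed;  // speed of the vehicle ahead (m/s)
};

// Stage 1: score each lane for efficiency. Lower cost is better.
// The 0.7/0.3 weights are hypothetical.
double LaneCost(const LaneInfo& lane, double target_speed) {
  const double speed_cost = (target_speed - lane.front_speed) / target_speed;
  const double gap_cost = 1.0 / std::max(lane.front_gap, 1.0);
  return 0.7 * speed_cost + 0.3 * gap_cost;
}

// Stage 2: a safety/comfort checklist gates the efficiency ranking.
// The 30 m / 15 m margins are assumed values.
bool IsFeasible(const LaneInfo& lane) {
  return lane.front_gap > 30.0 && lane.rear_gap > 15.0;
}

// Rank all lanes by cost, but only consider lanes that pass the checklist.
int BestLane(const std::vector<LaneInfo>& lanes, double target_speed) {
  int best = -1;  // -1 means no feasible lane: stay put
  double best_cost = std::numeric_limits<double>::max();
  for (std::size_t i = 0; i < lanes.size(); ++i) {
    if (!IsFeasible(lanes[i])) continue;
    const double cost = LaneCost(lanes[i], target_speed);
    if (cost < best_cost) {
      best_cost = cost;
      best = static_cast<int>(i);
    }
  }
  return best;
}
```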

Self-Driving Car Engineer Diary — 11

Andrew Wilkie

The 11th post in Andrew’s series on the Nanodegree Program covers Term 3 broadly and path planning specifically. In particular, Andrew lays out where this path planning project falls in the taxonomy of autonomous driving, and the high-level inputs and outputs of a path planner. This is a great post to review if you’re interested in what a path planner does.

“I found the path planning project challenging, in large part due to the fact that we are implementing SAE Level 4 functionality in C++ and the complexity that comes with the interactions required between the various modules.”

These examples make clear the vision, skill, and tenacity our students are applying to even the most difficult challenges, and it’s a real pleasure to share their incredible work. It won’t be long before these talented individuals graduate from the program and begin making significant, real-world contributions to the future of self-driving cars. I know I speak for everyone at Udacity when I say that I’m very excited for the future they’re going to help build!

Term 3: In-Depth on Udacity’s Self-Driving Car Curriculum

Update: Udacity has a new self-driving car curriculum! The post below is now out of date, but you can see the new syllabus here.

In just a few days, we’re going to begin releasing Term 3 of the Udacity Self-Driving Car Engineer Nanodegree Program, and we could not be more excited! This is the final term of a nine-month Nanodegree program that covers the entire autonomous vehicle technology stack, and as such, it’s the culmination of an educational journey unlike any other in the world.

When you complete Term 3 and graduate from this program, you will emerge with an amazing portfolio of projects that will enable you to launch a career in the autonomous vehicle industry, and you will have gained experience and skills that are virtually impossible to acquire anywhere else. Some of our earliest students, like George Sung, Robert Ioffe, and Patrick Kern, have already started their careers in self-driving cars, and we’re going to help you do the same!

Term 3

This term is three months long, and features a different module each month.

The first month focuses on path planning, which is basically the brains of a self-driving car. This is how the vehicle decides where to go and how to get there.

The second month presents an opportunity to specialize with an elective; this is your chance to delve deeply into a particular topic, and emerge with a unique degree of expertise that could prove to be a key competitive differentiator when you enter the job market. We want your profile to stand out to prospective employers, and specialization is a great way to achieve this.

The final month is truly an Only At Udacity experience. In this System Integration Module, you will get to put your code on Udacity’s very own self-driving car! You’ll get to work with a team of students to test out your skills in the real world. We know firsthand from our hiring partners in the autonomous vehicle space that this is one of the things they value most in Udacity candidates: the combination of software skills and real-world experience.


Month 1: Path Planning

Path planning is the brains of a self-driving car. It’s how a vehicle decides how to get where it’s going, both at the macro and micro levels. You’ll learn about three core components of path planning: environmental prediction, behavioral planning, and trajectory generation.

Best of all, this module is taught by our partners at Mercedes-Benz Research & Development North America. Their participation ensures that the module focuses specifically on the material that job candidates in this field need to know.

Path Planning Lesson 1: Environmental Prediction

In the Prediction Lesson, you’ll use model-based, data-driven, and hybrid approaches to predict what other vehicles around you will do next. Model-based approaches decide which of several distinct maneuvers a vehicle might be undertaking. Data-driven approaches use training data to map a vehicle’s behavior to what we’ve seen other vehicles do in the past. Hybrid approaches combine models and data to predict where other vehicles will go next. All of this is crucial for making our own decisions about how to move.
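
To make the model-based idea concrete, here is a minimal C++ sketch of the simplest possible process model: constant-velocity propagation. The struct and function are illustrative assumptions, not the course’s API:

```cpp
#include <vector>

// Illustrative state for a tracked vehicle.
struct VehicleState {
  double x, y;    // position (m)
  double vx, vy;  // velocity (m/s)
};

// Model-based prediction in its simplest form: propagate a
// constant-velocity motion model forward in time. A real predictor
// would run several maneuver models (keep lane, change lane, ...) and
// weight them; a data-driven approach would replace this hand-built
// model with one learned from logged trajectories.
std::vector<VehicleState> PredictConstantVelocity(
    const VehicleState& now, double dt, int steps) {
  std::vector<VehicleState> trajectory;
  trajectory.reserve(steps);
  VehicleState s = now;
  for (int i = 0; i < steps; ++i) {
    s.x += s.vx * dt;
    s.y += s.vy * dt;
    trajectory.push_back(s);
  }
  return trajectory;
}
```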

Path Planning Lesson 2: Behavior Planning

At each step in time, the path planner must choose a maneuver to perform. In the Behavior Lesson, you’ll build finite-state machines to represent all of the different possible maneuvers your vehicle could choose. Your FSM’s states might include accelerate, decelerate, shift left, shift right, and continue straight. You’ll then construct a cost function that assigns a cost to each maneuver, and choose the lowest-cost option.
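
Here is a hedged C++ sketch of that pattern: a tiny FSM whose transition function limits which maneuvers are reachable, plus a placeholder cost function to pick among them. The states and cost values are stand-ins, not the course’s actual planner:

```cpp
#include <vector>

// Hypothetical maneuver states; the course's actual FSM may differ.
enum class State { KeepLane, ChangeLeft, ChangeRight };

// The FSM restricts which states are reachable from the current one,
// e.g. no jumping straight from a left change into a right change.
std::vector<State> Successors(State current) {
  switch (current) {
    case State::ChangeLeft:  return {State::ChangeLeft, State::KeepLane};
    case State::ChangeRight: return {State::ChangeRight, State::KeepLane};
    case State::KeepLane:
    default:
      return {State::KeepLane, State::ChangeLeft, State::ChangeRight};
  }
}

// Placeholder cost values: a real planner would weigh speed, collision
// risk, and lane-change comfort computed from sensor fusion data.
double Cost(State s) {
  switch (s) {
    case State::KeepLane:    return 0.3;
    case State::ChangeLeft:  return 0.5;
    case State::ChangeRight: return 0.6;
  }
  return 1.0;
}

// Evaluate every reachable maneuver and take the cheapest one.
State NextState(State current) {
  State best = current;
  double best_cost = 1e9;  // worse than any real cost
  for (State s : Successors(current)) {
    const double c = Cost(s);
    if (c < best_cost) {
      best_cost = c;
      best = s;
    }
  }
  return best;
}
```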

Path Planning Lesson 3: Trajectory Generation

Trajectory Generation is taught by Emmanuel Boidot, from Mercedes-Benz’s Vehicle Intelligence team.

In the Trajectory Lesson, you’ll use C++ and the Eigen linear algebra library to build candidate trajectories for the vehicle to follow. Some of these trajectories might be unsafe, others might simply be uncomfortable. Your cost function will guide you to the best available trajectory for the vehicle to execute.
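
To give a taste of the math, here is a sketch of the classic jerk-minimizing quintic fit using Eigen. Solving a 3×3 linear system for the last three polynomial coefficients is the standard formulation; the function name and signature are our own:

```cpp
#include <array>
#include <Eigen/Dense>

// Jerk-minimizing trajectory: fit a quintic polynomial
//   s(t) = a0 + a1*t + a2*t^2 + a3*t^3 + a4*t^4 + a5*t^5
// to start/end {position, velocity, acceleration} over duration T.
// a0..a2 follow directly from the start conditions; a3..a5 come from
// solving a 3x3 linear system built from the end conditions.
std::array<double, 6> JMT(const std::array<double, 3>& start,
                          const std::array<double, 3>& end, double T) {
  const double a0 = start[0];
  const double a1 = start[1];
  const double a2 = 0.5 * start[2];

  const double T2 = T * T, T3 = T2 * T, T4 = T3 * T, T5 = T4 * T;

  Eigen::Matrix3d A;
  A <<     T3,      T4,      T5,
       3 * T2,  4 * T3,  5 * T4,
        6 * T, 12 * T2, 20 * T3;

  Eigen::Vector3d b;
  b << end[0] - (a0 + a1 * T + a2 * T2),  // position residual at T
       end[1] - (a1 + 2 * a2 * T),        // velocity residual at T
       end[2] - 2 * a2;                   // acceleration residual at T

  const Eigen::Vector3d x = A.colPivHouseholderQr().solve(b);
  return {a0, a1, a2, x[0], x[1], x[2]};
}
```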

Path Planning Project: Highway Path Planner

https://youtu.be/L_TNyXdYFg8

Using the newest release of the Udacity simulator, you’ll build your very own path planner and put it to the test on the highway. Tie together your prediction, behavior, and trajectory engines from the previous lessons to create an end-to-end path planner that drives the car in traffic!

Month 2: Electives

Term 3 will launch with two electives: Advanced Deep Learning and Functional Safety. We’ve selected these based on feedback from our hiring partners, and we’re very excited to give students the opportunity to gain deep knowledge in these topics.

Month 2 Elective: Advanced Deep Learning

Udacity has partnered with the NVIDIA Deep Learning Institute to build an advanced course on deep learning.

This module covers semantic segmentation and inference optimization. Both of these topics are active areas of deep learning research.

Semantic segmentation identifies free space on the road at pixel-level granularity, which improves decision-making ability. Inference optimizations increase the speed at which neural networks run, which is crucial for computationally intensive models like the semantic segmentation networks you’ll study in this module.

Advanced Deep Learning Lesson 1: Fully Convolutional Networks

In this lesson, you’ll build and train fully convolutional networks that output an entire image, instead of just a classification. You’ll implement the three special techniques that FCNs use (1×1 convolutions, upsampling, and skip layers) to train your own FCN models.
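
As an illustration of the first of those techniques: a 1×1 convolution is just a per-pixel linear map across channels, which is how an FCN swaps a fully connected classifier for something that preserves spatial structure. Here’s a plain-C++ sketch for illustration only; a real FCN would use a deep learning framework, and the memory layout and names here are assumptions:

```cpp
#include <cstddef>
#include <vector>

// 1x1 convolution: out[h][w][k] = sum_c in[h][w][c] * W[k][c] + bias[k].
// It changes channel depth (C -> K) without touching spatial structure.
// Tensors are flattened in HWC order.
std::vector<double> Conv1x1(const std::vector<double>& in,       // H*W*C
                            int H, int W, int C,
                            const std::vector<double>& weights,  // K*C
                            const std::vector<double>& bias,     // K
                            int K) {
  std::vector<double> out(static_cast<std::size_t>(H) * W * K, 0.0);
  for (int p = 0; p < H * W; ++p) {    // every spatial position
    for (int k = 0; k < K; ++k) {      // every output channel
      double acc = bias[k];
      for (int c = 0; c < C; ++c) {
        acc += in[p * C + c] * weights[k * C + c];
      }
      out[p * K + k] = acc;
    }
  }
  return out;
}
```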

Advanced Deep Learning Lesson 2: Scene Understanding

In this lesson, you’ll learn the strengths and weaknesses of bounding box networks, like YOLO and Single Shot Detectors. Then you’ll go a step beyond bounding box networks and build your own semantic segmentation networks. You’ll start with canonical models like VGG and ResNet. After removing their final, fully connected layers, you can add the three special techniques you’ve already practiced: 1×1 convolutions, upsampling, and skip layers. Your result will be an FCN that classifies each road pixel in the image!

Advanced Deep Learning Lesson 3: Inference Optimizations

One of the challenges of semantic segmentation is that it requires a lot of computational power. In this lesson, you’ll learn how to accelerate network performance in production, using techniques such as fusion, quantization, and reduced precision.
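
As a flavor of what reduced precision means in practice, here is a sketch of symmetric int8 weight quantization. Production toolchains do this per layer with calibration data; this shows only the core arithmetic, and the struct is our own:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Quantized weights plus the scale needed to interpret them:
// real_value is approximately scale * quantized_value.
struct QuantizedTensor {
  std::vector<int8_t> values;
  double scale;
};

// Symmetric linear quantization: map [-max_abs, +max_abs] onto
// [-127, +127]. Int8 math is far cheaper than float32, at the cost
// of some precision.
QuantizedTensor QuantizeInt8(const std::vector<float>& weights) {
  float max_abs = 0.0f;
  for (float w : weights) max_abs = std::max(max_abs, std::fabs(w));
  const double scale = (max_abs > 0.0f) ? max_abs / 127.0 : 1.0;

  QuantizedTensor q;
  q.scale = scale;
  q.values.reserve(weights.size());
  for (float w : weights) {
    q.values.push_back(static_cast<int8_t>(std::lround(w / scale)));
  }
  return q;
}
```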

Advanced Deep Learning Project: Semantic Segmentation

https://youtu.be/5uxCFwm3kcM

In the project at the end of the Advanced Deep Learning Module, you’ll build a semantic segmentation network to identify free space on the road. You’ll apply your knowledge of fully convolutional networks and their special techniques to create a semantic segmentation model that classifies each pixel of free space on the road. You’ll accelerate the network’s performance using inference optimizations like fusion, quantization, and reduced precision. You’ll be studying and implementing approaches used by top performers in the KITTI Road Detection Competition!

Month 2 Elective: Functional Safety

Together with Elektrobit, we’ve built a fun and comprehensive Functional Safety Module.

You’ll learn functional safety frameworks to ensure that vehicles are safe, both at the system and component levels.

Functional Safety Lesson 1: Introduction

You’ll build a functional safety case with Dheeraj, Stephanie, and Benjamin from Elektrobit.

In this lesson, Elektrobit’s experts will guide you through the high-level steps that the ISO 26262 standard requires for building a functional safety case. ISO 26262 is the world-recognized standard for automotive functional safety. Understanding the requirements of this standard gets you started on mastering a crucial field of autonomous vehicle development.

Functional Safety Lesson 2: Safety Plan

In this lesson, you’ll build a safety plan for a lane-keeping assistance feature. You’ll start with the same template that Elektrobit functional safety managers use, and add the information specific to your feature.

Functional Safety Lesson 3: Hazard Analysis and Risk Assessment

You’ll complete a hazard analysis and risk assessment for the lane-keeping assistance feature. As part of the HARA, you’ll brainstorm how the system might fail, including the operational mode, environmental details, and item usage of each hypothetical scenario. Your HARA will record the issues to monitor in your functional safety analysis.

Functional Safety Lesson 4: Functional Safety Concept

For each issue identified in the HARA, you’ll develop a functional safety concept that describes high-level performance requirements.

Functional Safety Lesson 5: Technical Safety Concept

You’ll translate high-level functional safety concept requirements into technical safety concept requirements that dictate specific performance parameters. At this point you’ll have concrete constraints for the system.

Functional Safety Lesson 6: Software and Hardware

Functional safety includes specific rules on how to implement hardware and software. In this lesson, you’ll learn about spatial, temporal, and communication interference, and how to guard against them. You’ll also review MISRA C++, the most common set of rules for writing C++ for automotive systems.
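
To give a feel for the style MISRA encourages, here is an illustrative function. It paraphrases the spirit of a few well-known conventions (braces on every branch, a trailing else, fixed-width integer types, a single exit point) without quoting rule numbers from the standard itself:

```cpp
#include <cstdint>

namespace
{
// Named constant instead of a magic number scattered through the code.
constexpr std::int32_t kMaxSteeringDeg = 35;
}

std::int32_t ClampSteering(const std::int32_t requested_deg)
{
    std::int32_t result = requested_deg;
    if (result > kMaxSteeringDeg)
    {
        result = kMaxSteeringDeg;
    }
    else if (result < -kMaxSteeringDeg)
    {
        result = -kMaxSteeringDeg;
    }
    else
    {
        // MISRA-style: every if/else-if chain ends with an else,
        // even when there is nothing to do.
    }
    return result;  // single exit point
}
```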

Functional Safety Project: Safety Case

You’ll use the guidance from your lessons to construct an end-to-end safety case for a lane departure warning feature. You’ll begin with the hazard analysis and risk assessment, and create further documentation for functional and technical safety concepts, and finally software and hardware requirements. Analyzing and documenting system safety is critical for autonomous vehicle development. These are skills that often only experienced automotive engineers possess!

System Integration

System integration is the final module of the Nanodegree program, and it’s the month where you actually get to put your code on the Udacity Self-Driving Car!

You’ll learn about the software stack that runs on “Carla,” our self-driving vehicle. Over the course of the final month of the program, you will work in teams to integrate software components, and get the car to drive itself around the Udacity test track.

Vehicle Subsystems

This lesson walks you through Carla’s key subsystems: sensors, perception, planning, and control. Eventually you’ll need to integrate software modules with these systems so that Carla can navigate the test track.

ROS and Autoware

Carla runs on two popular open-source frameworks: ROS and Autoware. In this lesson you’ll practice implementing ROS nodes and Autoware modules.
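
For context, a minimal ROS publisher node in C++ looks like the sketch below. The node name, topic, message type, and rate are placeholders, not Carla’s actual interface:

```cpp
#include <ros/ros.h>
#include <std_msgs/Float64.h>

int main(int argc, char** argv) {
  // Register this process with the ROS master under a node name.
  ros::init(argc, argv, "target_speed_publisher");
  ros::NodeHandle nh;

  // Advertise a topic; other nodes subscribe to it by name.
  ros::Publisher pub = nh.advertise<std_msgs::Float64>("/target_speed", 10);

  ros::Rate rate(10);  // publish at 10 Hz
  while (ros::ok()) {
    std_msgs::Float64 msg;
    msg.data = 11.1;  // placeholder target speed in m/s (~40 km/h)
    pub.publish(msg);
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}
```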

System Integration

During the final lesson of the program, you’ll integrate ROS nodes and Autoware modules with Carla’s software development environment. You’ll also learn how to transfer the code to the vehicle, and resolve issues that arise on real hardware, such as latency, dropped messages, and process crashes.

Project: Carla

https://youtu.be/tvwIX0pPtB0

This is the capstone project of the Nanodegree program! You will work with a team of students to integrate the skills you’ve developed over the last nine months. The goal is to build Carla’s software environment to successfully navigate Udacity’s test track.


When you complete Term 3, you will graduate from the program, and earn your Udacity Self-Driving Car Engineer Nanodegree credential. You will be ready to work on an autonomous vehicle team developing groundbreaking self-driving technology, and you will join a rarefied community of professionals who are committed to a world made better through this transformational technology.

See you in class!

Autonomous Vehicles Hurt Berkshire in Two Ways

Nearly all of my savings are in various index funds, but I do own stock in a single individual company: Berkshire Hathaway.

It’s mostly for sentimental reasons. I went to Omaha a couple of times during business school: once for the Berkshire annual conference (“Woodstock for Capitalists”) and once to meet the Oracle himself, as part of a school trip.

I’ve known for a while that autonomous vehicles would hurt insurance, which is one big part of Berkshire’s business. The logic is that insurance companies only exist because drivers need to insure themselves against the costs of accidents. If accidents diminish, the need for insurance diminishes.

But a question at this year’s annual meeting pointed out that another big part of Berkshire’s business is highly vulnerable to autonomous vehicles: railroads.

Berkshire purchased the Burlington Northern Santa Fe (BNSF) railroad for $26.5 billion in 2010, and it’s been a good investment.

That investment will come under intense pressure from self-driving trucks, however. Once trucks can operate nearly constantly, without the cost or physical limitations of a driver, the cost advantage of transportation by rail will diminish, or maybe even disappear completely.

Here’s Buffett’s exchange on this question, honest as ever.

Toyota Goes Public

A year and a half ago, Toyota announced that it would invest $1 billion into a new entity called the Toyota Research Institute (TRI).

And then…nothing else, really.

TRI has been pretty quiet for 18 months.

A few days ago, though, they broke their silence with a private track demonstration in Sonoma.

The platform is the second generation of the advanced safety research vehicle revealed to the public by Toyota at the 2013 Consumer Electronics Show. It is built on a current generation Lexus LS 600hL, which features a robust drive-by-wire interface. The 2.0 is designed to be a flexible, plug-and-play test platform that can be upgraded continuously and often. Its technology stack will be used to develop both of TRI’s core research paths: Chauffeur and Guardian systems.

Chauffeur refers to the always deployed, fully autonomous system classified by SAE as unrestricted Level 5 autonomy and Level 4 restricted and geo-fenced operation.

Guardian is a high-level driver assist system, constantly monitoring the driving environment inside and outside the vehicle, ready to alert the driver of potential dangers and stepping in when needed to assist in crash avoidance.

I’m excited to see Toyota share more of what they’re doing.

This is the world’s largest auto manufacturer, and I assume they will bring their A-game.

NIO Sets Electric Autonomous Speed Record

The NIO EP9 is officially the fastest electric, autonomous car in the world.

The NIO EP9 electric supercar wasn’t content with merely entering the never-ending vehicular stat war — it recently set a couple of lap records at Austin’s Circuit of the Americas, including one for the fastest production car ever to run there. In case that wasn’t enough, it set a driverless lap record for the track, too. The startup automaker now claims that it is the fastest electric autonomous car around.

Jalopnik reports that NIO engineers built its autonomous software in just four months.

Did I mention that NIO is a hiring partner for Udacity’s Self-Driving Car Engineer Nanodegree Program?

Here’s a question-and-answer session between our students and NIO CEO Padmasree Warrior, with me moderating.

And here’s Padmasree and Sebastian Thrun at Udacity Talks!

The Self-Driving Polity that is Arizona

What’s different is that this time, Uber has the blessing from Arizona’s top politician, Governor Doug Ducey, a Republican, who is expected to be “Rider Zero” on an autonomous trip along with Anthony Levandowski, VP of Uber’s Advanced Technologies Group. The Arizona pilot comes after California’s Department of Motor Vehicles revoked the registration of Uber’s 16 self-driving cars because the company refused to apply for the appropriate permits for testing autonomous cars.

As Louis Brandeis said, the states are “laboratories of democracy”.

Read the whole thing (it’s short).

New Vehicle-to-Vehicle Communication Rules

Urban planner and historian Sarah Jo Peterson emails me that the US Department of Transportation just proposed a rule requiring automakers to include vehicle-to-vehicle communication hardware in new cars, and to use a common standard.

Of course, this is just a proposal. Before this could ever take effect, a new presidential administration will be in place and they might have their own views.

Peterson notes some concerns:

Are we moving to a world where bicycles need V2V and pedestrians need V2V? What does it mean for an act of mobility to require continuous government permission? (If you are not broadcasting, are you illegal? Will you be shut down in real time?)

I agree, and I would prefer that V2V arise as a de facto standard instead of a de jure standard mandated by the government. This might be tougher for vehicle-to-infrastructure communication, which necessarily involves communicating with government property, like traffic lights.

But if SMTP could rise as a de facto standard, the cause does not seem lost.

Meanwhile, Peterson points me to a Transportist blog post by David Levinson, arguing that vehicle-to-vehicle communication may even be harmful in some scenarios.

The full blog post is hard to excerpt, but Levinson emphasizes that if we come to rely on vehicle-to-vehicle communication to navigate intersections (for example), a bug in the system or an unexpected event (he suggests a deer crossing the road) could bring traffic to a halt and possibly cause massive collisions.

I’m a little less pessimistic on that front, but Levinson is a professor of transportation and has been working on this problem for a decade, so I might defer to his logic.

Eight Days of Autonomous Vehicles

December 14, 2016:

Uber has expanded its self-driving taxi trial to the home of technology and autonomous vehicles: San Francisco. Starting from 14 December, Uber customers with a credit card attached to a San Francisco billing address are eligible to ride in a fleet of five self-driving cars.

December 22, 2016:

“Our cars departed for Arizona this morning by truck,” said an Uber spokesperson in an email to The Verge. “We’ll be expanding our self-driving pilot there in the next few weeks, and we’re excited to have the support of Governor Ducey.”

The move comes after California’s Department of Motor Vehicles revoked the registration of Uber’s 16 self-driving cars because the company refused to apply for the appropriate permits for testing autonomous cars.

This does not feel like progress.

Startup Watch: Blackmore

A startup called Blackmore just raised a few million dollars to miniaturize sensors for autonomous vehicles.

A few interesting points about Blackmore:

  1. They want to embed lidar in the grille of a car. This seems like a difficult vantage point, since the sensor won’t have a 360-degree view of the environment.
  2. They plan to deliver prototypes next summer.
  3. Based on their website, they seem to target two markets: autonomous vehicles and the military.
  4. They’re based in Bozeman, Montana, which is a great town, but hardly a tech hub. Given the cost of housing in Silicon Valley, though, I’m tempted to apply for a job there right now.

Delphi and Mobileye Roll Together

The San Francisco Chronicle got an up-close and personal look at Delphi’s partnership with Mobileye, and the self-driving cars that partnership has produced:

With the race to develop self-driving cars now at an all-out sprint, Delphi and Mobileye believe they possess an edge.

They have developed a system for crowdsourcing the hyper-detailed 3-D maps upon which autonomous vehicles rely. Millions of non-autonomous cars that use Mobileye cameras for lane keeping or collision prevention will create a constant stream of data to map roads and potential obstacles, even temporary ones such as road repair crews or double-parked cars.

Also, this:

“You can’t develop autonomous cars that just follow all the rules, because they’ll just clog cities,” [Mobileye executive Dan] Galves said. “The point is really providing the intelligence and the rules of breaking the rules, if you will — providing some human intuition into the vehicles.”