Mithi published a three-part series about what she calls "the most difficult project yet" of the Nanodegree Program. In Part 1, she outlines the goals and constraints of the project, and decides on how to approach the solution. Part 2 covers the architecture of the solution, including the classes Mithi developed and the math for trajectory generation. Part 3 covers implementation, behavior planning, cost functions, and some extra considerations that could be added to improve the planner. This is a great series to review if you're just starting the project.
"I decided that I should start with a simple model with many simple assumptions and work from there. If the assumption does not work, then I will make my model more complex. I should keep it simple (stupid!).
A programmer should not add functionality until deemed necessary. Always implement things when you actually need them, never when you just foresee that you need them. A famous programmer said that somewhere.
My design principle is: make everything simple if you can get away with it."
Mohan takes a different approach to path planning, combining a cost function with a feasibility checklist. He ranks each lane by its score on the cost function, then decides whether to move to a lane based on the feasibility checklist.
"This comes down to two things (and I'm going to be specific to the highway scenario).
Estimating a score for each lane, to determine the best lane for us to be in (efficiency)
Evaluating the feasibility of moving to that lane in the immediate future (safety & comfort)"
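To make that two-step structure concrete, here is a minimal C++ sketch (the project's language) of scoring lanes for efficiency and then gating the winner behind a feasibility check. The struct, weights, and gap thresholds are illustrative assumptions, not Mohan's actual code.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Summary of each lane, as produced by prediction/sensor fusion.
struct LaneInfo {
  double ahead_speed;  // speed of the closest car ahead in this lane (m/s)
  double ahead_gap;    // distance to that car (m)
  double behind_gap;   // distance to the closest car behind (m)
};

// Efficiency score: prefer lanes moving near our target speed with open road.
double LaneScore(const LaneInfo& lane, double target_speed) {
  double speed_cost = std::abs(target_speed - lane.ahead_speed) / target_speed;
  double gap_bonus = std::min(lane.ahead_gap, 100.0) / 100.0;
  return gap_bonus - speed_cost;  // higher is better
}

// Safety/comfort gate: only change lanes if front and rear gaps are adequate.
bool LaneChangeFeasible(const LaneInfo& lane) {
  return lane.ahead_gap > 20.0 && lane.behind_gap > 15.0;
}

int ChooseLane(const std::vector<LaneInfo>& lanes, int current_lane,
               double target_speed) {
  int best = current_lane;
  double best_score = -std::numeric_limits<double>::infinity();
  for (int i = 0; i < static_cast<int>(lanes.size()); ++i) {
    double score = LaneScore(lanes[i], target_speed);
    if (score > best_score) { best_score = score; best = i; }
  }
  // Commit to the best lane only if moving there is feasible right now.
  if (best != current_lane && !LaneChangeFeasible(lanes[best])) {
    return current_lane;
  }
  return best;
}
```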
The 11th post in Andrew's series on the Nanodegree Program covers Term 3 broadly and path planning specifically. In particular, Andrew lays out where this path planning project falls in the taxonomy of autonomous driving, and the high-level inputs and outputs of a path planner. This is a great post to review if you're interested in what a path planner does.
"I found the path planning project challenging, in large part due to the fact that we are implementing SAE Level 4 functionality in C++ and the complexity that comes with the interactions required between the various modules."
These examples make clear the vision, skill, and tenacity our students are applying to even the most difficult challenges, and it's a real pleasure to share their incredible work. It won't be long before these talented individuals graduate the program, and begin making significant, real-world contributions to the future of self-driving cars. I know I speak for everyone at Udacity when I say that I'm very excited for the future they're going to help build!
In just a few days, we're going to begin releasing Term 3 of the Udacity Self-Driving Car Engineer Nanodegree Program, and we could not be more excited! This is the final term of a nine-month Nanodegree program that covers the entire autonomous vehicle technology stack, and as such, it's the culmination of an educational journey unlike any other in the world.
When you complete Term 3 and graduate from this program, you will emerge with an amazing portfolio of projects that will enable you to launch a career in the autonomous vehicle industry, and you will have gained experience and skills that are virtually impossible to acquire anywhere else. Some of our earliest students, like George Sung, Robert Ioffe, and Patrick Kern, have already started their careers in self-driving cars, and we're going to help you do the same!
Term 3
This term is three months long, and features a different module each month.
The first month focuses on path planning, which is basically the brains of a self-driving car. This is how the vehicle decides where to go and how to get there.
The second month presents an opportunity to specialize with an elective; this is your chance to delve deeply into a particular topic, and emerge with a unique degree of expertise that could prove to be a key competitive differentiator when you enter the job market. We want your profile to stand out to prospective employers, and specialization is a great way to achieve this.
The final month is truly an Only At Udacity experience. In this System Integration Module, you will get to put your code on Udacity's very own self-driving car! You'll get to work with a team of students to test out your skills in the real world. We know firsthand from our hiring partners in the autonomous vehicle space that this is one of the things they value most in Udacity candidates: the combination of software skills and real-world experience.
Month 1: Path Planning
Path planning is the brains of a self-driving car. It's how a vehicle decides how to get where it's going, both at the macro and micro levels. You'll learn about three core components of path planning: environmental prediction, behavioral planning, and trajectory generation.
Best of all, this module is taught by our partners at Mercedes-Benz Research & Development North America. Their participation ensures that the module focuses specifically on the material that job candidates in this field need to know.
Path Planning Lesson 1: Environmental Prediction
In the Prediction Lesson, you'll use model-based, data-driven, and hybrid approaches to predict what other vehicles around you will do next. Model-based approaches decide which of several distinct maneuvers a vehicle might be undertaking. Data-driven approaches use training data to map a vehicle's behavior to what we've seen other vehicles do in the past. Hybrid approaches combine models and data to predict where other vehicles will go next. All of this is crucial for making our own decisions about how to move.
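For a taste of the data-driven side, here is a hedged C++ sketch of a tiny Gaussian Naive Bayes classifier, one classic way to label an observed vehicle's maneuver from a couple of features (say, lane offset and lateral speed). The features, labels, and trained statistics are illustrative assumptions, not the lesson's reference solution.

```cpp
#include <array>
#include <cmath>

constexpr int kNumLabels = 3;    // 0 = change left, 1 = keep lane, 2 = change right
constexpr int kNumFeatures = 2;  // e.g., lane offset d and lateral speed d_dot

struct GaussianNaiveBayes {
  // Per-label mean and variance for each feature, plus label priors,
  // all learned offline from recorded trajectories.
  std::array<std::array<double, kNumFeatures>, kNumLabels> mean;
  std::array<std::array<double, kNumFeatures>, kNumLabels> var;
  std::array<double, kNumLabels> prior;

  int Predict(const std::array<double, kNumFeatures>& x) const {
    const double kTwoPi = 6.283185307179586;
    int best_label = 0;
    double best_log_p = -1e300;
    for (int label = 0; label < kNumLabels; ++label) {
      double log_p = std::log(prior[label]);
      for (int f = 0; f < kNumFeatures; ++f) {
        // Add the Gaussian log-likelihood of feature f under this label.
        double diff = x[f] - mean[label][f];
        log_p += -0.5 * std::log(kTwoPi * var[label][f])
                 - diff * diff / (2.0 * var[label][f]);
      }
      if (log_p > best_log_p) { best_log_p = log_p; best_label = label; }
    }
    return best_label;
  }
};
```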
Path Planning Lesson 2: Behavior Planning
At each step in time, the path planner must choose a maneuver to perform. In the Behavior Lesson, you'll build finite-state machines to represent all of the different possible maneuvers your vehicle could choose. Your FSM's states might include accelerate, decelerate, shift left, shift right, and continue straight. You'll then construct a cost function that assigns a cost to each maneuver, and choose the lowest-cost option.
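A minimal sketch of how such a state machine and cost function might fit together in C++, with maneuvers named after the list above; the successor logic and cost terms are illustrative assumptions rather than the lesson's solution:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

enum class Maneuver { KeepLane, Accelerate, Decelerate, ShiftLeft, ShiftRight };

struct Ego { int lane; double speed; double target_speed; int num_lanes; };

// The FSM only offers transitions that make sense in the current state.
std::vector<Maneuver> Successors(const Ego& ego) {
  std::vector<Maneuver> s = {Maneuver::KeepLane, Maneuver::Accelerate,
                             Maneuver::Decelerate};
  if (ego.lane > 0) s.push_back(Maneuver::ShiftLeft);
  if (ego.lane < ego.num_lanes - 1) s.push_back(Maneuver::ShiftRight);
  return s;
}

// Illustrative cost: penalize deviation from the target speed, plus a small
// comfort penalty for lane changes.
double Cost(const Ego& ego, Maneuver m) {
  double speed = ego.speed;
  double lane_change_penalty = 0.0;
  switch (m) {
    case Maneuver::Accelerate: speed += 1.0; break;
    case Maneuver::Decelerate: speed -= 1.0; break;
    case Maneuver::ShiftLeft:
    case Maneuver::ShiftRight: lane_change_penalty = 0.1; break;
    default: break;
  }
  return std::abs(ego.target_speed - speed) / ego.target_speed +
         lane_change_penalty;
}

Maneuver ChooseManeuver(const Ego& ego) {
  std::vector<Maneuver> options = Successors(ego);
  return *std::min_element(options.begin(), options.end(),
                           [&](Maneuver a, Maneuver b) {
                             return Cost(ego, a) < Cost(ego, b);
                           });
}
```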
Path Planning Lesson 3: Trajectory Generation
Trajectory Generation is taught by Emmanuel Boidot, from Mercedes-Benz's Vehicle Intelligence team.
In the Trajectory Lesson, you'll use C++ and the Eigen linear algebra library to build candidate trajectories for the vehicle to follow. Some of these trajectories might be unsafe; others might simply be uncomfortable. Your cost function will guide you to the best available trajectory for the vehicle to execute.
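One classic trajectory-generation technique in this setting is the jerk-minimizing trajectory: fit a quintic polynomial s(t) = a0 + a1·t + ... + a5·t^5 to given start and end conditions by solving a small linear system with Eigen. Here is a sketch under those assumptions (the boundary conditions would come from the behavior layer):

```cpp
#include <Eigen/Dense>
#include <vector>

// Jerk-minimizing trajectory (JMT): given start and end conditions
// {position, velocity, acceleration} and a duration T, return the six
// coefficients of the quintic polynomial connecting them.
std::vector<double> JMT(const std::vector<double>& start,  // at t = 0
                        const std::vector<double>& end,    // at t = T
                        double T) {
  // The first three coefficients follow directly from the start conditions.
  double a0 = start[0], a1 = start[1], a2 = 0.5 * start[2];

  // Solve A * x = b for the remaining coefficients a3, a4, a5, using the
  // end conditions on position, velocity, and acceleration at t = T.
  Eigen::Matrix3d A;
  A << T*T*T,  T*T*T*T,  T*T*T*T*T,
       3*T*T,  4*T*T*T,  5*T*T*T*T,
       6*T,    12*T*T,   20*T*T*T;
  Eigen::Vector3d b;
  b << end[0] - (a0 + a1*T + a2*T*T),
       end[1] - (a1 + 2*a2*T),
       end[2] - 2*a2;
  Eigen::Vector3d x = A.colPivHouseholderQr().solve(b);

  return {a0, a1, a2, x[0], x[1], x[2]};
}
```

Sampling the resulting polynomial at small time steps produces the waypoints a simulator or controller consumes, and the cost function then compares many such candidates on safety and comfort.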
Using the newest release of the Udacity simulator, you'll build your very own path planner and put it to the test on the highway. Tie together your prediction, behavior, and trajectory engines from the previous lessons to create an end-to-end path planner that drives the car in traffic!
Month 2: Electives
Term 3 will launch with two electives: Advanced Deep Learning and Functional Safety. We've selected these based on feedback from our hiring partners, and we're very excited to give students the opportunity to gain deep knowledge in these topics.
Month 2 Elective: Advanced Deep Learning
This module covers semantic segmentation and inference optimization. Both of these topics are active areas of deep learning research.
Semantic segmentation identifies free space on the road at pixel-level granularity, which improves decision-making ability. Inference optimizations accelerate the speed at which neural networks run, which is crucial for computationally intensive models like the semantic segmentation networks you'll study in this module.
Advanced Deep Learning Lesson 1: Fully Convolutional Networks
In this lesson, you'll build and train fully convolutional networks that output an entire image, instead of just a classification. You'll implement the three special techniques that FCNs use (1×1 convolutions, upsampling, and skip layers) to train your own FCN models.
Advanced Deep Learning Lesson 2: Scene Understanding
In this lesson, you'll learn the strengths and weaknesses of bounding box networks, like YOLO and Single Shot Detectors. Then you'll go a step beyond bounding box networks and build your own semantic segmentation networks. You'll start with canonical models like VGG and ResNet. After removing their final, fully-connected layers, you can add the three special techniques you've already practiced: 1×1 convolutions, upsampling, and skip layers. Your result will be an FCN that classifies each road pixel in the image!
Advanced Deep Learning Lesson 3: Inference Optimizations
One of the challenges of semantic segmentation is that it requires a lot of computational power. In this lesson, you'll learn how to accelerate network performance in production, using techniques such as fusion, quantization, and reduced precision.
Advanced Deep Learning Project: Semantic Segmentation
In the project at the end of the Advanced Deep Learning Module, you'll build a semantic segmentation network to identify free space on the road. You'll apply your knowledge of fully convolutional networks and their special techniques to create a semantic segmentation model that classifies each pixel of free space on the road. You'll accelerate the network's performance using inference optimizations like fusion, quantization, and reduced precision. You'll be studying and implementing approaches used by top performers in the KITTI Road Detection Competition!
Month 2 Elective: Functional Safety
Together with Elektrobit, we've built a fun and comprehensive Functional Safety Module.
You'll learn functional safety frameworks to ensure that vehicles are safe, both at the system and component levels.
Functional Safety Lesson 1: Introduction
You'll build a functional safety case with Dheeraj, Stephanie, and Benjamin from Elektrobit.
In this lesson, Elektrobit's experts will guide you through the high-level steps that the ISO 26262 standard requires for building a functional safety case. ISO 26262 is the world-recognized standard for automotive functional safety. Understanding the requirements of this standard gets you started on mastering a crucial field of autonomous vehicle development.
Functional Safety Lesson 2: Safety Plan
In this lesson, you'll build a safety plan for a lane-keeping assistance feature. You'll start with the same template that Elektrobit functional safety managers use, and add the information specific to your feature.
Functional Safety Lesson 3: Hazard Analysis and Risk Assessment
You'll complete a hazard analysis and risk assessment for the lane-keeping assistance feature. As part of the HARA, you'll brainstorm how the system might fail, including the operational mode, environmental details, and item usage of each hypothetical scenario. Your HARA will record the issues to monitor in your functional safety analysis.
You'll translate high-level functional safety concept requirements into technical safety concept requirements that dictate specific performance parameters. At this point you'll have concrete constraints for the system.
Functional safety includes specific rules on how to implement hardware and software. In this lesson, you'll learn about spatial, temporal, and communication interference, and how to guard against them. You'll also review MISRA C++, the most common set of rules for writing C++ for automotive systems.
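As a small, hedged illustration of the defensive style such guidelines encourage (fixed-width types, braces on every branch, a single exit point), consider the sketch below; it quotes no specific MISRA rules and makes no compliance claim:

```cpp
#include <cstdint>

// Clamp a commanded speed to a maximum, written in a deliberately
// defensive style: fixed-width types, braces on every control block,
// and a single return at the end of the function.
std::int32_t ClampSpeed(std::int32_t speed, std::int32_t max_speed) {
  std::int32_t result = speed;
  if (speed > max_speed) {
    result = max_speed;
  }
  return result;
}
```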
Functional Safety Project: Safety Case
You'll use the guidance from your lessons to construct an end-to-end safety case for a lane departure warning feature. You'll begin with the hazard analysis and risk assessment, then create documentation for the functional and technical safety concepts, and finally the software and hardware requirements. Analyzing and documenting system safety is critical for autonomous vehicle development. These are skills that often only experienced automotive engineers possess!
Month 3: System Integration
System integration is the final module of the Nanodegree program, and it's the month where you actually get to put your code on the Udacity Self-Driving Car!
You'll learn about the software stack that runs on "Carla," our self-driving vehicle. Over the course of the final month of the program, you will work in teams to integrate software components, and get the car to drive itself around the Udacity test track.
Vehicle Subsystems
This lesson walks you through Carla's key subsystems: sensors, perception, planning, and control. Eventually you'll need to integrate software modules with these systems so that Carla can navigate the test track.
ROS and Autoware
Carla runs on two popular open-source automotive libraries: ROS and Autoware. In this lesson youâll practice implementing ROS nodes and Autoware modules.
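As a flavor of that exercise, here is a minimal ROS (ROS 1) publisher node in C++; the node and topic names are placeholders, and the Autoware modules follow a similar publish/subscribe pattern:

```cpp
#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char** argv) {
  // Register this process with the ROS master under a placeholder name.
  ros::init(argc, argv, "example_node");
  ros::NodeHandle nh;

  // Advertise a topic that other nodes (e.g., a logger) can subscribe to.
  ros::Publisher pub = nh.advertise<std_msgs::String>("example_topic", 10);

  ros::Rate rate(10);  // publish at 10 Hz
  while (ros::ok()) {
    std_msgs::String msg;
    msg.data = "hello from the example node";
    pub.publish(msg);
    rate.sleep();
  }
  return 0;
}
```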
System Integration
During the final lesson of the program, you'll integrate ROS nodes and Autoware modules with Carla's software development environment. You'll also learn how to transfer the code to the vehicle, and resolve issues that arise on real hardware, such as latency, dropped messages, and process crashes.
This is the capstone project of the Nanodegree program! You will work with a team of students to integrate the skills you've developed over the last nine months. The goal is to build Carla's software environment to successfully navigate Udacity's test track.
When you complete Term 3, you will graduate from the program, and earn your Udacity Self-Driving Car Engineer Nanodegree credential. You will be ready to work on an autonomous vehicle team developing groundbreaking self-driving technology, and you will join a rarefied community of professionals who are committed to a world made better through this transformational technology.
Nearly all of my savings are in various index funds, but I do own stock in one individual company: Berkshire Hathaway.
It's mostly for sentimental reasons. I went to Omaha a couple of times during business school: once for the Berkshire annual conference ("Woodstock for Capitalists") and once to meet the Oracle himself, as part of a school trip.
I've known for a while that autonomous vehicles would hurt insurance, which is one big part of Berkshire's business. The logic is that insurance companies only exist because drivers need to insure themselves against the costs of accidents. If accidents diminish, the need for insurance diminishes.
But a question at this year's annual meeting pointed out that another big part of Berkshire's business is highly vulnerable to autonomous vehicles: railroads.
Berkshire purchased the Burlington Northern Santa Fe (BNSF) railroad for $26.5 billion in 2010, and it's been a good investment.
That investment will come under intense pressure from self-driving trucks, however. Once trucks can operate nearly constantly, without the cost or physical limitations of a driver, the cost advantage of transportation by rail will diminish, or maybe even disappear completely.
The platform is the second generation of the advanced safety research vehicle revealed to the public by Toyota at the 2013 Consumer Electronics Show. It is built on a current-generation Lexus LS 600hL, which features a robust drive-by-wire interface. The 2.0 is designed to be a flexible, plug-and-play test platform that can be upgraded continuously and often. Its technology stack will be used to develop both of TRI's core research paths: Chauffeur and Guardian systems.

Chauffeur refers to the always-deployed, fully autonomous system classified by SAE as unrestricted Level 5 autonomy and Level 4 restricted and geo-fenced operation.

Guardian is a high-level driver assist system, constantly monitoring the driving environment inside and outside the vehicle, ready to alert the driver of potential dangers and stepping in when needed to assist in crash avoidance.
I'm excited to see Toyota share more of what they're doing.
This is the world's largest auto manufacturer, and I assume they will bring their A-game to the table.
The NIO EP9 electric supercar wasn't content with merely entering the never-ending vehicular stat war: it recently set a couple of lap records at Austin's Circuit of the Americas, including one for the fastest production car ever to run there. In case that wasn't enough, it set a driverless lap record for the track, too. The startup automaker now claims that it is the fastest electric autonomous car around.
Jalopnik reports that NIO engineers built its autonomous software in just four months.
What's different is that this time, Uber has the blessing of Arizona's top politician, Governor Doug Ducey, a Republican, who is expected to be "Rider Zero" on an autonomous trip along with Anthony Levandowski, VP of Uber's Advanced Technologies Group. The Arizona pilot comes after California's Department of Motor Vehicles revoked the registration of Uber's 16 self-driving cars because the company refused to apply for the appropriate permits for testing autonomous cars.
Of course, this is just a proposal. Before it could ever take effect, a new presidential administration will be in place, and it might have its own views.
Peterson notes some concerns:
Are we moving to a world where bicycles need V2V and pedestrians need V2V? What does it mean for an act of mobility to require continuous government permission? (If you are not broadcasting, are you illegal? Will you be shut down in real time?)
I agree, and would prefer that V2V arise as a de facto standard, instead of a de jure standard mandated by the government. This might be tougher for vehicle-to-infrastructure communication, which necessarily involves communication with government property, like traffic lights.
But if SMTP could rise as a de facto standard, the cause does not seem lost.
The full blog post is hard to excerpt, but Levinson emphasizes that if we come to rely on vehicle-to-vehicle communication to navigate intersections (for example), a bug in the system or an unexpected event (he suggests a deer crossing the road) could bring traffic to a halt and possibly cause massive collisions.
I'm a little less pessimistic on that front, but Levinson is a professor of transportation and has been working on this problem for a decade, so I might defer to his logic.
Uber has expanded its self-driving taxi trial to the home of technology and autonomous vehicles: San Francisco. Starting from 14 December, Uber customers with a credit card attached to a San Francisco billing address are eligible to ride in a fleet of five self-driving cars.
"Our cars departed for Arizona this morning by truck," said an Uber spokesperson in an email to The Verge. "We'll be expanding our self-driving pilot there in the next few weeks, and we're excited to have the support of Governor Ducey."
The move comes after California's Department of Motor Vehicles revoked the registration of Uber's 16 self-driving cars because the company refused to apply for the appropriate permits for testing autonomous cars.
They want to embed lidar in the grille of a car. This seems like a difficult vantage point, since the sensor won't have a 360-degree view of the environment.
They plan to deliver prototypes next summer.
Based on their website, they seem to target two markets: autonomous vehicles and the military.
They're based in Bozeman, Montana, which is a great town, but hardly a tech hub. Given the cost of housing in Silicon Valley, though, I'm tempted to apply for a job there right now.
The San Francisco Chronicle got an up-close and personal look at Delphi's partnership with Mobileye, and the self-driving cars that partnership has produced:
With the race to develop self-driving cars now at an all-out sprint, Delphi and Mobileye believe they possess an edge.
They have developed a system for crowdsourcing the hyper-detailed 3-D maps upon which autonomous vehicles rely. Millions of non-autonomous cars that use Mobileye cameras for lane keeping or collision prevention will create a constant stream of data to map roads and potential obstacles, even temporary ones such as road repair crews or double-parked cars.
Also, this:
"You can't develop autonomous cars that just follow all the rules, because they'll just clog cities," [Mobileye executive Dan] Galves said. "The point is really providing the intelligence and the rules of breaking the rules, if you will: providing some human intuition into the vehicles."