C++ vs. Python for Automotive Software

This afternoon I posted a long response to a question about how we will use C++ vs. Python in the Udacity Self-Driving Car Nanodegree Program, and how automotive engineers use those languages on the job.

You can read my full response, but here’s the part where I focus on how automotive engineers write software on the job:

Autonomous vehicle engineers on the job tend to use a variety of languages, depending on their team, their facility with different languages, the APIs their tools expose, and performance requirements.

C++ is a compiled, high-performance language, so most code that actually runs on the vehicle tends to be C++.

That said, many engineers spend most of their time prototyping algorithms in Python, Matlab, or even Java or other languages. Other engineers spend pretty much all of their time writing production code in C / C++.

Machine learning engineers often spend a lot of time in Python, because libraries like TensorFlow rely on Python for their primary APIs. TensorFlow does a lot of the heavy lifting in terms of compiling networks for faster performance.

Addendum: Since I have been asked several times recently, my favorite C++ book is Modern C++ Programming with Test-Driven Development, by Jeff Langr. Unfortunately, Udacity does not yet have a C++ course. There appear to be C++ courses on Coursera and edX but I have not reviewed them yet.

Making Motorcycling Safer

From the Department of Progress, Automotive News has a thinkpiece up about how self-driving cars will make motorcycling safer.

The improvement apparently will come largely from left turns:

This year, about 1,000 riders in the U.S. will lose their lives to the left turns of others. Cars traveling in the same direction as the motorcycle often don’t notice the bike overtaking on the left. Cars making a turn while coming from the opposite direction either fail to see the oncoming bike, or misjudge its speed.

And apparently this is a good time to buy Harley-Davidson stock:

Once every aspiring biker realizes that the driver next to him isn’t an existential threat, sales will climb in some places. Xavier Mosquet, a senior partner at Boston Consulting Group, said the bike boost will be most pronounced in markets such as the U.S., where people ride for fun, and in China and India, where many choose motorbikes because they are relatively inexpensive transportation.

Conversely, in such places as Europe, where motorcycles are often the best way to avoid traffic, self-driving cars may actually dent sales, according to Mosquet. If all goes as planned, there will be fewer tie-ups or accidents, less rubbernecking, and thus less to be gained by jumping on a bike and splitting lanes of standstill traffic.

Self-driving motorcycles, however, are still quite a ways off. Here’s a visual explanation of why.

In fairness, if I’ve heard Sebastian Thrun tell the story right, the head of that team was Anthony Levandowski, who went on to found Otto and now runs Uber’s self-driving car program. So he’s done well.

The Right Car, Right Now

Stability vs. Flexibility

Buying and renting anything — a home, a car, a movie — involves a tradeoff between stability and flexibility. Buying provides the stability of permanent ownership and availability, whereas renting provides the flexibility of adjustment to fit changing needs and wants.

The automotive market is moving from an ownership model to a rental model, as ride-sharing services push the stability-flexibility trade in favor of renting, rather than owning. And what we’ve seen with ride-sharing is just the tip of the iceberg. Self-driving cars will push this tradeoff an order of magnitude further.

Mass Customization

As consumers come to value flexibility in transportation, we can take lessons from the manufacturing industry on the practice of mass customization.

Today, car buyers have to purchase a one-size-fits-all vehicle. If I need to drive in snow twenty days a year, I might get a four-wheel drive vehicle, even though I would be better off with a compact car the other 345 days. Similar considerations govern the purchase of a car capable of occasional carpooling, or downtown parking, or a client visit.

In the self-driving car future, we’ll be able to rent the car we want, and the companies that win will get good at doing this really fast.

Need a minivan this morning? It’ll be there in 30 seconds.

Want a convertible this evening? It’ll be there in 45 seconds.

What Do People Want?

In this world, getting the right car to somebody’s door in 60 seconds or less might be the easy part. Mass customization has been studied and optimized and is mostly a solved problem.

The harder challenge is to figure out what people want.

We have some basic starting points: sedans, vans, SUVs, pickups, sports cars.

But these are all built for human drivers in a one-size-fits-all world.

In a mass customization world, we no longer have to make tradeoffs between scenarios. We can tune each vehicle option to a specific use case.

It could even be that we’ll hail one car service if we want a maneuverable short-haul vehicle, and a different service if we want a fast, long-haul vehicle.

What kinds of vehicles would you like to see in a self-driving world?

How do you envision the future of vehicle mass customization? Share your thoughts in the comments. Thanks!

Ride-Sharing Doesn’t Work with a Phone

The Cubs-Giants game went thirteen very long innings last night and ended in heartbreak (for me, at least), with the Giants knocking in a walk-off run in the bottom of the 13th inning.

It was also 11:45pm and the game had been going on for five hours.

As I stumbled out of the stadium, I realized my phone was totally dead. Five hours of emails and web browsing between innings had drained the battery.

If my phone had been working, I might have just hailed an Uber home and tucked into bed. But my phone wasn’t working.

No worries, though! In San Francisco, the train station is just blocks from the ballpark. I hustled over to Caltrain, waited forever for the train to leave, and then learned I had boarded the wrong train. The train I was riding wouldn’t make its first stop until 8 miles past my house.

I disembarked the first chance I could and walked into an empty parking lot at the Belmont Caltrain Station at 12:45am. No taxis.

A gas station light flickered across the street and I rolled over and begged the attendant to call a cab. No cabs available.

Then I bought a charger from the station’s inventory and hailed an Uber, which took twenty minutes to arrive since it was past midnight in the suburbs.

I finally tucked into bed at 1:30am.

So what’s the moral of the story?

Mostly that I shouldn’t have totally drained my phone battery, and I should look at train schedules.

But also that, in the days before ride-sharing, it was more common to have taxis circling around and you didn’t need a phone to hail them.

The world today is a better place because of Lyft and Uber, but it does require a phone to navigate.

The Race Continues

The self-driving car race continues, with new entrants signaling that California remains the location of choice (or at least “a” location of choice) for autonomous vehicle development.

Wheego, an Atlanta-based assembler of low-speed electric vehicles, has applied for a testing permit.

As has Valeo, a large French automotive supplier.

The encouraging sign here is that both Wheego and Valeo break the mold of previous California autonomous vehicle licensees. Whereas previous licensees have mostly been OEMs (Ford, Mercedes), tech companies (Google, Apple), or startups (Otto, Cruise, Drive.ai), these entrants are more esoteric.

Valeo, in particular, represents the wider automotive industry coming to Silicon Valley, which is terrific.

Google Hits Two Million Miles

Google’s Self-Driving Cars just hit two million miles of real-world, public road driving experience.

Dmitri Dolgov, Google’s Head of Self-Driving Car Technology, explains how one of the major challenges now is to pass what I might call the “automotive Turing test”:

Over the last year, we’ve learned that being a good driver is more than just knowing how to safely navigate around people, but also knowing how to interact with them.

In a delicate social dance, people signal their intentions in a number of ways. For example, merging into traffic during rush hour is an exercise in negotiation: I’d like to cut in. May I cut in? If I speed up a little and move into the lane, will you slow down and leave me room, or will you speed up? So much of driving relies on these silent conversations conducted via gentle nudge-and-response. Because we’ve observed or interacted with hundreds of millions of vehicles, pedestrians and cyclists, our software is much better at reliably predicting the trajectory, speed, and intention of other road users. Our cars can often mimic these social behaviors and communicate our intentions to other drivers, while reading many cues that tell us if we’re able to pass, cut in or merge.

Startup Watch: NAUTO

Advanced Driver Assistance startup NAUTO just announced splashy partnerships with BMW, Toyota, and the German insurance giant Allianz.

The Allianz participation is particularly interesting, because it touches on lots of the privacy concerns raised by autonomous vehicles. From NAUTO’s website:

NAUTO’s artificial intelligence-driven connected camera and smart-cloud provide auto insurers a complete, context-rich picture of driver behavior and fleet risk, in real-time. NAUTO detects driver attention, coaches drivers and warns of collisions, keeps fleet managers in touch with their drivers and helps them optimize vehicle deployment.

It sounds like NAUTO is helping insurance companies score drivers, which the insurers would presumably use to offer more customized rates.

This seems like it could be a huge win for any one insurer — get a leg up on the competition — but it might be a race to the bottom if every insurer is able to get their hands on some version of this data.

At some level, insurance pricing is based on the insurer taking on the risk in exchange for a fee. If there’s a lot less risk, mostly because drivers and insurers know more about each driver’s behavior, then there’s less need to pay insurers to manage the diminishing risk.

But I’m hardly an insurance executive and I’d be curious to learn their take on this.

Didi is Hiring

Our partner Didi Chuxing is the largest ride-sharing service in China. And it’s looking to hire self-driving car engineers!

Didi founder and CEO Cheng Wei said he is hunting for data scientists in Silicon Valley to develop a self-driving car. Didi Chuxing bought Uber China in a $35 billion deal over the summer.

Cheng added that he’s also been in talks with Gansha Wu, the former director of Intel Labs who also founded UiSee Technology, a Beijing-based self-driving car company.

Didi is a terrific partner and we are lucky to have their support.

Term 1: In-Depth on Udacity’s Self-Driving Car Curriculum

Update: Udacity has a new self-driving car curriculum! The post below is now out-of-date, but you can see the new syllabus here.

Last night we offered acceptances to thousands of students who are excited to join Udacity’s Self-Driving Car Nanodegree Program!

We are working hard to make this the world’s best training program for self-driving car engineers. The entire curriculum will consist of three terms over nine months. Here’s what’s in the program:

Term 1

Introduction

  1. Meet the instructors — Sebastian Thrun, Ryan Keenan, and myself. Learn about the systems that comprise a self-driving car, and the structure of the program.
  2. Project: Detect Lane Lines
    Detect highway lane lines from a video stream. Use OpenCV image analysis techniques to identify lines, including Hough transforms and Canny edge detection.
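To make the first project concrete, here’s a sketch of the core idea behind Hough line detection, using only NumPy. The project itself uses OpenCV (`cv2.Canny` and `cv2.HoughLinesP`); the synthetic edge map below just stands in for real Canny output.

```python
import numpy as np

def hough_lines(edge_map, n_thetas=180):
    """Accumulate votes for (rho, theta) line parameters from edge pixels."""
    h, w = edge_map.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0, np.pi, n_thetas, endpoint=False)
    accumulator = np.zeros((2 * diag, n_thetas), dtype=int)
    ys, xs = np.nonzero(edge_map)
    for x, y in zip(xs, ys):
        # Each edge pixel votes for every (rho, theta) line passing through it.
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        accumulator[rhos + diag, np.arange(n_thetas)] += 1
    return accumulator, thetas, diag

# Synthetic "edge map" with a single vertical line at x = 5,
# standing in for real Canny output.
edges = np.zeros((20, 20), dtype=bool)
edges[:, 5] = True

acc, thetas, diag = hough_lines(edges)
votes = acc[5 + diag, 0]  # votes for the line rho = 5, theta = 0
```

Peaks in the accumulator correspond to lines supported by many edge pixels — that voting scheme is why the Hough transform is so robust to gaps and noise in the detected edges.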

Deep Learning

  1. Machine Learning: Review fundamentals of machine learning, including regression and classification.
  2. Neural Networks: Learn about perceptrons, activation functions, and basic neural networks. Implement your own neural network in Python.
  3. Logistic Classifier: Study how to train a logistic classifier, using machine learning. Implement a logistic classifier in TensorFlow.
  4. Optimization: Investigate techniques for optimizing classifier performance, including validation and test sets, gradient descent, momentum, and learning rates.
  5. Rectified Linear Units: Evaluate activation functions and how they affect performance.
  6. Regularization: Learn techniques, including dropout, to avoid overfitting a network to the training data.
  7. Convolutional Neural Networks: Study the building blocks of convolutional neural networks, including filters, stride, and pooling.
  8. Project: Traffic Sign Classification
    Implement and train a convolutional neural network to classify traffic signs. Use validation sets, pooling, and dropout to choose a network architecture and improve performance.
  9. Keras: Build a multi-layer convolutional network in Keras. Compare the simplicity of Keras to the flexibility of TensorFlow.
  10. Transfer Learning: Fine-tune pre-trained networks to solve your own problems. Study canonical networks such as AlexNet, VGG, GoogLeNet, and ResNet.
  11. Project: Behavioral Cloning
    Architect and train a deep neural network to drive a car in a simulator. Collect your own training data and use it to clone your own driving behavior on a test track.
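As a taste of the neural-network lessons, here’s a minimal two-layer network written from scratch in NumPy. The architecture, learning rate, and XOR toy data are my own illustrative choices, not the course exercises.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic problem a single perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 tanh units, one sigmoid output.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 0.5
for _ in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))

    # Backward pass: the chain rule applied by hand.
    dz2 = 2 * (p - y) / len(X) * p * (1 - p)   # through the MSE loss and sigmoid
    dW2, db2_grad = h.T @ dz2, dz2.sum(axis=0)
    dz1 = dz2 @ W2.T * (1 - h ** 2)            # back through tanh
    dW1, db1_grad = X.T @ dz1, dz1.sum(axis=0)

    # Gradient descent step.
    W1 -= lr * dW1
    b1 -= lr * db1_grad
    W2 -= lr * dW2
    b2 -= lr * db2_grad
```

The backward pass is exactly the kind of bookkeeping that TensorFlow and Keras automate — which is why the curriculum moves from hand-rolled networks to those frameworks.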

Computer Vision

  1. Cameras: Learn the physics of cameras, and how to calibrate, undistort, and transform image perspectives.
  2. Lane Finding: Study advanced techniques for lane detection with curved roads, adverse weather, and varied lighting.
  3. Project: Advanced Lane Detection
    Detect lane lines in a variety of conditions, including changing road surfaces, curved roads, and variable lighting. Use OpenCV to implement camera calibration and transforms, as well as filters, polynomial fits, and splines.
  4. Support Vector Machines: Implement support vector machines and apply them to image classification.
  5. Decision Trees: Implement decision trees and apply them to image classification.
  6. Histogram of Oriented Gradients: Implement histogram of oriented gradients and apply it to image classification.
  7. Deep Neural Networks: Compare the classification performance of support vector machines, decision trees, histogram of oriented gradients, and deep neural networks.
  8. Vehicle Tracking: Review how to apply image classification techniques to vehicle tracking, along with basic filters to integrate vehicle position over time.
  9. Project: Vehicle Tracking
    Track vehicles in camera images using image classifiers such as SVMs, decision trees, HOG, and DNNs. Apply filters to fuse position data.
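Since several of these lessons hinge on the histogram of oriented gradients, here’s a bare-bones HOG sketch in NumPy. Real pipelines use tuned implementations such as `skimage.feature.hog` or OpenCV’s `HOGDescriptor`; the cell size and bin count below are illustrative.

```python
import numpy as np

def hog_features(img, cell=8, n_bins=9):
    """Return per-cell orientation histograms, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees, as in the original HOG formulation.
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    h, w = img.shape
    feats = []
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            m = mag[r:r + cell, c:c + cell].ravel()
            a = ang[r:r + cell, c:c + cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.array(feats)

# A vertical step edge: gradients point horizontally, so the energy
# should land in the orientation bin around 0 degrees.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
feats = hog_features(img)
```

The resulting feature vector is what gets fed to a classifier such as an SVM in the vehicle tracking project.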

Term 2

Sensor Fusion

Our terms are broken out into modules, which are in turn comprised of a series of focused lessons. This Sensor Fusion module is built with our partners at Mercedes-Benz. The team at Mercedes-Benz is amazing. They are world-class automotive engineers applying autonomous vehicle techniques to some of the finest vehicles in the world. They are also Udacity hiring partners, which means the curriculum we’re developing together is expressly designed to nurture and advance the kind of talent they would like to hire!

Lidar Point Cloud

Below please find descriptions of each of the lessons that together comprise our Sensor Fusion module:

  1. Sensors
    The first lesson of the Sensor Fusion Module covers the physics of two of the most important sensors on an autonomous vehicle — radar and lidar.
  2. Kalman Filters
    Kalman filters are the key mathematical tool for fusing together data. Implement these filters in Python to combine measurements from a single sensor over time.
  3. C++ Primer
    Review the key C++ concepts for implementing the Term 2 projects.
  4. Project: Extended Kalman Filters in C++
    Extended Kalman filters are used by autonomous vehicle engineers to combine measurements from multiple sensors into a non-linear model. Building an EKF is an impressive skill to show an employer.
  5. Unscented Kalman Filter
    The unscented Kalman filter is a mathematically sophisticated approach for combining sensor data. The UKF performs better than the EKF in many situations. This is the type of project sensor fusion engineers have to build for real self-driving cars.
  6. Project: Pedestrian Tracking
    Fuse noisy lidar and radar data together to track a pedestrian.
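To give a flavor of the Kalman filter lesson, here’s a one-dimensional filter in Python. The noise values and measurements are invented for illustration, not data from the project.

```python
def kalman_1d(measurements, meas_var, process_var, x0=0.0, p0=1000.0):
    """Track a scalar state; return the (estimate, variance) after each step."""
    x, p = x0, p0
    history = []
    for z in measurements:
        # Measurement update: blend prediction and measurement by their variances.
        k = p / (p + meas_var)      # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        # Prediction (motion) update: uncertainty grows with process noise.
        p = p + process_var
        history.append((x, p))
    return history

steps = kalman_1d([5.0, 6.0, 7.0, 9.0, 10.0], meas_var=4.0, process_var=2.0)
estimate, variance = steps[-1]
```

Note how the variance shrinks from the huge initial prior toward a steady state: the filter weighs each new measurement against how confident it already is.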

Localization

This module is also built with our partners at Mercedes-Benz, who employ cutting-edge localization techniques in their own autonomous vehicles. Together we show students how to implement and use foundational algorithms that every localization engineer needs to know.

Particle Filter

Here are the lessons in our Localization module:

  1. Motion
    Study how motion and probability affect your belief about where you are in the world.
  2. Markov Localization
    Use a Bayesian filter to localize the vehicle in a simplified environment.
  3. Egomotion
    Learn basic models for vehicle movements, including the bicycle model. Estimate the position of the car over time given different sensor data.
  4. Particle Filter
    Use a probabilistic sampling technique known as a particle filter to localize the vehicle in a complex environment.
  5. High-Performance Particle Filter
    Implement a particle filter in C++.
  6. Project: Kidnapped Vehicle
    Implement a particle filter to take real-world data and localize a lost vehicle.
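Here’s a toy one-dimensional particle filter to illustrate the predict–weight–resample loop. The motion and measurement models below are invented for illustration, not the project’s actual setup.

```python
import numpy as np

rng = np.random.default_rng(42)

true_pos = 10.0
n_particles = 1000

# 1. Initialize particles uniformly over the world (we're "kidnapped").
particles = rng.uniform(0.0, 100.0, n_particles)

for _ in range(10):
    # 2. Motion update: the vehicle moves +1 per step, with noise.
    true_pos += 1.0
    particles += 1.0 + rng.normal(0.0, 0.5, n_particles)

    # 3. Measurement update: weight each particle by the likelihood of a
    #    noisy position measurement (Gaussian sensor model, sigma = 2).
    z = true_pos + rng.normal(0.0, 2.0)
    weights = np.exp(-0.5 * ((particles - z) / 2.0) ** 2)
    weights /= weights.sum()

    # 4. Resample particles in proportion to their weights.
    particles = rng.choice(particles, size=n_particles, p=weights)

estimate = particles.mean()
```

After a few iterations the particle cloud collapses around the true position — the same behavior the C++ project exhibits against real-world map and sensor data.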

Control

This module is built with our partners at Uber Advanced Technologies Group. Uber is one of the fastest-moving companies in the autonomous vehicle space. They are already testing their self-driving cars in multiple locations in the US, and they’re excited to introduce students to the core control algorithms that autonomous vehicles use. Uber ATG is also a Udacity hiring partner, so pay attention to their lessons if you want to work there!


Here are the lessons:

  1. Control
    Learn how control systems actuate a vehicle to move it on a path.
  2. PID Control
    Implement the classic closed-loop controller — a proportional-integral-derivative control system.
  3. Linear Quadratic Regulator
    Implement a more sophisticated control algorithm for stabilizing the vehicle in a noisy environment.
  4. Project: Lane-Keeping
    Implement a controller to keep a simulated vehicle in its lane. For an extra challenge, use computer vision techniques to identify the lane lines and estimate the cross-track error.
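To illustrate the PID lesson, here’s a minimal controller driving a cross-track error (CTE) toward zero. The gains and the toy vehicle model are illustrative choices, not tuned values from the course.

```python
class PID:
    """Classic proportional-integral-derivative controller."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def control(self, error, dt=1.0):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # P fights the current error, I removes steady-state bias, D damps overshoot.
        return -(self.kp * error + self.ki * self.integral + self.kd * derivative)

# Toy plant: each steering command directly nudges the cross-track error.
pid = PID(kp=0.2, ki=0.001, kd=0.3)
cte = 1.0
history = [cte]
for _ in range(100):
    cte += pid.control(cte)
    history.append(cte)
```

Even in this crude simulation you can see the tuning tradeoff: too much proportional gain oscillates, too much derivative gain overreacts to noise, and the integral term slowly cancels any persistent offset.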

Term 3

Path Planning

Elective

Systems


Term 2 and Term 3 are under construction and we’ll share more details on those as we finalize the curriculum and projects.

[Update: Term 2 and Term 3 are live!]

All of this, including Term 1, is subject to change as we update the curriculum over time, because part of building a great course is taking feedback and making improvements!

If you’ve been accepted into the course, congratulations! We are excited to teach you.

If we suggested you brush up on a few topics and take a self-assessment before joining the course, please do! We are excited to teach you and want to make sure you have a great experience.

And if you haven’t yet applied, please do! We are taking applications for the 2017 cohorts and would love to have you in the class.