Lyft’s Autonomous Ridesharing Platform

nuTonomy is partnering with Lyft to launch self-driving cars in Boston this year.

While nuTonomy has been targeting self-driving cars in Boston for a while, this is great news for Lyft. Lyft continues to expand its platform as a provider of ridesharing infrastructure, while letting other companies figure out the actual autonomous technology.

Lyft turned its much-smaller-than-Uber size to its advantage here, by credibly committing not to develop autonomous vehicles. That presumably makes it a more attractive partner than Uber, which is developing its own self-driving technology and thus might have conflicts of interest.

I am on record as a vocal supporter of Uber ATG, whose staff have been terrific partners for the Udacity Self-Driving Car Nanodegree Program. But it also seems likely that all of the negative news coming out of Uber this year is causing other companies to second-guess partnerships or vendor-supplier relationships with Uber. Of course, that redounds to Lyft’s benefit.

By turning a presumed weakness to its advantage, and by avoiding unforced errors, Lyft is having a pretty great 2017.

How-To Guides from Udacity Self-Driving Car Students

Here are some great how-to guides from Udacity students! Everything from how to find a job to how to build a self-driving (miniature) car 🙂

Becoming a Self-Driving Car & Machine Learning Engineer

George Sung

George landed a job working on deep learning with BMW’s autonomous vehicle team in Silicon Valley! His stats on the hiring funnel are instructive for anybody interviewing in software, and especially in this industry.

“I had 9 interviews out of my ~90 job applications, i.e. around 10% of applications lead to interviews. In my mind this was a pretty good conversion rate. Out of those 9 interviews, 4 of them lead to final-round interviews: 2 final-round interviews for full-time jobs, 2 final-round interviews for internships. I did well on those 4 interviews as they all lead to offers.”

How I Landed My Dream Job Working On Self-driving Cars

Galen Ballew

Galen got a job working on autonomous vehicles at HERE’s Boulder, Colorado, office! It’s also a great example of how being flexible about roles (Galen is starting on the DevOps team) can help you get a foot in the door with autonomous vehicle teams.

“Mathematics is a wonderful thing, but it’s not very career specific. Just a few months after graduating, I made two very important decisions: to enroll at Metis and to enroll in the Udacity Self-driving Car Engineer Nanodegree (SDCEND). Both of these were instrumental in my career path, but the Udacity SDCEND was critical.”

Ubuntu + Deep Learning Software Installation Guide

Nick Condo

In the Udacity Self-Driving Car Nanodegree Program, we provide an AWS AMI that uses NVIDIA GPUs to accelerate deep learning. We don’t, however, explain how to set up this software on your own machine. Probably we should do that. In the meantime, Nick has this terrific guide.

“There are a number of good installation guides out there — particularly this one from floydhub that much of this is based on — but I found myself having to dig through many different resources to get everything installed properly. The goal of this article is to consolidate all the necessary resources into one place.”
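
Once everything is installed, a quick sanity check is to ask TensorFlow to enumerate its devices and look for the GPU. (A minimal sketch, assuming a TensorFlow backend; this isn’t from Nick’s guide.)

```python
# List the devices TensorFlow can see; a working CUDA install should
# produce a '/gpu:0' entry alongside the CPU.
from tensorflow.python.client import device_lib

print([d.name for d in device_lib.list_local_devices()])
```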

How I use Docker for Robotics Development

Jari Safi

“ros skillz pay Jari’s billz”, and here he walks through how to get ROS set up inside Docker containers.

“Image: This is essentially the “installation” of something that you want to run using Docker. An image contains all the data necessary to run containers. Images are hierarchical and a new image that shares information with an older one will not reproduce this information and instead just re-use it (i.e. if you have two Ubuntu based images with different software installed, they will both refer to the same base Ubuntu image rather than copy its contents). This is what people mean when they say that Docker’s filesystem is layered.”

Building Self-Driving RC Car Series #1 — Equipment & Plan

Yazeed Alrubyli

This is the first part of Yazeed’s multi-part series on how to build a deep-learning powered miniature autonomous vehicle. Super cool!

“I decided to build my first self-driving car, I mean RC Car 😅 . I think I already have the knowledge and tools to start crafting my RC’s future.”

Visiting Japan

Last week I was in Japan, meeting with Udacity students and with Japanese automotive companies. It was a lot of fun, and it was exciting to see the work that Japanese automotive companies are putting into autonomous vehicles!

Japan is home to a dozen large automotive manufacturers: Toyota, Honda, Nissan, Subaru, Mazda, and more. Supporting these manufacturers are suppliers large and small, together giving Japan the third-largest automotive industry in the world.

Japan’s automotive market is more dispersed than America’s, both organizationally and geographically. Whereas the US automotive industry is centered around Detroit, the Japanese automotive industry is spread all over the country. This gives the entire Japanese economy a bit of a Detroit-like feel: not everybody works in the automotive industry, but a lot of people do.

Localization (in the language sense, not in the lidar sense) is a big challenge for bringing the Udacity Self-Driving Car Nanodegree Program to Japan. English is not widely spoken in the country, but it seems to be more prevalent among software engineers, who need to at least read English to participate in cutting-edge projects and research. So in that sense, the Self-Driving Car program has an easier time than, say, Udacity’s Introduction to Programming Nanodegree Program.

One thing that really struck me in meeting with Udacity students in Japan is how important the Udacity student network can be. We hosted about 30 Self-Driving Car students in Tokyo, some of whom already worked in the automotive industry and some of whom were trying to break into that field. The students already in the field were eager to connect with the newcomers, which counts for a lot in a relatively small community of Udacity students.

That’s been one of our goals for the program since the beginning, that as Udacity students get jobs working on autonomous vehicles, they’ll want to pull in other Udacity students. It was fun to see that in operation in Tokyo.

Link Roundup

I was traveling last week (more on that soon) and fell way behind on autonomous vehicle news and on my own posts.

Here are some things I missed.

Keras is broadening the set of deep learning frameworks it supports. This is actually slightly old news, but it was pointed out to me recently. We use Keras in the Udacity Self-Driving Car Nanodegree Program.

HERE demos its next generation mapping vehicle. If you’re particularly interested in localization, the vehicle has a fancy LIDAR and DGPS setup.

Waymo is building self-driving trucks. The Google-Uber competition continues.

Ford plays catch-up on self-driving car technology. No, wait, another article says Ford is ahead! The truth is that Ford itself doesn’t really know for sure, because none of the car companies are releasing metrics in this area. The only companies that even have a clue about this, interestingly enough, are the automotive suppliers, since they see what everyone is doing.

Uber fires Anthony Levandowski. In hindsight, this seems inevitable, although I dislike the way the judge in the Waymo lawsuit basically ran roughshod over Levandowski’s Fifth Amendment rights.

Yandex is working on a self-driving taxi service. Of course they are.

Literature Review: MultiNet

Today I downloaded the academic paper “MultiNet: Real-Time Joint Semantic Reasoning for Autonomous Driving”, by Teichmann et al., as they say in the academy.

I thought I’d try to summarize it, mostly as an exercise in trying to understand the paper myself.

Background

This paper appears to originate out of the lab of Raquel Urtasun, the University of Toronto professor who just joined Uber ATG. Prior to Uber, Urtasun compiled the KITTI benchmark dataset.

KITTI has a great collection of images for autonomous driving, with corresponding leaderboards for various tasks, like visual odometry and object tracking. The MultiNet paper is an entry on the KITTI Lane Detection leaderboard.

Right now, MultiNet sits at 15th place on the leaderboard, but it’s the top entry that’s been formally written up in an academic paper.

Goals

Interestingly, the goal of MultiNet is not exactly to win the KITTI Lane Detection competition. Rather, it’s to train a network that can segment the road quickly, in real time. Adding complexity, the network also detects and classifies vehicles on the road.

Why not?

Architecture

The MultiNet architecture is three-headed. The beginning of the network is just VGG16, without the three fully connected layers at the end. This part of the network is the “encoder” part of the standard encoder-decoder architecture.

Conceptually, the “CNN Encoder” reduces each input image down to a set of features. Specifically, 512 features, since the output tensor (“Encoded Features”) of the encoder is 39x12x512.

For each region of an input image, this Encoded Features tensor captures a measure of how strongly each of 512 features is represented in that region.

Since this is a neural network, we don’t really know what these features are, and they may not even really be things we can explain. It’s just whatever things the network learns to be important.
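
To see where that 39x12x512 shape comes from, here’s a minimal sketch in Keras (which we use in the Nanodegree program). Note that Keras orders dimensions as height x width x channels:

```python
from keras.applications.vgg16 import VGG16

# include_top=False drops the three fully connected layers, keeping only
# the convolutional "encoder". VGG16's five rounds of 2x max pooling
# divide each spatial dimension by 32: 1248/32 = 39 and 384/32 = 12.
encoder = VGG16(weights='imagenet', include_top=False,
                input_shape=(384, 1248, 3))
print(encoder.output_shape)  # (None, 12, 39, 512)
```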

The three output heads are more complex.

Classification: Actually, I just lied. This output head is pretty straightforward. The network applies a 1×1 convolution to the encoded features (I’m not totally sure why), then adds a fully connected layer and a softmax function. Easy.

(Update: Several commenters have added helpful explanations of 1×1 convolutional layers. My uncertainty was actually more about why MultiNet adds a 1×1 convolutional layer in this precise place. After chewing on it, though, I think I understand. Basically, the precise features encoded by the encoder sub-network may not be the best match for classification. Instead, the classification output may perform best if the shared features are used to build a new set of features that is specifically tuned for classification. The 1×1 convolutional layer transforms the common encoded features into that new set of features specific to classification.)
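
To make that concrete, here’s a rough sketch of the classification head as I understand it. The layer width and class count are my guesses, not the paper’s:

```python
from keras.layers import Conv2D, Dense, Flatten, Input
from keras.models import Model

encoded = Input(shape=(12, 39, 512))  # the shared encoded features
# The 1x1 convolution re-mixes the 512 shared channels into a new,
# classification-specific feature set, without touching the spatial layout.
x = Conv2D(512, (1, 1), activation='relu')(encoded)
x = Flatten()(x)
scores = Dense(2, activation='softmax')(x)  # hypothetical number of classes
classifier = Model(encoded, scores)
```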

Detection: This output head is complicated. They say it’s inspired by YOLO and Faster R-CNN, and it involves a series of 1×1 convolutions that output a tensor containing bounding box coordinates.

Remember, however, the encoded features only have dimensions 39×12, while the original input image is a whopping 1248×384. Apparently 39×12 winds up being too small to produce accurate bounding boxes. So the network has “rezoom layers” that combine the first pass at bounding boxes with some of the less down-sampled VGG convolutional outputs.

The result is more accurate bounding boxes, but I can’t really say I understand how this works, at least on a first readthrough.
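
For what it’s worth, here’s a sketch of my reading of just that first pass, before the rezoom refinement. The channel counts and output layout are guesses:

```python
from keras.layers import Conv2D, Input
from keras.models import Model

encoded = Input(shape=(12, 39, 512))
# 1x1 convolutions let each of the 12x39 grid cells predict a candidate
# box for its region of the image.
x = Conv2D(256, (1, 1), activation='relu')(encoded)
# Per cell: a confidence score plus box coordinates (hypothetical layout).
boxes = Conv2D(5, (1, 1))(x)
detector = Model(encoded, boxes)  # output shape: (None, 12, 39, 5)
```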

Segmentation: The segmentation output head applies fully convolutional upsampling layers to blow the encoded features up from 39x12x512 to 1248x384x2, matching the original image resolution.

The “2” at the end is because this head actually outputs a mask, not the original image. The mask is binary and just marks each pixel in the image as “road” or “not road”. This is actually how the network is scored for the KITTI leaderboard.
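
Mechanically, the upsampling can be sketched with a transposed convolution. This compresses the paper’s series of upsampling layers into a single stride-32 step, so it’s illustrative only:

```python
from keras.layers import Conv2DTranspose, Input
from keras.models import Model

encoded = Input(shape=(12, 39, 512))
# Stride 32 takes 12x39 back up to 384x1248; the two output channels
# hold per-pixel "road" / "not road" scores.
mask_logits = Conv2DTranspose(2, (64, 64), strides=(32, 32),
                              padding='same')(encoded)
segmenter = Model(encoded, mask_logits)
print(segmenter.output_shape)  # (None, 384, 1248, 2)
```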

Training

The paper includes a detailed discussion of loss function and training. The main point that jumped out at me is that there are only 289 training images in the KITTI lane detection training set. So the network is basically relying on transfer learning from VGG.

It’s pretty amazing that any network can score at levels of 90%+ road accuracy, given a training set of only 289 images.

I’m also surprised that the 200,000 training steps don’t result in severe overfitting.

Summary

MultiNet seems like a really neat network, in that it accomplishes several tasks at once, really fast. The writeup is also pretty easy to follow, so kudos to them for that.

If you’re so inclined, it might be worth downloading the KITTI dataset and trying out some of this on your own.

Upcoming Live Events

I’ve got a few trips planned and I hope to meet current and prospective Udacity Self-Driving Car students along the way.

If you’re neither a current nor a prospective student, but would like to meet for another reason, just send me an email (david.silver@udacity.com).

Japan

It looks like a gathering of Udacity Self-Driving Car students in Tokyo will be happening on Wednesday, May 31, at EGG. More details to come in the #japan channel of the student Slack community, or ping me directly for details.

Washington, DC

I’m heading home to Virginia from June 7th through 11th. Still working on organizing a gathering while I’m there. Send me an email if you’re interested in attending.

Denver

I’ll be in Colorado for a week from late June to early July, and Autonomous Denver has graciously offered to help put an event together. More details to come on this, as well. If you’re interested, join the Autonomous Denver Meetup group, or send me an email directly.

All About Kalman Filters

Here is a collection of Udacity student posts, all about Kalman filters. Kalman filters are a tool that sensor fusion engineers use for self-driving cars.

Imagine you have a radar sensor that tells you another vehicle is 15 meters away, and a laser sensor that says the vehicle is 20 meters away. How do you reconcile those sensor measurements?

That’s what a Kalman filter does.
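
As a toy illustration of the core idea (not a full Kalman filter, and with made-up noise variances), you can weight each measurement by how much you trust it:

```python
# Two noisy range measurements of the same vehicle (variances assumed).
radar, radar_var = 15.0, 9.0  # radar: noisier range estimate
laser, laser_var = 20.0, 1.0  # laser: more precise

# Inverse-variance weighting -- the same compromise the Kalman gain makes.
fused = (radar / radar_var + laser / laser_var) / (1 / radar_var + 1 / laser_var)
fused_var = 1 / (1 / radar_var + 1 / laser_var)
print(fused, fused_var)  # 19.5, 0.9 -- much closer to the trusted laser
```

A real Kalman filter makes this same compromise recursively, folding in a motion model between measurements.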

Udacity Self-Driving Car Nanodegree Project 6 — Extended Kalman Filter

Jeremy Shannon

Jeremy has a really nice post on the intuition behind Kalman filters — why we use them and how they work. Plus the GIF is cool:

“It’s just a cycle of predict (“based on your previous motion, I’d expect you to be here n seconds later”) and measurement update (“but my sensor thinks you’re here”), from which a compromise is made and a new state vector and covariance matrix are determined”

Self-Driving Car Engineer Diary — 8

Andrew Wilkie

Actually, this quick post is as much about the deep neural networks Andrew experimented with between terms as it is about the Kalman filters from the beginning of Term 2. But he does highlight using behavior-driven development to build his Kalman filter pipeline, which is awesome:

“I added Behaviour Driven Development (BDD) tests using Catch to my project. While this took extra time to setup, I’ve seen the benefit of developer tests too many times to ignore them, especially when using verbose languages like C++.”

Sensor Fusion and Object Tracking using an Extended Kalman Filter Algorithm — Part 1

Mithi

This post by Mithi is a great place to look if you’re interested in the math behind all of the vectors and matrices that drive the Extended Kalman Filter.

“A lidar can measure distance of a nearby objects that can easily be converted to cartesian coordinates (px, py). A radar sensor can measure speed within its line of sight (drho) using something called a doppler effect. It can also measure distance of nearby objects that can easily be converted to polar coordinates (rho, phi) but in a lower resolution than lidar.”
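
The coordinate conversion Mithi mentions is just trigonometry. With made-up numbers:

```python
import math

# A radar return in polar coordinates (illustrative values).
rho, phi = 20.0, 0.5  # range in meters, bearing in radians

# Convert to the cartesian position a lidar would report directly.
px = rho * math.cos(phi)
py = rho * math.sin(phi)
print(px, py)  # ~17.55, ~9.59
```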

Kalman Filter, Extended Kalman Filter, Unscented Kalman Filter

Alena Kastsiukavets

Alena’s breakdown of the differences between Kalman Filters, Extended Kalman Filters, and Unscented Kalman Filters is terrific. Here’s the summary, but there’s a lot more at the link:

“In a case of nonlinear transformation EKF gives good results, and for highly nonlinear transformation it is better to use UKF.”

Kalman Filter: Predict, Measure, Update, Repeat.

Joshua Owoyemi

Joshua’s post takes the Kalman filter from the highest-level intuitions, through the mathematical theory, all the way to the algorithmic implementation.

“Kalman filter algorithm can be roughly organised under the following steps:
1. We make a prediction of a state, based on some previous values and model.
2. We obtain the measurement of that state, from sensor.
3. We update our prediction, based on our errors
4. Repeat.”
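
Those four steps fit in a few lines of illustrative Python. This is a 1D sketch of my own, not from Joshua’s post; real filters use state vectors and covariance matrices:

```python
def predict(x, p, motion, motion_var):
    # Step 1: push the state forward with the motion model; uncertainty grows.
    return x + motion, p + motion_var

def update(x, p, z, z_var):
    # Steps 2-3: fold in the measurement z; the gain k sets the compromise.
    k = p / (p + z_var)
    return x + k * (z - x), (1 - k) * p

x, p = 0.0, 1000.0  # initial state and (deliberately huge) uncertainty
for z in [5.0, 6.1, 7.2, 8.0]:  # step 4: repeat for each new measurement
    x, p = predict(x, p, motion=1.0, motion_var=2.0)
    x, p = update(x, p, z, z_var=4.0)
    print(x, p)
```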

Testing and Validating Autonomous Vehicles

University of Michigan researchers just released an exciting but vague (that’s the second time I’ve used that formulation recently) whitepaper on testing and validation for autonomous vehicles.

Testing is one of many open challenges in autonomous vehicle development. There’s no clear consensus on exactly how much testing needs to be done, how to do it, or how safe is safe enough.

Last year, RAND issued a report estimating that it would be basically impossible to empirically verify the safety of self-driving cars on any reasonable timeframe.

Ding Zhao and Huei Peng, from Michigan, claim to have found a way to cut the billion-plus miles of testing necessary by 99.9%. The four keys are:

  • Naturalistic Field Operational Tests
  • Test Matrix
  • Worst Case Scenario
  • Monte Carlo Simulation

The paper is light on details, but the approach seems to boil down to: drive dangerous situations again and again on a test track, instead of waiting for the dangerous situations to occur on the road, because that could take forever.

And that all seems smart enough. It’s like practicing three-point shots, instead of just mid-range jumpers. Or building exciting new software projects as a way to learn a new programming language, instead of just maintaining legacy code.
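
The Monte Carlo piece, at least, is easy to caricature: sample the dangerous scenario directly instead of waiting for it to happen on the road. A toy sketch, with a completely made-up scenario model:

```python
import random

def scenario_fails():
    # Hypothetical cut-in scenario: the sampled gap to the lead vehicle
    # is occasionally too small to brake safely.
    gap = random.gauss(10.0, 3.0)  # meters; invented distribution
    return gap < 2.0

trials = 100000
failures = sum(scenario_fails() for _ in range(trials))
print(failures / trials)  # estimated failure rate under this toy model
```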

But it’s less clear how to get from essentially “focused practice” to “the car is safe enough”. Perhaps another paper is forthcoming that makes that leap.

Mobility and Self-Driving Cars and Ford

Yesterday, Ford parted ways with CEO Mark Fields, and promoted Jim Hackett to the top spot.

Hackett is an interesting person for a lot of reasons. One is his one-year run as head of Ford Smart Mobility, LLC, immediately prior to this appointment.

I’ve seen news outlets reporting that Hackett was head of Ford’s autonomous vehicle program, but that’s not quite right.

Ford Smart Mobility is a mobility-focused subsidiary that looked at combining everything from bikes to van shuttles to trains to autonomous vehicles into a seamless mobility service.

The LLC is more of a small standalone business unit, whereas Ford’s autonomous vehicle team, headed by Randy Visintainer, is housed within Ford Motor Company proper.

This distinction raises the question of which is the key market — self-driving cars, or mobility as a service?

Traditional mobility has been delineated by different companies owning different modes of transportation — the railway company is different from the car company, which is different from the bike or bus company.

Will technology change that in the future? Or is the future pretty much about self-driving taxis, with people using bikes and trains and planes more or less as they always have?

I’m not quite sure. Certainly, as people give up their personal cars and rely on ridesharing, their perspective on other forms of transportation changes. If your self-driving taxi company can’t take you 200 miles to your weekend getaway, and you don’t have your own car, maybe you need a seamless solution. Or maybe you just call Hertz.

It’s not obvious that Ford or Hackett will bank on broad mobility over pure self-driving cars, but it seems like a possibility.