Independent Self-Driving Car Projects

Hiring partners tell us all the time that they want candidates who are excited about the field of autonomous vehicles. That’s part of what makes the Udacity Self-Driving Car Engineer Nanodegree Program so impressive — students from around the world have sought out the program in order to learn about the field.

In addition to the twelve different projects students must pass to earn the Nanodegree credential, many of our students go even further and build independent projects of their own.

Here are a few projects that different students have undertaken. Maybe they can inspire you to build your own independent project!

Lane Detection with Deep Learning (Part 1)

Michael Virgo

Michael is a student in both the Udacity Self-Driving Car Nanodegree Program and the Udacity Machine Learning Nanodegree Program. For his MLND capstone project, he built a neural network to detect lanes on the road.

This blog post is a two-part series. Part 1 is all about collecting and labeling data, which is a major task in any machine learning project. In case the suspense is killing you, here’s Part 2, in which Michael uses convolutional layer visualization, transfer learning, and finally a segmentation network to build a lane-finding model.

Building Self-Driving RC Car Series #1 — Equipment & Plan

Yazeed Alrubyli

For anybody who is interested in building their own mini self-driving car, Yazeed has put together a five-part series on how he built his. Part 1: Equipment & Plan. Part 2: Hardware Setup. Part 3: Manual Control Using Raspberry Pi & Python. Part 4: Everything In Place. Part 5: Serverless Control Using Computer Vision 🙂

Building a Bayesian deep learning classifier

Kyle Dorman

Kyle wrote up a deep and detailed blog post about modifying deep neural networks to incorporate uncertainty. Uncertainty is a core component of Bayesian logic, and we use uncertainty in algorithms like Kalman filters, which are crucial for fusing data from multiple sensors. Kyle follows guidance from the machine learning group at Cambridge University to compare differences in softmax activation functions and ultimately develop a confidence measure for classification values.
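One widely used technique from that Cambridge group is Monte Carlo dropout: leave dropout enabled at inference time and read uncertainty off the spread of repeated stochastic predictions. Here’s a minimal Keras sketch of that idea (a toy model for illustration, not Kyle’s actual code):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Toy classifier; the important piece is the Dropout layer, which we
# deliberately keep active at prediction time.
model = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(64,)),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])

def mc_dropout_predict(model, x, n_samples=50):
    """Run several stochastic forward passes and treat the spread of the
    softmax outputs as a rough per-class confidence measure."""
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

# A high spread on the winning class suggests the model is guessing.
probs, spread = mc_dropout_predict(model, np.random.rand(1, 64).astype("float32"))
```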

Build your own self driving (toy) car

Bogdan Djukic

Bogdan constructed his own mini self-driving car using the Donkey hardware, but then built his own software stack. He got ROS running on a Raspberry Pi (!!) and trained a behavioral cloning neural network.
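For anyone unfamiliar with the term, behavioral cloning here means learning to map camera images directly to steering commands from recorded human driving. A rough Keras sketch of that kind of end-to-end network (the architecture is illustrative, not Bogdan’s actual stack):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative end-to-end model: camera frames in, steering angle out.
model = tf.keras.Sequential([
    layers.Conv2D(24, 5, strides=2, activation="relu", input_shape=(66, 200, 3)),
    layers.Conv2D(36, 5, strides=2, activation="relu"),
    layers.Conv2D(48, 5, strides=2, activation="relu"),
    layers.Flatten(),
    layers.Dense(100, activation="relu"),
    layers.Dense(1),  # predicted steering angle
])
model.compile(optimizer="adam", loss="mse")
# Trained on pairs of (recorded camera frame, human steering angle),
# then run live on the car to steer it.
```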

HomographyNet: Deep Image Homography Estimation

Mez Gebre

Mez implemented a paper from the team at Magic Leap on estimating homography with deep learning. A homography is a mapping between two different perspectives of the same scene. So if you take a photo of a statue from the north side, and one from the south side, can you tell that it’s the same statue, and can you figure out how to generate an image from the east or west side? Magic Leap is an augmented reality company, and you can see why this would be an important skill in virtual and augmented worlds.
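For contrast with the deep learning approach in the paper, here’s a minimal sketch of the classical OpenCV pipeline that HomographyNet aims to replace: match features between two views, then estimate the 3x3 homography with RANSAC (the file names are hypothetical):

```python
import cv2
import numpy as np

img1 = cv2.imread("north_view.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("south_view.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and match local features between the two views.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

# Estimate the 3x3 homography mapping view 1 onto view 2; RANSAC
# rejects bad matches.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Re-render view 1 from view 2's perspective.
warped = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
```

HomographyNet’s pitch is that a network can regress H directly from the image pair, skipping the brittle feature-matching step.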

me Convention in September

I’ll be in Frankfurt, Germany, this September at the me Convention, an event hosted by our partners at Mercedes-Benz and held concurrently with the enormous International Motor Show there.

I’m excited to be on a panel discussing Teaching Machines to Drive Like Humans. I suppose at this point my expertise is more on Teaching Humans to Teach Machines to Drive Like Humans, but I’ll try to add value anyway.

The me Convention runs from September 15th to 17th, and the International Motor Show runs ten days, from September 14th to 24th.

This will be my first visit to Germany and I’m looking forward to meeting people!

If you’re a current or prospective student in the Udacity Self-Driving Car Nanodegree Program and you’ll be in the area, let me know in the comments or by email (david.silver@udacity.com). We’re planning to organize an event or multiple events for Udacity students while I’m in Europe.

And if you’ll be there and you’d like to hire Udacity students, send me an email (david.silver@udacity.com) and I’d love to meet you, too!

Lyft Builds a Self-Driving Car Team

A few weeks ago, I wrote about Lyft’s strategy of using their ride-sharing network as a platform for other companies’ autonomous vehicles, and the contrast this strategy drew with Uber, which is developing its own AVs. It seemed like Lyft’s strategy was playing out nicely.

Maybe Lyft thinks otherwise now.

The company announced a new autonomous vehicle team that plans to scale up to several hundred engineers. It’s not totally clear exactly what this team is going to do: build an actual autonomous vehicle, or simply provide supporting infrastructure for other companies’ vehicles.

But it’s more evidence that everybody wants to, and feels the need to, develop their own autonomous vehicles.

Discovery Week at Udacity

This week is Discovery Week at Udacity. If you apply to the Self-Driving Car Nanodegree Program this week (and get accepted and then enroll), you’ll save $200!

For our subscription Nanodegree Programs, like Machine Learning, Data Analyst, and Full-Stack Web Developer, you’ll save 50% on the first two months (which also equals $200).

If you’ve been curious about how the Udacity education system works, this week is a great time to give it a shot.

Udacity Students Review the Self-Driving Car Program

Part of what makes Udacity special is how seriously we take student feedback, and, I think, how transparent we are about it. For the Self-Driving Car Nanodegree Program, we solicit student ratings at the end of every lesson, we talk with students in our Slack community, and we have a Waffle board where students report issues for us to address.

Students are our partners in building the world’s best autonomous vehicle educational program, and we’re always eager to learn what they think.

With that in mind, here are four reviews of the Udacity Self-Driving Car Engineer Nanodegree Program.

At the end of Term 1 — Udacity’s SDCND

Vishal Rangras

Vishal collects all of his Term 1 projects and reviews the key topics for each, as well as some of his results from the program so far. Great music underscoring his YouTube videos!

Milestones achieved so far:

Successfully completed Term 1

Got two interview calls for SDC job profile based on the skills learnt in the program

Started a small Artificial Intelligence Community in my current organization to share knowledge and make developers aware of these cutting-edge technologies.

Will commence Term 2 in the month of July.

A Review of Udacity’s Self-Driving Car Engineer Nanodegree — Second Term

Mithi

Mithi walks through each module of Term 2 and describes the good (the material is rigorous and exciting!), the bad (C++ is hard), and the ugly (there are still some issues we need to fix). This type of enthusiastic and constructive critique is super-valuable to us in improving the Nanodegree Program.

Some people think that the first term had more material covered than this term and that you don’t need as many hours per week. I personally think I spent significantly more hours in this term than last term. Maybe it’s because I’m not as experienced in C++ and I didn’t do the `bonus challenges` of last term.

Udacity’s Self-Driving Car Nanodegree — Term 1

Darien Martinez

Darien provides an interesting perspective as a student who has been working with signal processing for many years. He is really impressed by the image processing power provided by newer tools like Keras, TensorFlow, and OpenCV. It’s a lot of fun for us to read about students who enjoy the material this much.

This course was a lot of fun. Multiple new techniques explained and understood… to some extent, at least for me. This is just the tip of the iceberg in this field. It was a great experience, and I am looking forward to next term starting next week. It was a lot of work (more than the 10 hours per week forecasted by Udacity) but it was worth every cent.

Blog posts by Udacity’s Self-Driving Car students

Frank Kanis

This isn’t precisely a “review” of the Self-Driving Car program, but Frank has put together a comprehensive list of student blog posts about each project in the Nanodegree Program. If you’re interested in reading about how different students approached a project, check it out!

Self-Driving Cars the World Over

As self-driving cars move closer and closer to reality, we’re seeing more and more places around the world where people are working on them.

Some of these efforts are big. Some are small but growing. More will come.

It’s an exciting time to be in the business.

Level 3: The Audi A8

Audi has announced Level 3 autonomous driving functionality in the upcoming 2018 A8 model. This would make Audi the first car manufacturer ever to release a Level 3 vehicle.

As a brief recap, the Society of Automotive Engineers defines six autonomy levels, from Level 0 (no automation at all) through Level 5. Here are Levels 1 through 5:

Level 1 — Driver Assistance: The driving mode-specific execution by a driver assistance system of either steering or acceleration/deceleration using information about the driving environment and with the expectation that the human driver performs all remaining aspects of the dynamic driving task

Level 2 — Partial Automation: The driving mode-specific execution by one or more driver assistance systems of both steering and acceleration/deceleration using information about the driving environment and with the expectation that the human driver performs all remaining aspects of the dynamic driving task

Level 3 — Conditional Automation: The driving mode-specific performance by an Automated Driving System of all aspects of the dynamic driving task with the expectation that the human driver will respond appropriately to a request to intervene

Level 4 — High Automation: The driving mode-specific performance by an Automated Driving System of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene

Level 5 — Full Automation: The full-time performance by an Automated Driving System of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver

The controversial phrase in the Level 3 definition is:

“with the expectation that the human driver will respond appropriately to a request to intervene”

Some companies — most notably Google and Ford — contend that it’s not realistic to tell human drivers that they can divert their attention and then expect them to intervene quickly enough to avert an accident.

Audi seems more confident about human drivers, although they are rolling their system out slowly, presumably in an effort to better test and verify the car and the drivers.

Full Level 3 autonomous driving will be limited to divided highway scenarios at under 60 km/h (~37 mph). Basically, traffic jam driving. Which is the worst type of driving, so I look forward to the day when the computer takes that over in my own car.

The 2018 Audi A8 isn’t actually on the market yet, although it should be soon, and its price will start at €90,600 (~US$103,000). Definitely a luxury vehicle, and an exciting one.

SynCity Simulator

My colleague Aaron pointed me toward a YouTube video of what looks like a pretty awesome photorealistic simulator called SynCity. It’s built by the artificial intelligence company CVEDIA, in Holland.

The photorealism of the simulator takes it to a whole new level beyond comparable simulators I’ve seen previously. At least, judging by what’s shown in the YouTube video 😉

This is the dream of autonomous vehicle simulators — that we’ll be able to take data derived from the simulator and transfer it to the real world. Particularly for computer vision, the closer the simulator looks to reality, the more likely that is.

CVEDIA appears to be previewing the simulator now, and I’m not sure when it will hit production release. Keep an eye out.

NDT Matching

In the final project of the Udacity Self-Driving Car Nanodegree Program, students write code to drive Udacity’s very own self-driving car.

As with almost any type of computer programming, however, we’re not starting from scratch. There are existing operating systems, middleware, and libraries that students get to build on to drive the car.

One of these libraries is Autoware, which is an open-source self-driving car library maintained by Tier IV. We use Autoware particularly for its localization functions, which use our lidar data and a high-definition lidar map to figure out where our vehicle is in the world.

The specific localization algorithm that Autoware uses is called normal distributions transform (NDT) matching, originally developed by Peter Biber at the University of Tübingen. NDT is a little different from the particle filter localization we’ve worked with previously, so I’ve spent time over the last few days reviewing how it works.

Localization

In order to figure out where we are in the world, we’ll probably use a map. There’s a whole branch of localization called simultaneous localization and mapping (SLAM), in which we build the map and localize within it at the same time, but that’s difficult. It’s easier just to have a map, so we’ll assume we have one.

[Figure: a lidar point cloud map of the Udacity parking lot, tilted at an angle.]

In order to figure out where we are in the world, we take our own lidar scan and compare what we see to this map. You can basically imagine that we line up points and try to figure out: given what our current laser scan shows, where are we in this map?

One problem: our points will probably be a little off from the map. Measurement errors will cause points to be slightly misaligned, plus the world might change a little between when we record the map and when we make our new scan.

NDT matching provides a solution for these minor errors. Instead of trying to match points from our current scan to points on the map, we try to match points from our current scan to a grid of probability distributions created from the map.

[Figure: a probability density function.]

We break the point cloud map into three-dimensional boxes and essentially assign a probability distribution to each box. The image above shows a 2D probability density function, but we can build a 3D function following the same principles.

This way, if we detect a point a few millimeters away from where the map thinks a point should be, instead of being completely unable to match those two points, our NDT matching function connects our detected point to the probability function on the map. There’s a kind of “near match”.
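To make that concrete, here’s a minimal numpy sketch of the two NDT ingredients described above: fitting a Gaussian to the map points in each 3D box, and scoring a scan against those Gaussians. The voxel size and point threshold are arbitrary choices, and this is just the scoring step, not Autoware’s implementation:

```python
import numpy as np
from collections import defaultdict

VOXEL = 1.0  # cell edge length in meters (assumed)

def build_ndt_grid(map_points):
    """Group map points into 3D boxes and fit a Gaussian (mean, covariance)
    to the points in each box."""
    cells = defaultdict(list)
    for p in map_points:                        # map_points: (N, 3) array
        cells[tuple((p // VOXEL).astype(int))].append(p)
    grid = {}
    for idx, pts in cells.items():
        pts = np.array(pts)
        if len(pts) < 5:                        # too few points to fit a covariance
            continue
        cov = np.cov(pts.T) + 1e-6 * np.eye(3)  # regularize for invertibility
        grid[idx] = (pts.mean(axis=0), np.linalg.inv(cov))
    return grid

def ndt_score(grid, scan_points):
    """Score a candidate-pose-transformed scan: each point contributes the
    Gaussian likelihood of the box it lands in, so a point a few millimeters
    off from the map still produces a 'near match'."""
    score = 0.0
    for p in scan_points:
        cell = grid.get(tuple((p // VOXEL).astype(int)))
        if cell is None:
            continue
        mean, cov_inv = cell
        d = p - mean
        score += np.exp(-0.5 * d @ cov_inv @ d)
    return score
```

A full implementation (such as the one in PCL that Autoware builds on) then optimizes the vehicle pose, typically with Newton’s method, to maximize a score like this one.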

For anybody who’s taken Udacity’s lessons on particle filters, or studied them elsewhere, there is a whole separate issue of the Monte Carlo randomization that particle filters use. It seems like that could be applied to NDT matching in much the same fashion, and indeed there is a paper called “Normal Distributions Transform Monte-Carlo Localization (NDT-MCL)” by Saarinen et al. that seems to work out the details, although I haven’t gone through it closely.