Udacity Students on Deep Learning and Jobs

Want to get a job working on self-driving cars? Read on.

How I Landed My Dream Job Working On Self-driving Cars

Galen Ballew

The guiding star of the Udacity Self-Driving Car Nanodegree Program is to prepare students for jobs working on autonomous vehicles. So we were excited that Galen found his dream job working on autonomous vehicles for HERE in Boulder, Colorado. He also gives lots of credit to Udacity, which is generous of him 🙂

“The private Slack channel for students is filled with a tangible excitement. I’ve never been a part of such a large student body, let alone a student body that is committed to the success of every student (no grading curve here). Between Slack, the dedicated forums, and your own private mentor, there is no reason to be stuck on a problem — there are so many people willing to help answer your questions. Instead, you can focus on finding your own way to improve the foundations of the projects.”

Self-driving Cars — Deep neural networks and convolutional neural networks applied to clone driving behavior

Ricardo Zuccolo

Ricardo provides a thorough rundown of his Behavioral Cloning project, which drives successfully on both simulator tracks. He synthesized and built on the insights of earlier Udacity students:

“My first step was to evaluate the driver log steering histograms for 100 bins and do all required transformation, drop and augmentation to balance it. Here I followed the same methodology as in the well explained pre-processing from Mez Gebre, thanks Mez!”
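The balancing step Ricardo describes — histogram the steering angles into many bins, then drop samples from over-represented bins — can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not Ricardo's or Mez's actual code; the bin count and per-bin cap are illustrative:

```python
import numpy as np

def balance_steering(angles, n_bins=100, max_per_bin=200, seed=0):
    """Cap the number of samples per steering-angle bin to flatten the
    histogram. Returns the indices of the samples to keep."""
    rng = np.random.default_rng(seed)
    angles = np.asarray(angles)
    edges = np.linspace(angles.min(), angles.max(), n_bins + 1)
    # Assign each angle to one of n_bins bins via the interior edges
    which_bin = np.digitize(angles, edges[1:-1])
    keep = []
    for b in range(n_bins):
        idx = np.flatnonzero(which_bin == b)
        if len(idx) > max_per_bin:
            # Randomly drop samples from over-represented bins
            idx = rng.choice(idx, size=max_per_bin, replace=False)
        keep.extend(idx.tolist())
    return np.sort(np.array(keep))

# A heavily zero-biased steering log, like most recorded driving data
angles = np.concatenate([np.zeros(5000),
                         np.random.default_rng(1).normal(0, 0.3, 1000)])
kept = balance_steering(angles, n_bins=100, max_per_bin=200)
```

After balancing, the near-zero spike that dominates recorded driving is capped, so the network no longer learns to just drive straight.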

Behavioral Cloning: Tiny Mistake Cost Me 15 days

Yazeed Alrubyli

Yazeed was struggling with the Behavioral Cloning Project when he realized he mixed up his colorspaces. Students on Slack pointed out that this might have been because I mixed up the colorspaces in some demo code. Oops!

“I spend about 15 days training my network over and over in the third project of the Self-Driving Cars Engineer NanoDegree by Udacity, it drives me crazy because it can’t keep itself on the road. I reviewed the code hundreds of times and nothing wrong with it. Telling you the truth, when I almost gave up, I read an article that has nothing to do with this project. It mentions that OpenCV read images as BGR (Blue, Green, Red) and guess what, the drive.py uses RGB. What The Hell I’m doing for 2 weeks !!!”
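The bug is easy to hit: OpenCV's `imread` returns channels in BGR order, while `drive.py` (and most other imaging tools) expect RGB. The fix is one channel reversal — here sketched with NumPy so it stands alone, though `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)` does the same thing:

```python
import numpy as np

def bgr_to_rgb(img):
    """Reverse the channel axis. OpenCV's imread returns BGR, but most
    training pipelines (and the simulator's drive.py) expect RGB.
    Equivalent to cv2.cvtColor(img, cv2.COLOR_BGR2RGB)."""
    return img[..., ::-1]

# A 1x1 pure-blue pixel as OpenCV would load it: (B, G, R) = (255, 0, 0)
bgr = np.array([[[255, 0, 0]]], dtype=np.uint8)
rgb = bgr_to_rgb(bgr)
# rgb[0, 0] is now (R, G, B) = (0, 0, 255)
```

If training images and inference images pass through different loaders, make sure this conversion happens in exactly one of the two paths — that is the mismatch that cost Yazeed two weeks.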

What i have learned from the first term of Udacity Self driving car nanodegree program.

Hadi N. Abu-Snineh

Hadi went back and reviewed everything he learned during Term 1. He included all of his project videos, which are awesome!

“The last and fifth project of the first term is to write a program to detect vehicles by drawing a bounding box on each detected vehicle. the project is done using a Support Vector Machine SVM that is a kind of classifier that is used to classify and differentiate between different classes. In this case, the classifier takes multiple features of images as inputs and learns to classify them into two classes, cars and non cars.”
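The car/non-car classifier Hadi describes follows a standard scikit-learn pattern: extract feature vectors, scale them, and fit a linear SVM. This is a minimal sketch with random stand-in features (a real project extracts HOG and color features from labeled 64×64 image patches):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-ins for image feature vectors; real projects use HOG + color features
car_features = rng.normal(1.0, 0.5, size=(200, 32))
notcar_features = rng.normal(-1.0, 0.5, size=(200, 32))

X = np.vstack([car_features, notcar_features])
y = np.hstack([np.ones(200), np.zeros(200)])   # 1 = car, 0 = not car

# Scale features to zero mean / unit variance, then fit the linear SVM
scaler = StandardScaler().fit(X)
clf = LinearSVC().fit(scaler.transform(X), y)

accuracy = clf.score(scaler.transform(X), y)
```

At detection time, the same classifier is slid across windows of the video frame, and a bounding box is drawn wherever it predicts "car".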

Generating Faces with Deep Convolutional GANs

Dominic Monn

Dominic wrote up the generative adversarial network that he trained for the Udacity Deep Learning Nanodegree Foundations Program. Normally I downvote blog posts from other programs, but Dominic is a Self-Driving Car student and DLFND is great and GANs are very cool, so I’ll let it slide.

“The training took approximately 15–20 minutes and even though the first few iterations looked like something demonic — the end result was fine.”

Udacity at NVIDIA GTC

Udacity will be at NVIDIA’s GPU Technology Conference next week in San Jose!

If you’ll be there, please stop by to say hello. There will be a car display, plus instructors and students talking about the Self-Driving Car Nanodegree Program.

Also, I’ll be presenting at 4:30pm.

There are still tickets left to the conference if you’d like to register! If you’re a Udacity student, email me (david.silver@udacity.com) for the student discount code.

Try the Self-Driving Car Nanodegree Program for Free!

Applications close on Monday for the next cohort of the Udacity Self-Driving Car Engineer Nanodegree Program.

So between now and Monday, we have opened up the very first module of the program for free!

Visit this link to try it out: https://classroom.udacity.com/courses/ud013-preview

You can learn about how self-driving cars work at a high-level, as well as dive into a mini-lesson about using OpenCV to find lane lines on the road.

Have fun, and apply to join us!

Q&A with Sebastian Thrun

Sebastian Thrun took 20 questions from students around the world about self-driving cars, the Nanodegree Program, and his thoughts about the future.

Watch what the winner of the DARPA Grand Challenge, the founder of the Google Self-Driving Car Project, and the President of Udacity has to say!

Also, Monday is the last day for application for the upcoming cohort of the Udacity Self-Driving Car Nanodegree Program. Apply now!

Learn from these 5 Udacity Students!

Here are posts from five Udacity Self-Driving Car students, sharing what they’ve learned about the program, their projects, Docker, and even how to hack your own car!

Review del Nano Degree de Udacity sobre conducción autónoma

Andres

Andres provides the most comprehensive review (in Spanish) of the Self-Driving Car Nanodegree Program that I have seen yet. He covers the forums, the mentors, the hiring partners, the classes, and all of the projects. It’s a very positive review, which is flattering:

“Behavioral Cloning is one of the projects I have ever learned the most from, and one where I saw both the power and the difficulty of designing and training neural networks. The Nanodegree is worth doing for this project alone.”

Starting Udacity Self Driving Car Nanodegree with Docker

Gungor Basa

Gungor provides a concise tutorial for students looking to spin up Docker for the Self-Driving Car Nanodegree Program:

“I just realized there are still a lot of people having problem with Docker and starter kit for Self Driving Car Nanodegree program. In this post, I will give you a step by step guide.”

Making A Virtual Self-Driving Car

Muddassir Ahmed

Muddassir covers some really cool data augmentation he performed on his Behavioral Cloning training set. By the end, his network is able to drive multiple laps around the crazy jungle track!

“I used a python generator in order to feed training batches to the network. The generator I designed also augments the data before generating a batch. I apply different types of augmentation to the data such as varying the brightness, color saturation, adding random shadows, translations, and horizontal flips to the images.”
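A generator like the one Muddassir describes is a small amount of code. Here is a minimal NumPy sketch, not his actual implementation, showing two of the augmentations on the fly: a horizontal flip (which must also negate the steering angle) and random brightness scaling. Shadows and translations follow the same per-sample pattern:

```python
import numpy as np

def batch_generator(images, angles, batch_size=32, seed=0):
    """Endlessly yield augmented (image, steering-angle) batches."""
    rng = np.random.default_rng(seed)
    n = len(images)
    while True:
        idx = rng.choice(n, size=batch_size)
        batch_x = images[idx].astype(np.float32).copy()
        batch_y = angles[idx].copy()
        for i in range(batch_size):
            if rng.random() < 0.5:
                batch_x[i] = batch_x[i, :, ::-1]   # horizontal flip...
                batch_y[i] = -batch_y[i]           # ...mirrors the steering
            brightness = rng.uniform(0.5, 1.5)     # brightness jitter
            batch_x[i] = np.clip(batch_x[i] * brightness, 0, 255)
        yield batch_x, batch_y

# Tiny fake dataset: ten 8x8 RGB images with steering labels
images = np.random.default_rng(1).integers(0, 256, size=(10, 8, 8, 3))
angles = np.linspace(-1, 1, 10)
batch_x, batch_y = next(batch_generator(images, angles, batch_size=4))
```

Because augmentation happens inside the generator, the full augmented dataset never has to fit in memory at once.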

Vision Needed

Harish Vadlamani

Harish reflects on the challenges of the program, what was awesome about it, and what we need to improve:

“During this period, I have spent many all-nighters chugging down Red Bull and coffee in an attempt to consume enough caffeine to stick it out and get through the various hurdles the course throws at you. I have on multiple occasions spent days working on an idea only to get so frustrated with the results and progress to go ahead and scrap it entirely. Only to later realize that I had been right all along but made a tiny error in executing it!

In hindsight(*spoiler alert), it was worth all the trouble!”

Hacking my own car: Lessons learnt after a few months of setbacks.

Ariel Nuñez (autti)

Ariel is building a self-driving car from scratch, and has learned all sorts of practical lessons. This is a great list to read if you want to learn from somebody who is hacking a car:

“CAN buses are good, dual CAN buses are great. They allow you to be able to separate key traffic in order to be able to ‘replace’ a factory module like the LKAS or the ACC. Get an Arduino Due with dual CAN. ($70)”

Voyage

Yesterday Udacity announced that my colleague, Oliver Cameron, is spinning out his own autonomous vehicle company, Voyage.

Friends have texted to ask if that means I’m now part of Voyage, and the answer is no.

I’m staying at Udacity to build the Self-Driving Car Engineer Nanodegree Program, which has thousands of students and is a lot of fun. We’ve launched modules on Deep Learning, Computer Vision, Sensor Fusion, and Localization, with development underway on Control, Path Planning, System Integration, plus several elective modules.

If you’re reading this, you really should sign up for the program 😉

Oliver recruited me to Udacity, gave me lots of room to run, and has been a driving force in building the company for the last three years. While I wish him the best, it’s sad to see him go.

But Voyage is its own independent company, so this won’t affect Udacity’s mission to place our students in jobs with our many amazing hiring partners, like Didi, Mercedes-Benz, NVIDIA, Uber ATG, and many more.

Working at NVIDIA

One of my favorite parts of the Udacity Self-Driving Car Engineer Nanodegree Program is the tremendous partners that are supporting us in training autonomous vehicle engineers.

The very first partner to sign up was NVIDIA. The NVIDIA team is super-excited about the Udacity Nanodegree Program and is actively interviewing students in the program, even before they graduate.

If you’d like to learn more about how NVIDIA drives autonomous vehicle technology, watch the video we made with them:

6 Awesome Projects from Udacity Students (and 1 Awesome Thinkpiece)

Udacity students are constantly impressing us with their skill, ingenuity, and their knowledge of the most obscure features in Slack.

Here are 6 blog posts that will astound you, and 1 think-piece that will blow your mind.

How to identify a Traffic Sign using Machine Learning !!

Sujay Babruwad

Sujay managed his data in a few clever ways for the traffic sign classifier project. First, he converted all of his images to grayscale. Then he skewed and augmented them. Finally, he balanced the data set. The result:

“The validation accuracy attained 98.2% on the validation set and the test accuracy was about 94.7%”

Udacity Advance Lane Finding Notes

A Nguyen

An’s post is a great step-through of how to use OpenCV to find lane lines on the road. It includes lots of code samples!

“Project summary:
– Applying calibration on all chessboard images that are taken from the same camera recording the driving to obtain distort coefficients and matrix.
– Applying perspective transform and warp image to obtain bird-eyes view on road.
– Applying binary threshold by combining derivative x & y, magnitude, direction and S channel.
– Reduce noise and locate left & right lanes by histogram data.
– Draw line lanes over the image”
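The combined-threshold step in An's summary can be sketched without OpenCV. This is a minimal NumPy illustration, not An's code: it uses `np.gradient` as a stand-in for `cv2.Sobel`, combines gradient-magnitude and gradient-direction thresholds, and omits the S-channel threshold; the threshold values are illustrative:

```python
import numpy as np

def gradient_threshold(gray, mag_thresh=(0.3, 1.0), dir_thresh=(0.7, 1.3)):
    """Combine gradient-magnitude and gradient-direction thresholds
    into a binary lane mask."""
    gy, gx = np.gradient(gray.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    if magnitude.max() > 0:
        magnitude = magnitude / magnitude.max()      # scale to [0, 1]
    direction = np.arctan2(np.abs(gy), np.abs(gx))   # 0 .. pi/2 radians
    binary = ((magnitude >= mag_thresh[0]) & (magnitude <= mag_thresh[1]) &
              (direction >= dir_thresh[0]) & (direction <= dir_thresh[1]))
    return binary.astype(np.uint8)

# Synthetic image: a bright vertical stripe standing in for a lane line.
# Vertical edges have near-zero gradient direction, so widen dir_thresh.
gray = np.zeros((20, 20))
gray[:, 9:11] = 1.0
mask = gradient_threshold(gray, dir_thresh=(0.0, 0.5))
```

Each individual threshold is noisy on its own; ANDing (or ORing) several of them is what makes the final binary image clean enough for lane fitting.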

P5: Vehicle Detection with Linear SVC classification

Rana Khalil

Rana’s video shows the amazing results that are achievable with Support Vector Classifiers. Look at how well the bounding boxes track the other vehicles on the highway!

Updated! My 99.40% solution to Udacity Nanodegree project P2 (Traffic Sign Classification)

Cherkeng Heng

Cherkeng’s approach to the Traffic Sign Classification Project was based on an academic paper that uses “dense blocks” of convolutional layers to fit the training data tightly. He also uses several clever data augmentation techniques to prevent overfitting. Here’s how that works out:

“The new network is smaller with test accuracy of 99.40% and MAC (multiply–accumulate operation counts) of 27.0 million.”

Advanced Lane Line Project

Arnaldo Gunzi

Arnaldo has a thorough walk-through of the Udacity Advanced Lane Finding Project. If you want to know how to use computer vision to find lane lines on the road, this is a perfect guide!

“1 Camera calibration
2 Color and gradient threshold
3 Birds eye view
4 Lane detection and fit
5 Curvature of lanes and vehicle position with respect to center
6 Warp back and display information
7 Sanity check
8 Video”
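Step 4 of Arnaldo's pipeline, lane detection, is commonly started with a column histogram of the warped binary image: the two peaks mark where the lane lines meet the bottom of the frame. A minimal sketch of that idea (not Arnaldo's code):

```python
import numpy as np

def find_lane_bases(binary_warped):
    """Locate the left and right lane-line base columns by summing hot
    pixels over the bottom half of a warped binary image, then taking
    the peak on each side of the midpoint."""
    h, w = binary_warped.shape
    histogram = binary_warped[h // 2:, :].sum(axis=0)
    midpoint = w // 2
    left_base = int(np.argmax(histogram[:midpoint]))
    right_base = int(midpoint + np.argmax(histogram[midpoint:]))
    return left_base, right_base

# Synthetic bird's-eye binary image with two vertical lane lines
binary = np.zeros((40, 100), dtype=np.uint8)
binary[:, 20] = 1   # left lane line
binary[:, 80] = 1   # right lane line
left, right = find_lane_bases(binary)
```

From these base columns, a sliding-window search walks up the image collecting lane pixels, and a second-order polynomial is fit to each line.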

Build a Deep Learning Rig for $800

Nick Condo

I love this how-to post that lists all the components for a mid-range deep learning rig. Not too cheap, not too expensive. Just right.

Here’s how it does:

“As you can see above, my new machine (labeled “DL Rig”) is the clear winner. It performed this task more than 24 times faster than my MacBook Pro, and almost twice as fast as the AWS p2.large instance. Needless to say, I’m very happy with what I was able to get for the price.”

How Gig Economy Startups Will Replace Jobs with Robots

Caleb Kirksey

Companies like Uber, Lyft, Seamless, Fiverr, and Upwork facilitate armies of independent contractors who work “gigs” on their own time, earning as much or as little as they choose, but without the structure of traditional employment.

Caleb makes the point that, for all the press the gig economy gets, the end might be in sight. Many of these gigs might soon be replaced by computers and robots. He illustrates this point with his colleague, Eric, who works as a safety driver for the autonomous vehicle startup Auro Robotics. Auro’s whole mission is to eliminate Eric’s job!

“Don’t feel too bad for Eric though. He’s become skilled with hardware and robotics. His experience working in cooperation with a robot can enable him to build better systems that don’t need explicit instructions.”

6 Different End-to-End Neural Networks

One of the highlights of the Udacity Self-Driving Car Engineer Nanodegree Program is the Behavioral Cloning Project.

In this project, each student uses the Udacity Simulator to drive a car around a track and record training data. Students use the data to train a neural network to drive the car autonomously. This is the same problem that world-class autonomous vehicle engineering teams are working on with real cars!

There are so many ways to tackle this problem. Here are six approaches that different Udacity students took.

Self-Driving Car Engineer Diary — 5

Andrew Wilkie

Andrew’s post highlights the differences between the Keras neural network framework and the TensorFlow framework. In particular, Andrew mentions how much he likes Keras:

“We were introduced to Keras and I almost cried tears of joy. This is the official high-level library for TensorFlow and takes much of the pain out of creating neural networks. I quickly added Keras (and Pandas) to my Deep Learning Pipeline.”
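Andrew's point about Keras is easy to demonstrate: an end-to-end steering network takes only a handful of lines. This is a minimal sketch (assuming TensorFlow's bundled Keras), not Andrew's actual architecture; the layer sizes are illustrative:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# A tiny steering-angle regression network. In raw TensorFlow this graph
# would take dozens of lines; in Keras it is one Sequential definition.
model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    layers.Lambda(lambda x: x / 255.0 - 0.5),   # normalize pixels in-model
    layers.Conv2D(8, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),                            # steering angle output
])
model.compile(optimizer="adam", loss="mse")

# One gradient step on fake data, just to show the training call
x = np.random.default_rng(0).random((4, 32, 32, 3)).astype("float32")
y = np.zeros((4, 1), dtype="float32")
model.train_on_batch(x, y)
```

The same `model.fit` / `train_on_batch` API accepts the Python generators students write for this project, which is why Keras and generators pair so well here.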

Self-Driving Car Simulator — Behavioral Cloning (P3)

Jean-Marc Beaujour

Jean-Marc used extensive data augmentation to improve his model’s performance. In particular, he used images from offset cameras to create “synthetic cross-track error”. He built a small model-predictive controller to correct for this and train the model:

“A synthetic cross-track error is generated by using the images of the left and of the right camera. In the sketch below, s is the steering angle and C and L are the position of the center and left camera respectively. When the image of the left camera is used, it implies that the center of the car is at the position L. In order to recover its position, the car would need to have a steering angle s’ larger than s:

tan(s’) = tan(s) + (LC)/h”
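Jean-Marc's formula translates directly into a steering-correction function: solve tan(s′) = tan(s) + LC/h for s′. A minimal sketch, with illustrative values for the camera offset and recovery distance (these numbers are not from his post):

```python
import math

def corrected_steering(s, camera_offset, recovery_dist):
    """Corrected steering angle s' for an off-center camera image, per
    tan(s') = tan(s) + LC/h, where camera_offset is the camera-to-center
    distance LC and recovery_dist is h. Angles are in radians."""
    return math.atan(math.tan(s) + camera_offset / recovery_dist)

# Example: center steering of 0 rad, camera 0.9 units left of center,
# recovering over 20 units of travel
s_prime = corrected_steering(0.0, 0.9, 20.0)
```

The left-camera image is then labeled with s′ instead of s, teaching the network to steer back toward the lane center whenever its view drifts left.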

Behavioral Cloning — Transfer Learning with Feature Extraction

Alena Kastsiukavets

Alena used transfer learning to build her end-to-end driving model on the shoulders of a famous neural network called VGG. Her approach worked great. Transfer learning is a really advanced technique and it’s exciting to see Alena succeed with it:

“I have chosen VGG16 as a base model for feature extraction. It has good performance and at the same time quite simple. Moreover it has something in common with popular NVidia and comma.ai models. At the same time use of VGG16 means you have to work with color images and minimal image size is 48×48.”
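Alena's approach maps onto Keras's built-in VGG16 very directly: load the convolutional base without its classification head, freeze it, and stack a small regression head on top. A minimal sketch (not her actual code) at her 48×48 color minimum; `weights=None` keeps the sketch offline, whereas a real run would use `weights="imagenet"` so there are pretrained features to transfer:

```python
from tensorflow import keras
from tensorflow.keras.applications import VGG16

# VGG16 convolutional base as a feature extractor (no classifier head)
base = VGG16(weights=None, include_top=False, input_shape=(48, 48, 3))
base.trainable = False   # freeze the base; only the new head trains

# Small steering-angle regression head on top of the extracted features
model = keras.Sequential([
    base,
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```

Freezing the base means only the head's weights update, so training is fast even though VGG16 itself is a large network.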

Introduction to Udacity Self-Driving Car Simulator

Naoki Shibuya

The Behavioral Cloning Project utilizes the open-source Udacity Self-Driving Car Simulator. In this post, Naoki introduces the simulator and dives into the source code. Follow Naoki’s instructions and build a new track for us!

“If you want to modify the scenes in the simulator, you’ll need to deep dive into the Unity projects and rebuild the project to generate a new executable file.”

MainSqueeze: The 52 parameter model that drives in the Udacity simulator

Mez Gebre

In this post, Mez explains the implementation of SqueezeNet for the Behavioral Cloning Project. This is the smallest network I’ve seen yet for this project. Only 52 parameters!

“With a squeeze net you get three additional hyperparameters that are used to generate the fire module:

1: Number of 1×1 kernels to use in the squeeze layer within the fire module

2: Number of 1×1 kernels to use in the expand layer within the fire module

3: Number of 3×3 kernels to use in the expand layer within the fire module”
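The three hyperparameters Mez lists define a SqueezeNet fire module: a 1×1 "squeeze" convolution feeding parallel 1×1 and 3×3 "expand" convolutions whose outputs are concatenated. A minimal Keras sketch of that structure (not Mez's implementation; the channel counts are illustrative):

```python
from tensorflow import keras
from tensorflow.keras import layers

def fire_module(x, squeeze_1x1, expand_1x1, expand_3x3):
    """SqueezeNet fire module: 1x1 squeeze layer, then parallel 1x1 and
    3x3 expand layers whose outputs are concatenated along channels.
    The three arguments are the hyperparameters described above."""
    s = layers.Conv2D(squeeze_1x1, 1, activation="relu")(x)
    e1 = layers.Conv2D(expand_1x1, 1, activation="relu", padding="same")(s)
    e3 = layers.Conv2D(expand_3x3, 3, activation="relu", padding="same")(s)
    return layers.Concatenate()([e1, e3])

inputs = keras.Input(shape=(64, 64, 3))
x = fire_module(inputs, squeeze_1x1=4, expand_1x1=8, expand_3x3=8)
model = keras.Model(inputs, x)   # output has 8 + 8 = 16 channels
```

Keeping the squeeze layer narrow is what cuts the parameter count: the expensive 3×3 convolutions only ever see the squeezed channels.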

GTA V Behavioral Cloning 2

Renato Gasoto

Renato ported his behavioral cloning network to Grand Theft Auto V. How cool is that?!

How Udacity’s Self-Driving Car Students Approach Behavioral Cloning

Udacity believes in project-based education. Our founder, Sebastian Thrun, likes to say that you don’t lose weight by watching other people exercise. You have to write the code yourself!

Every module in the Udacity Self-Driving Car Engineer Nanodegree Program builds up to a final project. The Deep Learning Module culminates in one of my favorites: the Behavioral Cloning Project.

The goal of this project is for students to build a neural network that “learns” how to drive a car like a human. Here’s how it works:

First, each student records his or her own driving behavior by driving the car around a test track in the Udacity simulator.

Then, each student uses this data to train a neural network to drive the car around the track autonomously.

There are all sorts of neat ways to approach this problem, and it seems like Udacity students tried all of them! Here are excerpts from—and links to—blog posts written by five of our Self-Driving Car students, each of whom takes a different approach to the project.

Training a Self-Driving Car via Deep Learning

James Jackson

James Jackson’s post is a great overview of how to approach this project, and he adds a twist by implementing data smoothing. We didn’t cover data smoothing in the instructional material, so this is one of many examples of Udacity students going above and beyond to build terrific projects.

“Recorded driving data contains substantial noise. Also, there is a large variation in throttle and speed at various instances. Smoothing steering angles (ex. SciPy Butterworth filter), and normalizing steering angles based on throttle/speed, are both investigated.”
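The SciPy Butterworth smoothing James mentions is a two-step recipe: design the filter with `butter`, then apply it with zero-phase `filtfilt` so the smoothed angles are not delayed relative to the images. A minimal sketch (not James's code; the filter order and cutoff are illustrative):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_steering(angles, order=3, cutoff=0.1):
    """Zero-phase low-pass filtering of recorded steering angles.
    `cutoff` is a fraction of the Nyquist frequency; filtfilt runs the
    filter forward and backward so no time lag is introduced."""
    b, a = butter(order, cutoff)
    return filtfilt(b, a, angles)

# Noisy sine wave as a stand-in for a recorded steering log
t = np.linspace(0, 4 * np.pi, 200)
noisy = np.sin(t) + np.random.default_rng(0).normal(0, 0.3, t.size)
smoothed = smooth_steering(noisy)
```

The zero-phase property matters here: an ordinary causal filter would shift the steering labels later in time, subtly misaligning them with their camera frames.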

Behavioral Cloning

JC Li

This is a terrific post about the mechanics of building a behavioral cloning model. It really stands out for JC’s investigation of Gradient Activation Mappings to show which pixels in an image have the most effect on the model’s output.

“The whole idea is to using heatmap to highlight locality areas contributing most to the final decision. It was designed for classification purpose, but with slight change, it can be applied to our steering angle predictions.”

Behavioural Cloning Applied to Self-Driving Car on a Simulated Track

Joshua Owoyemi

This post has a great discussion of data augmentation techniques for neural network training, including randomly jittering data from the training set. Joshua used over 100,000 images for training!

“Though there was more than 100,000 training data, each epoch consisted of 24,064 samples. This made the training more tractable, and since we were using a generator, all of the training data was still used in training, however at different epochs.”

Self Driving Car — Technology drives the Future !!

Sujay Babruwad

Sujay applied a number of different augmentations to his training data, including brightness and shadow augmentations. This helped his model generalize to a new, darker test track.

“The training samples brightness are randomly changed so as to have training data that closely represent various lighting conditions like night, cloudy, evening, etc.”

You don’t need lots of data! (Udacity Behavioral Cloning)

A Nguyen

This post encourages students by showing how it’s possible to build a behavioral cloning model without tens of thousands of training images. The secret is to use side cameras and data augmentation.

“Just like anything we do, the longer we practice, the better we are good at it because we take in hour and hour of data into our brain memory/muscle memory. It’s the same here for neural net, the more variety of data you have to train your network, the better the model is at the task.”


As you can see from these examples, there is no one right way to approach a project like this, and there is a great deal of room for creativity. What should also be clear is that our students are incredible!

We’re very excited about the next projects on the horizon, and we look forward to sharing more amazing student work with you soon!