Udacity Students on Deep Learning and Jobs

Want to get a job working on self-driving cars? Read on.

How I Landed My Dream Job Working On Self-driving Cars

Galen Ballew

The guiding star of the Udacity Self-Driving Car Nanodegree Program is to prepare students for jobs working on autonomous vehicles. So we were excited that Galen found his dream job working on autonomous vehicles for HERE in Boulder, Colorado. He also gives lots of credit to Udacity, which is generous of him 🙂

“The private Slack channel for students is filled with a tangible excitement. I’ve never been part of such a large student body, let alone a student body that is committed to the success of every student (no grading curve here). Between Slack, the dedicated forums, and your own private mentor, there is no reason to be stuck on a problem — there are so many people willing to help answer your questions. Instead, you can focus on finding your own way to improve the foundations of the projects.”

Self-driving Cars — Deep neural networks and convolutional neural networks applied to clone driving behavior

Ricardo Zuccolo

Ricardo provides a thorough rundown of his Behavioral Cloning project, which runs on both simulators. He synthesized and built on the insights of earlier Udacity students:

“My first step was to evaluate the driver log steering histograms for 100 bins and do all required transformation, drop and augmentation to balance it. Here I followed the same methodology as in the well explained pre-processing from Mez Gebre, thanks Mez!”
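The balancing step Ricardo describes can be sketched roughly as follows. This is a minimal illustration, not his actual code: the function name, bin count, and per-bin cap are my own assumptions, and his pipeline also applies transforms and augmentation that this sketch omits.

```python
import numpy as np

def balance_steering(angles, n_bins=100, per_bin_cap=200, seed=0):
    """Cap the number of samples per steering-angle bin so near-zero
    angles stop dominating the training set. Returns kept indices."""
    rng = np.random.default_rng(seed)
    angles = np.asarray(angles, dtype=float)
    bins = np.linspace(angles.min(), angles.max(), n_bins + 1)
    # Assign each angle to a bin (clip so the max angle lands in the last bin).
    which = np.clip(np.digitize(angles, bins) - 1, 0, n_bins - 1)
    keep = []
    for b in range(n_bins):
        idx = np.where(which == b)[0]
        if len(idx) > per_bin_cap:
            idx = rng.choice(idx, per_bin_cap, replace=False)
        keep.extend(idx.tolist())
    return np.sort(np.array(keep))
```

Because most driving is straight, the near-zero bin typically dwarfs everything else; capping it forces the network to see proportionally more turning examples.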

Behavioral Cloning: Tiny Mistake Cost Me 15 days

Yazeed Alrubyli

Yazeed was struggling with the Behavioral Cloning Project when he realized he had mixed up his colorspaces. Students on Slack pointed out that this might have been because I mixed up the colorspaces in some demo code. Oops!

“I spent about 15 days training my network over and over in the third project of the Self-Driving Car Engineer Nanodegree by Udacity. It drove me crazy because it couldn’t keep itself on the road. I reviewed the code hundreds of times and nothing was wrong with it. To tell you the truth, when I had almost given up, I read an article that had nothing to do with this project. It mentioned that OpenCV reads images as BGR (Blue, Green, Red) and guess what, drive.py uses RGB. What the hell was I doing for 2 weeks!!”
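The fix is a one-line channel reorder applied consistently on the training side. Here is a minimal sketch of the idea in pure NumPy (for 3-channel images this is equivalent to OpenCV's `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)`; the function name is my own):

```python
import numpy as np

def bgr_to_rgb(img):
    """Reverse the channel axis: OpenCV's imread returns BGR, while
    drive.py feeds the model RGB images (loaded via PIL). Training on
    one ordering and driving on the other silently breaks the model."""
    return img[..., ::-1]
```

The lesson generalizes: whatever preprocessing the training pipeline applies, the inference script must apply identically.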

What I have learned from the first term of Udacity Self driving car nanodegree program.

Hadi N. Abu-Snineh

Hadi went back and reviewed everything he learned during Term 1. He included all of his project videos, which are awesome!

“The fifth and last project of the first term is to write a program that detects vehicles by drawing a bounding box around each detected vehicle. The project is done using a support vector machine (SVM), a kind of classifier used to differentiate between different classes. In this case, the classifier takes multiple features of images as inputs and learns to classify them into two classes: cars and non-cars.”
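The training setup Hadi describes can be sketched with scikit-learn. This is a hedged illustration on stand-in random features, not his code: the real project extracts HOG, spatial-binning, and color-histogram features from image patches, and the feature dimensions here are arbitrary.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Stand-in features: in the project these would be HOG + color features
# extracted from 64x64 car and non-car image patches.
rng = np.random.default_rng(0)
car_features = rng.normal(loc=1.0, size=(200, 64))
notcar_features = rng.normal(loc=-1.0, size=(200, 64))

X = np.vstack([car_features, notcar_features])
y = np.hstack([np.ones(200), np.zeros(200)])  # 1 = car, 0 = non-car

# Normalize each feature column, then fit a linear SVM.
scaler = StandardScaler().fit(X)
clf = LinearSVC(C=1.0).fit(scaler.transform(X), y)
acc = clf.score(scaler.transform(X), y)
```

Scaling matters here because HOG and color-histogram features live on very different numeric ranges; without it the SVM over-weights whichever feature happens to be largest.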

Generating Faces with Deep Convolutional GANs

Dominic Monn

Dominic wrote up the generative adversarial network that he trained for the Udacity Deep Learning Nanodegree Foundations Program. Normally I downvote blog posts from other programs, but Dominic is a Self-Driving Car student and DLFND is great and GANs are very cool, so I’ll let it slide.

“The training took approximately 15–20 minutes and even though the first few iterations looked like something demonic — the end result was fine.”

What Is Waymo Doing with Lyft?

Lyft and Waymo just announced an exciting but vague self-driving car partnership that further complicates the mobility ecosystem.

According to The New York Times:

“Details about the deal between Waymo and Lyft were scant. The companies declined to comment on what types of products would be brought to market as a result of it or when the public might see the fruits of the collaboration.”

Basically, there’s a deal but nobody knows what’s in it.

The big tease, of course, is that Waymo would put its self-driving cars on Lyft’s network.

That would be totally awesome, but somewhat at odds with past indications of Waymo’s plans for commercializing its self-driving technology.

Previously, Waymo has appeared to be launching a ride-sharing service, then a lidar manufacturing business, and now possibly a self-driving car operation (without owning the actual ride-sharing service). At other points, people have speculated that Waymo might try to make its software the default tech stack of autonomous vehicles, similar to what Android has become for mobile phones.

Or maybe they’ll buy Lyft.

Jensen’s GTC Keynote

NVIDIA CEO Jensen Huang is famous for his keynote addresses at the company’s GPU Technology Conference.

Jensen delivered this year’s keynote yesterday, and the focus was deep learning and artificial intelligence. NVIDIA GPUs are critical for training deep neural networks, so NVIDIA is fast becoming as much an AI company as a gaming company.

Engadget created a super-fast mashup of Jensen’s keynote, if you only have 13 minutes to catch the highlights. In particular, check out the autonomous vehicle announcements around NVIDIA Drive and Guardian Angel.

Computer Vision and Deep Learning Walkthroughs

Here are some thorough walkthroughs of how to implement lane-finding and end-to-end learning, with all sorts of corner cases.

Want to be like these all-stars? Join the Udacity Self-Driving Car Nanodegree Program!

Self-driving Cars — Advanced computer vision with OpenCV, finding lane lines

Ricardo Zuccolo

Ricardo’s lane-finding pipeline works amazingly well on the challenge video for the Advanced Lane Finding Project. He has an incredibly thorough rundown of his pipeline: calibration, undistortion, color transforms, perspective transform, lane detection, curvature, and unwarping:

“Once you know where the lines are in one frame of video, you can do a highly targeted search for them in the next frame. This is equivalent to using a customized region of interest for each frame of video, which helps to track the lanes through sharp curves and tricky conditions.”
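That targeted search might look something like the sketch below. The function name, margin, and the too-few-pixels fallback are my own assumptions, not Ricardo's actual code; the fit convention `x = A*y^2 + B*y + C` (fitting x as a function of y) is the one commonly used for bird's-eye lane images.

```python
import numpy as np

def targeted_lane_search(binary_warped, prev_fit, margin=100):
    """Search for lane pixels within +/- margin of the polynomial
    fitted in the previous frame, then refit. `prev_fit` holds
    [A, B, C] for x = A*y**2 + B*y + C."""
    nonzeroy, nonzerox = binary_warped.nonzero()
    # x position the previous fit predicts at each nonzero pixel's row.
    x_prev = prev_fit[0] * nonzeroy**2 + prev_fit[1] * nonzeroy + prev_fit[2]
    in_band = np.abs(nonzerox - x_prev) < margin
    if in_band.sum() < 3:
        return prev_fit  # too few pixels found: keep the old fit
    return np.polyfit(nonzeroy[in_band], nonzerox[in_band], 2)
```

When the band comes up empty for several frames in a row, pipelines typically fall back to the full sliding-window search.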

Deep Learning/Gaming Build with NVIDIA Titan Xp and MacBook Pro with Thunderbolt2

Yazeed Alrubyli

I just met Yazeed yesterday at NVIDIA’s GPU Technology Conference, and then I found his blog post today. He loves GPUs and deep learning so much he flew all the way from Saudi Arabia for the conference!

“For me it was a gamble to buy the $1,200 Titan Xp, which had been released just 17 hours before I bought it, with a promise from NVIDIA to support macOS, and I don’t have the Thunderbolt 3 port, which is the supported port for eGPUs. So, like Richard Branson, I said “Screw It, Let’s Do It,” and it works like a charm. Without further ado, let’s dive in.”

SqueezeDet: Deep Learning for Object Detection

Mez Gebre

A while back Mez published his results on behavioral cloning with SqueezeNet, but he’s back with a super-enthusiastic blog post on his network. Only 52 parameters and six-second epochs on a CPU!

“One good rule of thumb I developed from this project is to try and reduce the number of variables you are tuning to gain better results faster.”

Behavioral Cloning

Arsen Memtov

Arsen has a great writeup on using a neural network to calculate both steering and throttle values for the Behavioral Cloning Project. Also, he uses early stopping to prevent overfitting the data.

“The validation set helped determine if the model was over- or under-fitting. I used EarlyStopping (utils.py line 299) to stop training when validation MSE stopped improving.”
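The logic behind early stopping fits in a few lines. This sketch mirrors the idea of Keras's EarlyStopping callback (monitor validation loss, stop after `patience` epochs without improvement); it is an illustration of the concept, not Arsen's utils.py.

```python
def early_stopping(val_losses, patience=3, min_delta=0.0):
    """Return the epoch index at which training should stop: the first
    epoch after validation loss has failed to improve by min_delta for
    `patience` consecutive epochs. If it never stalls, return the last
    epoch."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best, wait = loss, 0  # improvement: reset the counter
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses) - 1
```

In practice you also keep the weights from the best epoch (Keras exposes this as `restore_best_weights`), since the stopping epoch itself is by definition a worse one.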

CarND Behavioral Cloning

JC Li

JC’s writeup of his Behavioral Cloning Project covers a really important topic — how to debug, or at least visualize, what’s going on inside a neural network.

“During the process of training, I felt very uneasy, as it is almost like a black box. Whenever the model failed to proceed at a certain spot, it was very hard to tell what went wrong. Although my model passed both tracks, the process of trial and error and meddling around with different combinations of configurations was quite frustrating.”

Tuesday Autonomous Vehicle Links

Delphi is splitting in two. One half will focus on powertrain and other traditional bread-and-butter automotive components. The other half will focus on software and electronics. Looks like Delphi sees software eating the automotive supply chain.

Mobileye banks on mapping revenue. There have been very few companies able to supply the high-definition maps that autonomous vehicles need. Mobileye plans to be one of them.

Tesla will upload video from consumer cars. I’m surprised this wasn’t already the case. Don’t use your Tesla to break the law. Or, if you do, cover up the camera.

K-City will dwarf M-City. Korea is building an urban testing environment for self-driving cars. It will be almost three times bigger than the environment that Ford and the University of Michigan built.

Uber ATG to open an office in Canada. This will be of interest to Udacity’s Canadian students.

Autonomous Vehicle Operator School

Last week I traveled with colleagues to Sonoma Raceway for Safe Driver Training, a mandatory class for autonomous vehicle operators.

The class itself is not oriented around autonomous vehicles, but rather how to anticipate and evade dangerous situations on the road. The logic of requiring this class of autonomous vehicle operators, I suppose, is that if you have to take over the vehicle in an emergency, hopefully you are able to anticipate and evade a collision.

The biggest lesson is to look as far ahead as possible. Sit lower in the vehicle and raise your eyes toward the horizon. Then, when performing an evasive maneuver, lock your eyes on where you want the vehicle to go. Or, as the instructors say, “Keep your eyes on safety.”

The class was a lot of fun. Several of the exercises involved negotiating tight turns at high speed, just like if an obstacle popped out at the last minute. Other exercises required us to spin out the vehicle in a tight turn, then regain control and proceed through a gate.

Here’s a practice run for one of the tight-turn exercises — the procedure gets tougher when they don’t tell you which way to turn until the last second:

Since the program is held at Sonoma Raceway, there are all sorts of cool racecars around.

We did the program in Chevy Cruzes 🙂

Autonomous Vehicles Hurt Berkshire in Two Ways

Nearly all of my savings are in various index funds, but I do own stock in one individual company: Berkshire Hathaway.

It’s mostly for sentimental reasons. I went to Omaha a couple of times during business school: once for the Berkshire annual conference (“Woodstock for Capitalists”) and once to meet the Oracle himself, as part of a school trip.

I’ve known for a while that autonomous vehicles would hurt insurance, which is one big part of Berkshire’s business. The logic is that insurance companies only exist because drivers need to insure themselves against the costs of accidents. If accidents diminish, the need for insurance diminishes.

But a question at this year’s annual meeting pointed out that another big part of Berkshire’s business is highly vulnerable to autonomous vehicles: railroads.

Berkshire purchased the Burlington Northern Santa Fe (BNSF) railroad for $26.5 billion in 2010, and it’s been a good investment.

That investment will come under intense pressure from self-driving trucks, however. Once trucks can operate nearly constantly, without the cost or physical limitations of a driver, the cost advantage of transportation by rail will diminish, or maybe even disappear completely.

Here’s Buffett’s exchange on this question, honest as ever.

Udacity at NVIDIA GTC

Udacity will be at NVIDIA’s GPU Technology Conference next week in San Jose!

If you’ll be there, please stop by to say hello. There will be a car display, plus instructors and students talking about the Self-Driving Car Nanodegree Program.

Also, I’ll be presenting at 4:30pm.

There are still tickets left to the conference if you’d like to register! If you’re a Udacity student, email me (david.silver@udacity.com) for the student discount code.

Udacity Students Past, Present, and Future

Here are stories from Udacity students about what they wish they knew in the past, what they’re doing in the present, and what they hope to do in the future!

Our Very Own Grand Challenge

Chris Gundling

A self-managed team of Udacity students from around the world competed at the Self-Racing Cars event in California last month. They put in a ton of work and here’s what they learned:

“On February 15, Udacity selected the group of 18 talented engineers (out of hundreds of applicants) to form the Self-Racing Cars team. Our team was composed of individuals with largely varying backgrounds from all over the world, with the commonalities that we were all enrolled in the Udacity Self-Driving Car Nanodegree program, and extremely passionate about autonomous vehicles. The team was given six weeks to develop the software to drive an autonomous vehicle around the track at Thunderhill Raceway for the Self-Racing Cars event.”

Note to My Past Self: Pro Tips for Term 1 of the Udacity Self-Driving Car Nanodegree

Daniel Wolf

Daniel, bless his heart, put together a terrific list of tips and tricks for Term 1 of the Nanodegree Program. He would know, after having mentored over 40 students!

“If I could send myself a note back in time to 6 months ago, I would probably find something more valuable than sending myself mentorship tips for Term 1 of the Udacity Self-Driving Car Nanodegree. That being said, I would have wanted to know these points soon after being accepted into the selective inaugural cohort in October 2016. I have mentored over 40 students after having some success in the SDC Nanodegree myself, and this post will reveal the pointers that have been most relevant to my mentees.”

Finding Lane Lines with openCV

Eirik Kvalheim

Check out how Eirik built these super-cool lane line GIFs!

“This project is the first among several projects in the Self-Driving Car Engineer program at Udacity. Here we learn cutting-edge technology, equipping us with the tools for a career in the field of self-driving cars. Udacity calls it a “Nanodegree,” but it lasts over nine months, and with all the hours I am putting into it, it really becomes a full education for me. So that brings me to this project, which was so much fun I had to stop myself: I could go on forever, there is always something to do better, and so much good inspiration!”

Self-driving Cars — OpenCV and SVM Machine Learning with Scikit-Learn for Vehicle Detection on the Road

Ricardo provides a terrific and thorough walkthrough of his vehicle detection project. I especially like the experiments he ran with color spaces and histograms.

“First, we identify and extract the features from the image and use them to train a classifier. Next, we execute a window search on each frame of the video stream to reliably identify and classify the vehicles. Finally, we must deal with false positives and estimate a bounding box for the vehicles detected.”
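The false-positive step Ricardo mentions is commonly handled with a heatmap: overlapping detection windows vote, and pixels with too few votes are discarded. A minimal sketch of that idea (the box format, function name, and threshold are illustrative assumptions, not his code):

```python
import numpy as np

def heatmap_filter(image_shape, detections, threshold=2):
    """Accumulate overlapping detection boxes into a heatmap, then zero
    out pixels below `threshold` to suppress one-off false positives.
    `detections` is a list of ((x1, y1), (x2, y2)) window corners."""
    heat = np.zeros(image_shape[:2], dtype=np.int32)
    for (x1, y1), (x2, y2) in detections:
        heat[y1:y2, x1:x2] += 1  # each window votes for its pixels
    heat[heat < threshold] = 0
    return heat
```

A real vehicle is hit by many overlapping windows across scales (and across consecutive frames), while a spurious detection usually appears once, so thresholding the accumulated heat cleans up the output cheaply.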

Diving into the world of self-driving cars

Michael Virgo

I love Michael’s story of leaving his Big Four accounting job to become a self-driving car engineer. He completed Term 1!

“Luckily in Silicon Valley many people are more focused on what you can do than simply how many years you’ve been doing something. For Udacity’s part, they’ve provided me with a mentor and lots of career content, as well as access to events with some great hiring partners, that also give me great hope that I’ll be able to make the jump to working directly on self-driving cars.”

V2X Startups

Nanalyze has a brief roundup of six vehicle-to-vehicle (V2V) startups to watch. What’s striking to me is how many of them have been around for quite a while — almost a decade in some cases:

  • Autotalks: automotive-grade communication chips
  • Cohda Wireless: automotive-grade communication chips
  • Kymeta: automotive satellite communications
  • RoboCV: collision avoidance with vehicle-to-vehicle communication
  • Savari: vehicle-to-anything communication infrastructure
  • Veniam: automotive mesh WiFi

Vehicle-to-vehicle communication is really exciting — imagine a hypothetical world with no traffic lights, because cars communicate with each other and weave through intersections.

This hypothetical future, though, might be a long way out. V2V suffers right now from being on the losing end of a network effect — because almost nobody has V2V technology in their cars, it’s not particularly valuable to have V2V technology in your own car.

This is a surmountable problem (see, for instance, the early history of the telephone), but it might take a little while to get there.