Livestream with Bosch — Today

This afternoon, at 6pm PDT, Udacity will be hosting a livestream with Angela Klein of Bosch.

Bosch is the world’s largest automotive supplier, and a hiring partner that wants to employ lots of Udacity Self-Driving Car students!

Angela is a real live wire and a lot of fun to be around.

Tune in to hear what it takes to break into one of the best companies in the automotive industry!

https://youtu.be/rIec-e14DrE

Udacity Students on Computer Vision and World Travel

If you want to explore different areas of computer vision, you should check out these awesome posts by Udacity students on different ways to use OpenCV to find lane lines.

And if you want to learn about the Udacity Self-Driving Car Engineer Nanodegree Program, there’s a post on that, too!

Plus a post on world travel, for good measure 🙂

The Udacity Self Driving Car Nanodegree — Term 1

Arnaldo Gunzi

Upon completing all of his Term 1 projects, Arnaldo wrote a high-level overview of all of the projects, and reflected on what he learned:

“It had a very practical focus: theory enough to understand the core concepts, and then, the practical application. It is a reason why it requires a lot of background. It is not a course on basic python, or basic neural networks, but how to apply it in real cases.”

Advanced Lane Finding Project

Sujay Babruwad

Sujay has an incredibly thorough analysis of his computer vision pipeline for lane-finding, including a great debugging tool:

“This project involves fine tuning of lot of parameters like color thresholding, gradient thresholding values to obtain the best lane detection. This can be trickier if the pipeline fails for few video frames. To efficiently debug this I had to build a frame that captures multiple stages of the pipeline, like the color transformation, gradient thresholding, line fitting on present and averaged past frames.”
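A multi-stage debug frame like the one Sujay describes can be sketched in a few lines of NumPy by tiling same-sized intermediate images into a single canvas. This is a generic sketch, not Sujay's code; the function name and layout are my own:

```python
import numpy as np

def debug_frame(stages, cols=2):
    """Tile same-sized grayscale stage images into one debug frame."""
    h, w = stages[0].shape
    rows = (len(stages) + cols - 1) // cols
    canvas = np.zeros((rows * h, cols * w), dtype=stages[0].dtype)
    for i, img in enumerate(stages):
        r, c = divmod(i, cols)
        canvas[r * h:(r + 1) * h, c * w:(c + 1) * w] = img
    return canvas
```

Feeding this the colour-transform, gradient-threshold, and line-fit stages gives a single video frame showing where the pipeline breaks.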

Self-Driving Car Engineer Diary — 6

Andrew Wilkie

Andrew posts fun images from his lane-finding pipeline, but what really caught my eye was his analysis of Udacity’s Career Services:

“The SDC Engineer course emphasises job readiness and the Udacity team provides an excellent Careers Service built right in. We were asked to search for an advertised job that interested us, provide a resume tailored to either Entry Level, Prior Experience or Career Change and the associated Cover Letter. For a ‘career changer’ like me, I was surprised by the amount of self-reflection this caused.”

10 weeks, 3 countries and 5 self-driving car projects

Morgane LUSTMAN

Morgane is completing CarND while on a nine-month world tour, starting in Ecuador!

“Our next stop was Cuenca, Ecuador, where my fiance’s immediate family lives. I had an amazing time there, visiting the city and Cajas National Park. The only issue is that people did not understand why I was spending so much time on my laptop! They expected me to be free all the time since I was on holidays. I had set up a routine where I’d work for a few hours in the morning while waiting for people to get up and at night. After explaining to them what my goal was and showing them what I was doing, they were definitely more understanding. They got particularly interested when I showed them how I had trained a neural network to drive a car in a simulator, and how I used computer vision and machine learning to recognize lanes and other vehicles on the road.”

Temporal Smoothing to Remove Jitter in Detected Lane Lines

Liam Bowers

Liam took the introductory Udacity Lane-Finding Project and optimized it:

“I created a buffer to store the slope and y-intercept values for each line detected in the last N frames. The actual line drawn on the current frame is simply the average slope/intercept of all these lines. By continuously pushing the latest detected line onto this buffer and simultaneously dropping the oldest line, I can calculate a rolling mean of the lines over time, or what I call “temporal smoothing”.”
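Liam's buffer can be sketched with a `deque`, which drops the oldest line automatically once it is full. The class and parameter names here are my own, not Liam's:

```python
from collections import deque

class SmoothedLine:
    """Rolling average of (slope, intercept) over the last N frames."""
    def __init__(self, n=10):
        self.buffer = deque(maxlen=n)  # oldest entry drops off automatically

    def update(self, slope, intercept):
        self.buffer.append((slope, intercept))
        # Average over however much history we have so far
        k = len(self.buffer)
        avg_slope = sum(s for s, _ in self.buffer) / k
        avg_intercept = sum(b for _, b in self.buffer) / k
        return avg_slope, avg_intercept
```

Drawing the averaged line each frame, rather than the raw detection, is what removes the jitter.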

Working at NVIDIA

One of my favorite parts of the Udacity Self-Driving Car Engineer Nanodegree Program is the tremendous partners that are supporting us in training autonomous vehicle engineers.

The very first partner to sign up was NVIDIA. The NVIDIA team is super-excited about the Udacity Nanodegree Program and is actively interviewing students in the program, even before they graduate.

If you’d like to learn more about how NVIDIA drives autonomous vehicle technology, watch the video we made with them:

Udacity Students Explain Kalman Filters, Mini AVs, and Computer Vision

Here are some great explanatory posts from Udacity Self-Driving Car students about Kalman filters, computer vision, and how to build a mini autonomous vehicle.

Kalman filter: Intuition and discrete case derivation

Vivek Yadav

Vivek has transposed some of the notes from his Advanced Controls course at SUNY-Stony Brook. These notes are great for understanding the intuition of how Kalman filters reduce uncertainty:

“This process of combining system dynamics with state measurements is the underlying principle of Kalman filters. Kalman filters provide good estimation properties and are optimal in the special case when the process and measurement follow a Gaussian distributions.”
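The uncertainty-reducing fusion Vivek describes is easiest to see in the one-dimensional Gaussian case. This is a toy sketch of the standard predict/update cycle, not Vivek's notes:

```python
def kf_update(mean, var, z, var_z):
    """Fuse prior N(mean, var) with a measurement z of variance var_z."""
    k = var / (var + var_z)       # Kalman gain
    new_mean = mean + k * (z - mean)
    new_var = (1 - k) * var       # posterior variance shrinks below the prior
    return new_mean, new_var

def kf_predict(mean, var, u, var_u):
    """Propagate the state through a motion u with process noise var_u."""
    return mean + u, var + var_u
```

The measurement update always reduces variance, while the prediction step grows it again — the filter's estimate is the running balance between the two.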

Vehicle Detection and Distance Estimation

Milutin N. Nikolic

Milutin provides a clear and thorough explanation of his pipeline for detecting vehicles using HOG and Linear SVM:

“The goals/steps of this project are the following:

Extract the features used for classification

Build and train the classifier

Slide the window and identify car on an image

Filter out the false positives

Calculate the distance

Run the pipeline on the video”
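The false-positive filtering step in pipelines like this is often done with a heatmap: add heat for every positive window, then keep only pixels covered by several overlapping detections. Milutin's exact implementation may differ; this is a generic NumPy sketch:

```python
import numpy as np

def heatmap_filter(shape, boxes, threshold=2):
    """Accumulate detection boxes into a heatmap, then zero out
    pixels covered by `threshold` or fewer overlapping windows."""
    heat = np.zeros(shape, dtype=np.int32)
    for (x1, y1, x2, y2) in boxes:
        heat[y1:y2, x1:x2] += 1
    heat[heat <= threshold] = 0
    return heat
```

A real car produces a cluster of overlapping windows and survives the threshold; a spurious single-window detection does not.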

Vehicle Detection and Tracking using Computer Vision

Arnaldo Gunzi

I like the intuitive explanation that Arnaldo provides for the histogram of oriented gradients (HOG) algorithm:

“The HOG extractor is the heart of the method described here. It is a way to extract meaningful features of a image. It captures the “general aspect” of cars, not the “specific details” of it. It is the same as we, humans, do: in a first glance, we locate the car, not the make, the plate, the wheel, or other small detail.

HOG stands for “Histogram of Oriented Gradients”. Basically, it divides an image in several pieces. For each piece, it calculates the gradient of variation in a given number of orientations.”
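The per-cell computation Arnaldo describes can be written directly in NumPy. This is a simplified sketch of one cell's histogram — real HOG implementations add block normalization on top:

```python
import numpy as np

def cell_hog(cell, n_bins=9):
    """Histogram of gradient orientations for one image cell,
    weighted by gradient magnitude (unsigned, 0-180 degrees)."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    return hist
```

Concatenating these histograms over all cells yields the feature vector that captures a car's "general aspect" rather than its details.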

Toy Autonomous Car

Srikanth Pagadala

I love the mini autonomous vehicle that Srikanth built, and especially that he trained a deep neural network so that the car reacts to traffic signs!

“driver.py — is the heart of the project. It includes the image processing pipeline that identifies the traffic sign from the camera by using the previously trained DNN and then send appropriate driving control signals to the car.”

Advanced Lane Lines — Challenge Videos Try

Alena Kastsiukavets

Alena went beyond the minimum requirements for the Advanced Lane Finding Project, and she got her computer vision pipeline to work on the challenge video:

“When the first line was identified successfully, I use line equation with margin as an area to search for the next line. No need to do Sliding Window Search again.”

Anthony Levandowski, Force of Nature

Source: Asianet News

If you’re looking for a weekend longread, I recommend “Fury Road: Did Uber Steal the Driverless Future from Google”, written by Max Chafkin and Mark Bergen in Bloomberg.

Although the headline is about Uber and Google, the article is really about current Uber executive, and former Google executive, Anthony Levandowski.

The article is a kind of mini-autobiography of Levandowski, with a particular emphasis on the latest stages in his career — Google (now Waymo), then Otto, then Uber Advanced Technologies Group.

The article doesn’t take a strong position on the merits of Google’s lawsuit against Uber and Levandowski. It’s more useful as background information on some of the most important individuals and companies in the industry.

Before I drop in some quotes, I should mention that I have a number of connections to people who appear in the article. My boss, Sebastian Thrun, launched the Google Self-Driving Car program. The article doesn’t quote him, but he does appear in a few cameos. Additionally, Otto, and now Uber ATG, has been a terrific partner of the Udacity Self-Driving Car Engineer Nanodegree Program. I’ve met Levandowski briefly, and he seems like good people.

Here are some of the quotes that stuck with me:

“At 16 he started a web design firm that a former colleague says made him a millionaire by the end of high school.”

“Ghostrider, his self-balancing, self-driving motorcycle, was the only two-wheel vehicle in the contest [the 2004 DARPA Challenge].”

“Anthony is a rogue force of nature,” says a former Google self-driving car executive. “Each phase of his Google career he had a separate company doing exactly the same work.” According to two former Google employees, founders Page and Sergey Brin tolerated Levandowski’s freelancing because they saw it as the fastest way to make progress. Google’s car team embraced Levandowski’s nature, too. The attitude, says a former colleague, was “he’s an asshole, but he’s our asshole.”

Read the whole thing.

Udacity Students on Computer Vision and Deep Learning

Here are five Udacity Self-Driving Car students that went above and beyond the project requirements by using trigonometry, color spaces, dashcams, buffers, and teaching other students.

Alright Squares; Let’s Talk Triangles

Andrew Hogan

Andrew digs way back into high school trigonometry to investigate the relationship between the multiple cameras used in the Behavioral Cloning Project. The result is a pretty awesome run:

“As the car is making a stronger turn from the center angle, it can be safely assumed that the distance between the center camera and its goal (point A) is shrinking in relation to when point A was a significant distance away from the car — when steering angle D was 0. This can easily be seen in how far away the most distant center point on the road is when the car is driving straight versus when it is in a 25 degree turn. After getting tired of manually calculating SOH CAH TOA/law of cosines and sines over and over again, I wrote a couple python scripts to chart out what the change in the angle would be given a center angle and a distance between each camera along with a distance to the goal at 0 degrees and the distance to the goal at the given angle.”
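The geometry Andrew works through can be sketched in a few lines: put the goal point at the centre camera's steering angle and distance, shift the viewpoint by the camera offset, and recompute the angle. This is my own simplification of the idea, not Andrew's scripts, and the names are illustrative:

```python
import math

def side_camera_angle(center_angle_deg, goal_dist, camera_offset):
    """Angle to the goal as seen from a camera shifted `camera_offset`
    metres left of centre, with the goal `goal_dist` metres away at
    `center_angle_deg` from the centre camera."""
    th = math.radians(center_angle_deg)
    lateral = goal_dist * math.sin(th) + camera_offset  # goal shifts right in this camera's frame
    forward = goal_dist * math.cos(th)
    return math.degrees(math.atan2(lateral, forward))
```

As Andrew observes, the correction grows as the goal distance shrinks: the same lateral camera offset subtends a larger angle on a near goal than a far one.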

Finding Lane Lines with Colour Thresholds

Joshua Owoyemi

Joshua is a Nigerian PhD student at a Japanese university, making him a perfect example of Udacity’s goal of democratizing education. He also does a terrific job experimenting with various color spaces for isolating lane lines:

“We know that white is represented by (255,255,255) in both RGB and HSL colour space, but we have to ascertain the values for the yellow lane colour. To do this, I pulled out colour picker from the Inkscape software, just to have a visual representation.”
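Once the colour values are known, isolating the lanes reduces to simple per-channel thresholds. With an RGB image as a NumPy array, a sketch might look like this — the yellow thresholds here are illustrative guesses, not Joshua's picked values:

```python
import numpy as np

def lane_mask(rgb):
    """Boolean mask of near-white or yellow-ish pixels in an RGB image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    white = (r > 200) & (g > 200) & (b > 200)
    yellow = (r > 180) & (g > 150) & (b < 120)  # high red and green, low blue
    return white | yellow
```

Applying the mask before edge detection throws away most of the scene and leaves the Hough transform far less to get confused by.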

My Dashcam And My Lane Finding Algorithm

Chris Edwards

Chris bought a dashcam and super-imposed his lane-finding algorithm on one of his regular routes — up the 5 in San Diego:

“I found it eerily calming to safely watch a drive I do way too often. It’s like the difference between watching a horse and being dragged behind one. The video starts just after what I believe to be the most dangerous on-ramp in the world. Having survived that, we can check out the camera, my driving, and my algorithm’s ability to find lane lines.”

CarND Project 1: Lane Lines Detection — A Complete Pipeline

Kirill Danilyuk

Kirill goes way beyond the requirements in his initial lane-finding project, and builds a computer vision pipeline that can identify lanes in all sorts of conditions. In particular, he develops a nice stability heuristic:

“Lane lines stability. This is an important issue to be addressed. There are several stabilization techniques I used:

1. Buffers. My lane line objects memorize N recent states and update the buffer itself by inserting a line state from the current frame.

2. Smarter lane line state updates. If we still get noisy data after our filtering efforts, line fitting can easily go wrong. If we see that the estimated slope of the fitted line from the current frame differs too much from the buffer’s average, we need to treat this line more conservatively. For this very purpose, I created DECISION_MAT, which is a simple decision matrix on how to combine current line position and buffer’s average position.”
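The conservative-update idea can be sketched as: when the new slope deviates too far from the buffer average, blend it in with a small weight instead of trusting it outright. The class, thresholds, and weights below are illustrative stand-ins, not Kirill's DECISION_MAT:

```python
from collections import deque

class GuardedLine:
    """Rolling slope average that distrusts outlier detections."""
    def __init__(self, n=10, max_dev=0.2, cautious_weight=0.1):
        self.slopes = deque(maxlen=n)
        self.max_dev = max_dev
        self.w = cautious_weight

    def update(self, slope):
        if self.slopes:
            avg = sum(self.slopes) / len(self.slopes)
            if abs(slope - avg) > self.max_dev:
                # Outlier: nudge toward it rather than adopt it fully
                slope = avg + self.w * (slope - avg)
        self.slopes.append(slope)
        return sum(self.slopes) / len(self.slopes)
```

A single noisy frame then barely moves the drawn line, while a genuine curve still pulls the average over after a few consistent frames.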

Teaching a car to drive with deep learning

Robin Stringer

Robin has an easy-to-follow writeup of my sequence of videos demonstrating an iterative approach to completing the Behavioral Cloning Project. Start with some basic data, normalize it, augment it, add side cameras, and keep going:

“This is the stage at which the car began to make it around the track successfully for me. The behavioral cloning project is a great lesson in the fun and value in experimentation when working with deep learning. Altering model architecture and parameters help us build an intuition of how convolutions, drop out layers and subsamples work together to make a useful model.”

Udacity Students on Deep Learning, Hiring, and CarND

Five Udacity students muse about deep learning, hiring, and the Self-Driving Car Program.

Behavioral Cloning — Self-Driving Car Simulation

Jonathan Mitchell

Jonathan’s post has a really nice walkthrough of his behavioral cloning project, including a visual explanation of the data pre-processing pipeline:

“Our data comes in as 160 x 320 x 3 RGB images. I used several preprocessing techniques to augment, transform, and create more data to give my network a better chance at generalizing to different track features.”

Vehicle tracking using a support vector machine vs. YOLO

Kaspar Sakmann

Kaspar has a terrific comparison of vehicle detection pipelines using standard computer vision, compared with a deep learning solution using YOLO:

“A forward pass of an entire image through the network is more expensive than extracting a feature vector of an image patch and passing it through an SVM. However, this operation needs to be done exactly once for an entire image, as opposed to the roughly 150 times in the SVM+HOG approach. For generating the video above I did no performance optimization, like reducing the image or defining a region of interest, or even training specifically for cars. Nevertheless, YOLO is more than 20x faster than the SVM+HOG and at least as accurate.”

Five Skills Self-Driving Companies Need

Caleb Kirksey

Caleb put together an awesome list of autonomous vehicle skills, and which companies are looking for them:

“The one constant in all of the postings is that experience with programming in C++ is a highly sought attribute for self-driving companies. Since performance is so vital for any code running on a real time system, it’s necessary to use a language that can be compiled to machine code for speed.”

Machine versus human learning in traffic sign classification

Arnaldo Gunzi

Arnaldo has a fun comparison of how machine learning compares to human learning, specifically applied to the Traffic Sign Classifier Project:

“Overfit: the guy who has a perfect grade in school, in all subjects, but outside school knows nothing in real world. Or someone who has a phD in nuclear advanced theoretical gravitational quantum physics, but works as a waiter in a restaurant, because his knowledge is so specific it has no real world application.”

Self-Driving Car Engineer Diary — 7

Andrew Wilkie

Andrew has a generous assessment of Term 1 of CarND:

“Amazing projects. Steep learning curve. Strong student community. Incredibly supportive and adaptive Udacity staff. Be prepared to commit 2–3 times estimated 10 hours per week to complete Term 1 successfully as projects encourage experimentation. Now to catch-up on sleep before the start of Term 2 on 24/Mar/2017.”

Bosch Will Package NVIDIA Drive PX

Bosch will be packaging the NVIDIA Drive PX platform for use by automotive manufacturers, which is a huge win for the NVIDIA automotive group.

The automotive supply chain is highly-structured, with Tier 2 suppliers (like NVIDIA) providing specialized products to Tier 1 suppliers (like Bosch) who package them as automotive-grade components that automotive manufacturers (like Mercedes-Benz or BMW) can use.

One of the big challenges for NVIDIA, Intel, and any other chipmaker breaking into the automotive space is how to take their incredible technology and fit it into the automotive supply chain, with all of the safety and reliability checks that entails.

Bosch solves that manufacturing and distribution problem for NVIDIA.

Udacity CarND Student Posts on Suspense, Lane Lines, Transfer Learning, and Building a Deep…

Tremendous posts this week from Udacity Self-Driving Car Engineer Nanodegree Program students.

If you want to learn what these folks are learning, apply to join us!

My first step towards building a Self Driving Car !!

Sujay Babruwad

Sujay has an easy-to-follow recipe for building the first project in the Nanodegree Program — a software pipeline for finding lane lines:

“1. Convert the image to gray scale

2. Apply Gaussian blur to the image to smoothen the image

3. Apply canny edge detection algorithm

4. Apply a filter to remove the unwanted area in the image, like the one above the horizon

5. Apply hough transform and plot the lines that are formed from the points detected in the canny edge step.

6. Based on the slope find the left lines and right lines

7. Find the largest left and right lines.

8. Consider an imaginary horizontal line in the middle of image and another line at the bottom of the image.

9. Find the intersect of largest left and right lines on these imaginary lines using the Cramer’s rule. Plot a line with these intersect point.”
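Step 9's Cramer's-rule intersection is a two-equation solve: each line written as a·x + b·y = c, with the horizontal reference lines as a = 0 cases. A minimal sketch (not Sujay's code):

```python
def cramer_intersect(a1, b1, c1, a2, b2, c2):
    """Intersection of lines a1*x + b1*y = c1 and a2*x + b2*y = c2
    via Cramer's rule; returns None for parallel lines."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y
```

For example, intersecting the fitted lane line with the horizontal line y = y0 gives the x position at which to anchor the plotted segment.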

The Grand Prix of Nevada of autonomous vehicles

Arnaldo Gunzi

Arnaldo presents a gripping re-telling of the 2005 DARPA Grand Challenge! If you want the full version, you can watch The Great Robot Race, but for the Cliffs Notes:

“And the inevitable happens. The most epic moment in the history of autonomous cars! The first overtaking in the history of autonomous cars in the planet! Stanley overtakes Highlander! Ayrton Senna overtakes Alain Proust (sorry Frenchs, I’m Brazilian)!”

Transfer Learning for Behavioral Cloning

Kosuke Murakami

Kosuke provides a great guide for how to use transfer learning for a behavioral cloning problem. Specifically, Kosuke uses VGG16 and applies techniques like batch normalization and parameter reduction:

“The model based on transfer learning could be really sensitive to noise data although it is not validated in any papers. In conclusion, cleaning data made my model much better.”

Detecting Lane Lines in Python with OpenCV

David Lichtenberg

David has a super easy-to-follow post about how to find lane lines on the road, which is the first project in the Udacity Self-Driving Car Engineer Nanodegree Program:

“We just stepped through a practical image processing pipeline using opencv and python! It goes without saying, this is not road ready, but it’s exciting to see how much we can do with a little image manipulating and line filtering logic. What a nice starting point.”

Meet Fenton (my data crunching machine)

Alex Staravoitau

Alex’s post goes above and beyond other posts that walk through building your own deep learning machine. He provides an add-on section that describes how to set up the machine as an always-on server, accessible from anywhere:

“Arguably the most important step is picking your machine’s name. I named mine after this famous dog, probably because when making my first steps in data science, whenever my algorithm failed to learn I felt just as desperate and helpless as Fenton’s owner. Fortunately, this happens less and less often these days!”

Intel Buys Mobileye

http://www.autonews.com/article/20170313/MOBILITY/170319961/intel-to-buy-autonomous-tech-firm-mobileye-in-14-7-billion-deal

Huge news in the automotive world — Intel is acquiring computer vision supplier Mobileye for $14.7 billion.

Intel has been working on self-driving cars from multiple angles, including partnerships with BMW and Delphi and Mobileye, as well as standalone compute platforms. But NVIDIA has emerged as the dominant supplier for automotive processors, and Intel’s had a hard time catching up.

With the purchase of Mobileye, Intel will come at autonomous vehicles from a whole new angle — as a sensor and computer vision supplier.

This is an area that Mobileye has dominated, so Intel is essentially buying its way into that vertical. Presumably the plan is to use Intel’s manufacturing and marketing power to scale this business dramatically, and use it to tie back to Intel computational platforms.

It seems like a pretty good plan, although it remains to be seen if Intel will allow Mobileye to take advantage of NVIDIA GPUs.