Udacity CarND Student Posts on Suspense, Lane Lines, Transfer Learning, and Building a Deep…

Tremendous posts this week from Udacity Self-Driving Car Engineer Nanodegree Program students.

If you want to learn what these folks are learning, apply to join us!

My first step towards building a Self Driving Car !!

Sujay Babruwad

Sujay has an easy-to-follow recipe for building the first project in the Nanodegree Program — a software pipeline for finding lane lines:

“1. Convert the image to grayscale.

2. Apply a Gaussian blur to smooth the image.

3. Apply the Canny edge detection algorithm.

4. Apply a filter to remove the unwanted area of the image, like the region above the horizon.

5. Apply a Hough transform and plot the lines formed from the points detected in the Canny edge step.

6. Based on slope, separate the left lines and right lines.

7. Find the largest left and right lines.

8. Consider an imaginary horizontal line in the middle of the image and another at the bottom of the image.

9. Find the intersections of the largest left and right lines with these imaginary lines using Cramer’s rule. Plot a line through these intersection points.”
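Step 9 is the only mathematically fiddly part. As a rough sketch (assuming each detected line is kept in slope/intercept form, which the post doesn't spell out), Cramer's rule solves the 2×2 system formed by a lane line and one of the imaginary horizontal lines:

```python
def intersect(line1, line2):
    """Intersection of two lines a*x + b*y = c via Cramer's rule.

    Each line is a tuple (a, b, c). Returns (x, y), or None if the
    determinant vanishes (parallel lines, no unique intersection).
    """
    a1, b1, c1 = line1
    a2, b2, c2 = line2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return (x, y)

def slope_intercept(m, b):
    """Rewrite y = m*x + b as m*x - y = -b for use with intersect()."""
    return (m, -1.0, -b)

# Intersect a detected lane line (slope 0.7, intercept 40) with the
# imaginary horizontal line y = 320 in the middle of a 640-pixel-tall image.
lane = slope_intercept(0.7, 40.0)
mid_line = (0.0, 1.0, 320.0)  # 0*x + 1*y = 320
x, y = intersect(lane, mid_line)
```

The slope/intercept helper and the example numbers are illustrative, not taken from Sujay's code.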

The Grand Prix of Nevada of autonomous vehicles

Arnaldo Gunzi

Arnaldo presents a gripping re-telling of the 2005 DARPA Grand Challenge! If you want the full version, you can watch The Great Robot Race, but for the Cliffs Notes:

And the inevitable happens. The most epic moment in the history of autonomous cars! The first overtaking in the history of autonomous cars on the planet! Stanley overtakes Highlander! Ayrton Senna overtakes Alain Prost (sorry, French friends, I’m Brazilian)!

Transfer Learning for Behavioral Cloning

Kosuke Murakami

Kosuke provides a great guide for how to use transfer learning for a behavioral cloning problem. Specifically, Kosuke uses VGG16 and applies techniques like batch normalization and parameter reduction:

“The model based on transfer learning could be really sensitive to noise data although it is not validated in any papers. In conclusion, cleaning data made my model much better.”
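Kosuke's post uses Keras and VGG16, but the core idea of transfer learning — freeze a pretrained feature extractor and train only a small new head on top — can be sketched framework-free. Below, a fixed random projection stands in for the frozen VGG16 convolutional base (an illustrative assumption, not Kosuke's actual model), and a logistic-regression head is trained on its outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two classes of 64-dimensional "images".
X = np.vstack([rng.normal(-1.0, 0.5, (50, 64)),
               rng.normal(+1.0, 0.5, (50, 64))])
y = np.array([0] * 50 + [1] * 50)

# "Pretrained base": a frozen random projection standing in for the
# VGG16 convolutional layers. Its weights are never updated.
W_base = rng.normal(0, 0.1, (64, 16))
features = np.maximum(X @ W_base, 0.0)  # frozen ReLU features

# Trainable head: logistic regression on the frozen features.
w = np.zeros(16)
b = 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # sigmoid
    w -= lr * (features.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(features @ w + b))) > 0.5).astype(int)
accuracy = np.mean(pred == y)
```

Only the head's parameters move during training, which is what makes transfer learning cheap — and, as Kosuke observes, what makes noisy training data so damaging: the frozen base can't adapt around it.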

Detecting Lane Lines in Python with OpenCV

David Lichtenberg

David has a super easy-to-follow post about how to find lane lines on the road, which is the first project in the Udacity Self-Driving Car Engineer Nanodegree Program:

“We just stepped through a practical image processing pipeline using opencv and python! It goes without saying, this is not road ready, but it’s exciting to see how much we can do with a little image manipulating and line filtering logic. What a nice starting point.”

Meet Fenton (my data crunching machine)

Alex Staravoitau

Alex’s post goes above and beyond other posts that walk through building your own deep learning machine. He provides an add-on section that describes how to set up the machine as an always-on server, accessible from anywhere:

“Arguably the most important step is picking your machine’s name. I named mine after this famous dog, probably because when making my first steps in data science, whenever my algorithm failed to learn I felt just as desperate and helpless as Fenton’s owner. Fortunately, this happens less and less often these days!”

Intel Buys Mobileye

http://www.autonews.com/article/20170313/MOBILITY/170319961/intel-to-buy-autonomous-tech-firm-mobileye-in-14-7-billion-deal

Huge news in the automotive world — Intel is acquiring computer vision supplier Mobileye for $15 billion.

Intel has been working on self-driving cars from multiple angles, including partnerships with BMW, Delphi, and Mobileye, as well as standalone compute platforms. But NVIDIA has emerged as the dominant supplier of automotive processors, and Intel has had a hard time catching up.

With the purchase of Mobileye, Intel will come at autonomous vehicles from a whole new angle — as a sensor and computer vision supplier.

This is an area that Mobileye has dominated, so Intel is essentially buying its way into that vertical. Presumably the plan is to use Intel’s manufacturing and marketing power to scale this business dramatically, and use it to tie back to Intel computational platforms.

It seems like a pretty good plan, although it remains to be seen if Intel will allow Mobileye to take advantage of NVIDIA GPUs.

Autonomous News Roundup

https://9to5mac.com/2017/03/08/didi-self-driving-lab-mountain-view/

Big news out of Udacity’s Intersect Conference!

  1. Didi will be launching a Silicon Valley Lab to focus on autonomous vehicles.
  2. Didi and Udacity are teaming up for a $100k Self-Driving Car Challenge, open to all comers!


Sounds like what Andrew Ng has been preaching at Baidu for the last year or two.


The California DMV is amending its rules to allow for driverless testing!


Chris Brewer, the Chief Engineer of Ford’s autonomous vehicle program, provides a great reminder that there is more to a self-driving car than just the virtual driver system. There’s also a car underneath it all 🙂

Toyota Goes Public

A year and a half ago, Toyota announced that it would invest $1 billion into a new entity called the Toyota Research Institute (TRI).

And then…nothing else, really.

TRI has been pretty quiet for 18 months.

A few days ago, though, they broke their silence with a private track demonstration in Sonoma.

The platform is the second generation of the advanced safety research vehicle revealed to the public by Toyota at the 2013 Consumer Electronics Show. It is built on a current-generation Lexus LS 600hL, which features a robust drive-by-wire interface. The 2.0 is designed to be a flexible, plug-and-play test platform that can be upgraded continuously and often. Its technology stack will be used to develop both of TRI’s core research paths: Chauffeur and Guardian systems.

Chauffeur refers to the always-deployed, fully autonomous system classified by SAE as unrestricted Level 5 autonomy and Level 4 restricted, geo-fenced operation.

Guardian is a high-level driver-assist system, constantly monitoring the driving environment inside and outside the vehicle, ready to alert the driver of potential dangers and stepping in when needed to assist in crash avoidance.

I’m excited to see Toyota share more of what they’re doing.

This is the world’s largest auto manufacturer, and I assume they will bring their A-game to the table.

The Importance of Maps

The New York Times ran a good story about the importance of maps to self-driving cars, and the relatively few companies that build them.

This is an often-overlooked chokepoint in the self-driving car ecosystem.

Digital maps ease the burden by helping give foresight to a car’s computers, and adding redundancy to the car’s understanding of the situation it faces, said Civil Maps’ chief executive, Sravan Puttagunta. Radar and cameras cannot always recognize a stop sign if pedestrians are standing in the way or the sign has been knocked down, he explained.

“But if the map knows there is a stop sign ahead, then the sensors just need to confirm it,” Mr. Puttagunta said. “Then the load on the sensors and processor is much lower.”

Read the whole thing.

NIO Sets Electric Autonomous Speed Record

The NIO EP9 is officially the fastest electric, autonomous car in the world.

The NIO EP9 electric supercar wasn’t content with merely entering the never-ending vehicular stat war — it recently set a couple of lap records at Austin’s Circuit of the Americas, including one for the fastest production car ever to run there. In case that wasn’t enough, it set a driverless lap record for the track, too. The startup automaker now claims that it is the fastest electric autonomous car around.

Jalopnik reports that NIO’s engineers built the car’s autonomous software in just four months.

Did I mention that NIO is a hiring partner for Udacity’s Self-Driving Car Engineer Nanodegree Program?

Here’s a question-and-answer session between our students and NIO CEO Padmasree Warrior, with me moderating.

And here’s Padmasree and Sebastian Thrun at Udacity Talks!

Startup Watch: Embark

News broke about a week ago of a new startup called Embark, which is targeting self-driving trucks.

I first heard of this company about a year ago, when they were called Varden Labs. My father-in-law is a retired Sacramento State University administrator, and he was excited that Varden Labs was testing self-driving shuttles on campus.

The Varden Labs website is still up, and it looks like they are hiring.

Waymo v. Uber

I’m pretty late to the Waymo v. Uber commentary stream, and I don’t have anything substantive to contribute.

Otto has been a great partner to the Udacity Self-Driving Car Engineer Nanodegree Program, and they are genuinely excited to teach people about how to get jobs working on self-driving cars. Our partnership has only gotten better since Otto became Uber ATG. I’ve met Anthony Levandowski briefly and he seems like a gentleman, and I know Sebastian thinks highly of him.

So that’s full disclosure.

My main reaction, though, is just how surprising the topic of the lawsuit is. Google sues Uber and the suit hinges on the design of lidar hardware?

Didn’t see that one coming.

6 Awesome Projects from Udacity Students (and 1 Awesome Thinkpiece)

Udacity students are constantly impressing us with their skill, ingenuity, and their knowledge of the most obscure features in Slack.

Here are 6 blog posts that will astound you, and 1 think-piece that will blow your mind.

How to identify a Traffic Sign using Machine Learning !!

Sujay Babruwad

Sujay manages his data in a few clever ways for the traffic sign classifier project. First, he converted all of his images to grayscale. Then he skewed and augmented them. Finally, he balanced the data set. The result:

“The validation accuracy attained 98.2% on the validation set and the test accuracy was about 94.7%”
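The three data-management steps Sujay describes can be sketched in NumPy (the helper names and the tiny demo below are illustrative, not from Sujay's code):

```python
import numpy as np

rng = np.random.default_rng(42)

def to_grayscale(rgb):
    """Convert an (H, W, 3) RGB image to (H, W) using luma weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def augment(img):
    """Cheap augmentation: a small random horizontal shift."""
    shift = rng.integers(-2, 3)
    return np.roll(img, shift, axis=1)

def balance(images, labels):
    """Oversample minority classes so every class has equal counts."""
    labels = np.asarray(labels)
    counts = np.bincount(labels)
    target = counts.max()
    out_imgs, out_labels = [], []
    for cls in range(len(counts)):
        idx = np.flatnonzero(labels == cls)
        chosen = rng.choice(idx, size=target, replace=True)
        out_imgs.extend(images[i] for i in chosen)
        out_labels.extend([cls] * target)
    return out_imgs, np.array(out_labels)

# Tiny demo: 3 images of class 0, 1 image of class 1.
imgs = [rng.random((32, 32, 3)) for _ in range(4)]
labels = [0, 0, 0, 1]
gray = [augment(to_grayscale(im)) for im in imgs]
balanced_imgs, balanced_labels = balance(gray, labels)
```

Balancing by oversampling is one of several options; Sujay's post has the details of his particular skewing and augmentation choices.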

Udacity Advance Lane Finding Notes

A Nguyen

An’s post is a great step-through of how to use OpenCV to find lane lines on the road. It includes lots of code samples!

“Project summary:
– Apply calibration to all chessboard images taken from the same camera that recorded the driving, to obtain the distortion coefficients and camera matrix.
– Apply a perspective transform and warp the image to obtain a bird’s-eye view of the road.
– Apply a binary threshold by combining the x & y derivatives, magnitude, direction, and S channel.
– Reduce noise and locate the left & right lanes using histogram data.
– Draw the lane lines over the image.”
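An's code uses OpenCV, but the thresholding idea — combine gradient magnitude and gradient direction masks into one binary image — can be sketched with plain NumPy gradients (a simplification of the Sobel-based version in the post; the threshold values are illustrative):

```python
import numpy as np

def combined_threshold(gray, mag_min=0.2, dir_max=0.5):
    """Binary mask where the gradient magnitude is high and the gradient
    direction is close to horizontal in gradient space, i.e. the edge
    itself is close to vertical (as lane lines tend to be)."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() + 1e-9          # normalize to [0, 1]
    direction = np.arctan2(np.abs(gy), np.abs(gx) + 1e-9)
    return (magnitude > mag_min) & (direction < dir_max)

# Synthetic road: dark image with one bright vertical "lane line".
img = np.zeros((80, 80))
img[:, 40:43] = 1.0
mask = combined_threshold(img)
```

The real pipeline additionally mixes in an S-channel threshold from HLS color space, which this grayscale-only sketch omits.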

P5: Vehicle Detection with Linear SVC classification

Rana Khalil

Rana’s video shows the amazing results that are achievable with Support Vector Classifiers. Look at how well the bounding boxes track the other vehicles on the highway!
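Rana's classifier is scikit-learn's LinearSVC trained on HOG features. The core of a linear SVC — minimizing hinge loss with a margin penalty — can be sketched in a few lines of NumPy (with toy 2-D features standing in for real HOG vectors):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-ins for HOG feature vectors: "vehicle" (+1) vs "not vehicle" (-1).
X = np.vstack([rng.normal(+2.0, 0.5, (40, 2)),
               rng.normal(-2.0, 0.5, (40, 2))])
y = np.array([+1] * 40 + [-1] * 40)

# Linear SVM trained by subgradient descent on the regularized hinge loss:
#   L = mean(max(0, 1 - y * (X @ w + b))) + lam * ||w||^2
w = np.zeros(2)
b = 0.0
lr, lam = 0.05, 0.01
for _ in range(300):
    margins = y * (X @ w + b)
    mask = margins < 1                # samples violating the margin
    w -= lr * (-(y[mask, None] * X[mask]).sum(axis=0) / len(y) + 2 * lam * w)
    b -= lr * (-y[mask].sum() / len(y))

accuracy = np.mean(np.sign(X @ w + b) == y)
```

In the real project, positive detections at multiple scales are accumulated into a heat map and thresholded to produce the stable bounding boxes visible in Rana's video.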

Updated! My 99.40% solution to Udacity Nanodegree project P2 (Traffic Sign Classification)

Cherkeng Heng

Cherkeng’s approach to the Traffic Sign Classification Project was based on an academic paper that uses “dense blocks” of convolutional layers to fit the training data tightly. He also uses several clever data augmentation techniques to prevent overfitting. Here’s how that works out:

“The new network is smaller with test accuracy of 99.40% and MAC (multiply–accumulate operation counts) of 27.0 million.”
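The "dense block" idea — every layer receives the concatenation of all previous feature maps, so channel count grows by the growth rate k per layer — can be illustrated with a minimal NumPy forward pass (1×1 convolutions only, purely to show the wiring; this is not Cherkeng's actual network):

```python
import numpy as np

rng = np.random.default_rng(1)

def dense_block(x, num_layers, growth_rate):
    """Forward pass of a dense block on an (H, W, C) feature map.

    Each layer applies a 1x1 convolution (a channel-mixing matrix) to the
    concatenation of ALL previous feature maps and contributes
    `growth_rate` new channels, so channels grow as C + num_layers * k.
    """
    features = [x]
    for _ in range(num_layers):
        concat = np.concatenate(features, axis=-1)
        w = rng.normal(0, 0.1, (concat.shape[-1], growth_rate))
        new = np.maximum(concat @ w, 0.0)  # 1x1 conv + ReLU
        features.append(new)
    return np.concatenate(features, axis=-1)

x = rng.random((8, 8, 16))              # 16 input channels
out = dense_block(x, num_layers=4, growth_rate=12)  # 16 + 4*12 = 64 channels
```

The dense connectivity is what lets a small growth rate still fit the training data tightly: every layer sees every earlier feature map directly.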

Advanced Lane Line Project

Arnaldo Gunzi

Arnaldo has a thorough walk-through of the Udacity Advanced Lane Finding Project. If you want to know how to use computer vision to find lane lines on the road, this is a perfect guide!

“1 Camera calibration
2 Color and gradient threshold
3 Birds eye view
4 Lane detection and fit
5 Curvature of lanes and vehicle position with respect to center
6 Warp back and display information
7 Sanity check
8 Video”
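Step 5, the curvature calculation, is the least obvious: fit a second-order polynomial x = a·y² + b·y + c to the lane pixels, then evaluate the standard radius-of-curvature formula R = (1 + (2ay + b)²)^{3/2} / |2a|. A minimal sketch (the synthetic lane coefficients are illustrative, not from Arnaldo's post):

```python
import numpy as np

def curvature_radius(ys, xs, y_eval):
    """Fit x = a*y^2 + b*y + c to lane pixels and return the radius of
    curvature at y_eval (in the same units as the inputs)."""
    a, b, c = np.polyfit(ys, xs, 2)
    return (1 + (2 * a * y_eval + b) ** 2) ** 1.5 / abs(2 * a)

# Synthetic lane pixels sampled from x = 0.001*y^2 + 0.2*y + 100.
ys = np.linspace(0, 719, 50)
xs = 0.001 * ys ** 2 + 0.2 * ys + 100
radius = curvature_radius(ys, xs, y_eval=719)
```

In practice the pixel coordinates are first scaled to meters so the radius comes out in real-world units, and the curve is evaluated at the bottom of the image, closest to the car.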

Build a Deep Learning Rig for $800

Nick Condo

I love this how-to post that lists all the components for a mid-range deep learning rig. Not too cheap, not too expensive. Just right.

Here’s how it does:

“As you can see above, my new machine (labeled “DL Rig”) is the clear winner. It performed this task more than 24 times faster than my MacBook Pro, and almost twice as fast as the AWS p2.xlarge instance. Needless to say, I’m very happy with what I was able to get for the price.”

How Gig Economy Startups Will Replace Jobs with Robots

Caleb Kirksey

Companies like Uber, Lyft, Seamless, Fiverr, and Upwork facilitate armies of independent contractors who work “gigs” on their own time, earning as much as they want, but without the structure of traditional employment.

Caleb makes the point that, for all the press the gig economy gets, the end might be in sight. Many of these gigs might soon be replaced by computers and robots. He illustrates this point with his colleague, Eric, who works as a safety driver for the autonomous vehicle startup Auro Robotics. Auro’s whole mission is to eliminate Eric’s job!

“Don’t feel too bad for Eric though. He’s become skilled with hardware and robotics. His experience working in cooperation with a robot can enable him to build better systems that don’t need explicit instructions.”

The Self-Driving Polity that is Arizona

What’s different is that this time, Uber has the blessing of Arizona’s top politician, Governor Doug Ducey, a Republican, who is expected to be “Rider Zero” on an autonomous trip along with Anthony Levandowski, VP of Uber’s Advanced Technologies Group. The Arizona pilot comes after California’s Department of Motor Vehicles revoked the registration of Uber’s 16 self-driving cars because the company refused to apply for the appropriate permits for testing autonomous cars.

As Louis Brandeis said, the states are “laboratories of democracy”.

Read the whole thing (it’s short).