The “Career Services Available to You” Lesson

The guiding star of the Udacity Self-Driving Car Engineer Nanodegree Program is jobs. Everything we do ultimately connects to preparing students to become autonomous vehicle engineers.

For that reason, Udacity has invested heavily in career support for students. Every student has access to optional projects through which they can get personalized reviews of résumés and cover letters, as well as guidance for online profiles on sites like LinkedIn, GitHub, and Udacity’s own Career Portal.

In this lesson, students hear from our Careers Team about the services and extracurricular professional activities that Udacity offers for students.

There are also videos from three of our content partners in the Nanodegree — Mercedes-Benz, NVIDIA, and Uber ATG — explaining what it’s like to work with them, how to get a job with them, and the value the Nanodegree Program provides.

We also have pointers to extracurricular lessons that are available to all Nanodegree students on two topics: “Job Search Strategies”, and “Networking”.

The “Job Search Strategies” lesson covers how to build a résumé and cover letter tailored to a specific job, as well as strategies for finding that job.

The “Networking” lesson offers tips for building your personal brand and developing a network that can push job opportunities in your direction. There are also optional projects through which you can get personal reviews of your GitHub, LinkedIn, and Udacity profiles.

The “Finding Lane Lines” Project

Udacity Self-Driving Car Engineer Nanodegree program

The second unit of the Udacity Self-Driving Car Engineer Nanodegree program is actually a lesson followed by a project. In “Finding Lane Lines”, my colleague Ryan Keenan and I teach students how to use computer vision to extract lane lines from a video of a car driving down the road.

Students are able to use this approach to find lane lines within the first week of the Nanodegree program! This isn’t the only way to find lane lines, and with modern machine learning algorithms it’s no longer the absolute best way to find lane lines. But it’s pretty effective, and it’s amazing how quickly you can get going with this approach.

Here’s a photo of Interstate 280, taken from Carla, Udacity’s own self-driving car:

The first thing we’re going to do is convert the image to grayscale, which will make it easier to work with, since we’ll only have one color channel:

Next, we’ll perform “Canny edge detection” to identify edges in the image. An edge is a place where the color or intensity of the image changes sharply:

Now that we have the edges of the image identified, we can use a technique called a “Hough transform” to find lines in the image that might be the lane lines we are looking for:

All of these tools have parameters we can tune: how sharp the edges should be, how long the lines should be, and what slope the lines should have. If we tune the parameters just right, we can get a lock on our lane lines:

Apply these lane lines to the original image, and you get something like this submission to the “Finding Lane Lines” project by our student Jeremy Shannon:

Pretty awesome for the first week!

The “Welcome” Lesson

Udacity Self-Driving Car Engineer Nanodegree program

“Welcome” is the first of 20 lessons in Term 1 of the Udacity Self-Driving Car Engineer Nanodegree program.

This is an overview lesson in which we introduce:

We also cover the history of self-driving cars, the logistics of how Udacity and this Nanodegree program work, and the projects that students will build throughout the program.

I’ll let Sebastian share that last bit:

Next up, the “Finding Lane Lines” project!

Blogging the Udacity Self-Driving Car Engineer Nanodegree Program

Carla, the Udacity Self-Driving Car!

For the last year and a quarter, I’ve been working with a team at Udacity to build the Self-Driving Car Engineer Nanodegree program. This is a nine-month program that prepares software engineers for jobs working on autonomous vehicles.

Over the coming weeks and months, I’m going to produce a new post about each of the lessons in the Nanodegree program, to help you explore what you can learn. As of right now, there are 67 lessons, so I anticipate this process will take me several months to complete. But I’m excited to spend time reviewing and sharing what we’ve built!

During our program we cover: computer vision, deep learning, sensor fusion, localization, path planning, control, advanced electives, and finally system integration. In the final part of the program, students even get to put their own code on Carla, Udacity’s actual self-driving car.

I’ll start today with a quick post about our first lesson, entitled “Welcome”.

No Hands

Waymo recently invited a group of journalists, including TechCrunch’s Darrell Etherington, on a tour of their Castle testing facility for self-driving cars. (“Castle” was the name of the Air Force base that occupied the site before Waymo took over.)

Etherington wrote three posts based on the visit, all of which are worth reading.

“Building the best possible driver inside Waymo’s Castle” is short and sets the stage for the next two posts, although this first post doesn’t really break any new ground for those of us who’ve read about Castle already.

“Structured Testing sounds kind of complicated but it’s actually explained in the name — Waymo sets up (structures) tests using its self-driving vehicles (the latest generation Chrysler Pacifica-based test car in the examples we saw), as well as things they call “fauxes” (pronounced “foxes” by [Stephanie Villegas, Waymo’s Structured Testing Lead]). These are other cars, pedestrians, cyclists and other variables (contractors working for Waymo) who replicate the real world conditions that Waymo is trying to test for. The team runs these tests over and over, “as many times as we can where we’re still seeing improvement” per Villegas — and each time the conditions will vary slightly since it’s real-world testing with actual human beings.”

“Taking a truly driverless ride in Waymo’s Chrysler Pacifica” covers Etherington’s first ride in a vehicle that literally had nobody in the driver’s seat. California recently legalized this type of testing on public roads, and although I haven’t seen anybody do it on actual public streets yet, I had figured Waymo must have been doing this at Castle. Now we know.

“I’ve done a lot of self-driving vehicle demos, including in Waymo’s own previous-generation Lexus test vehicles, so I wasn’t apprehensive about being ferried around in Waymo’s Chrysler Pacifica minivan to begin with. But the experience still took me by surprise, in terms of just how freeing it was once it became apparent that the car was handling shit all on its own, and would continue to do so safely regardless of what else was going on around it.”

“Waymo focuses on user experience, considers next steps” provides the best look I’ve seen inside of Waymo’s self-driving Chrysler Pacifica minivans. The emphasis is on the seatback monitors that communicate to riders what the virtual driver system is “thinking”.

““It’s really key for riders to focus their attention on the critical elements for a given situation,” [Waymo UX leader Ryan Powell] says, explaining why they’ve chosen to exclude some visual elements, and to do things like place flashing highlights on any emergency service vehicles picked up by the Waymo sensor suite.”

The kicker?

“When asked directly for a timeline on a public service launch, Waymo CEO John Krafcik declined to even claim a specific year, but he did say it’s probably going to happen sooner than many would believe.”

Baidu Apollo

Last week I went with several of my Udacity colleagues to Baidu’s Sunnyvale, California, office to attend a Meetup they held for their Apollo open-source self-driving car platform.

Baidu, which is often referred to as “the Google of China”, is pouring a ton of money and attention into Apollo, and hopes it will become the platform on which other developers build their autonomous vehicle projects. Keep an eye on it.

Here are some clips from the event:

Deep Learning Projects by Udacity Students

Udacity democratizes education by bringing world-class instruction to students around the globe. Often, we’re humbled to see how students build on that education to create their own projects outside of the classroom.

Here are five amazing deep learning projects by students in the Udacity Self-Driving Car Engineer Nanodegree Program.

HomographyNet: Deep Image Homography Estimation

Mez Gebre

Mez starts off with a plain-English explanation of what isomorphism and homography are. Homography is basically the study of how one object can look different when viewed from different places. Think about how your image of a car changes when you take a step to the left and look at it again.

After the conceptual explanation, Mez dives into the mechanics of how to combine computer vision, image processing, and deep learning to train a VGG-style network to perform homography.

I imagine this could be a useful technique for visual localization, as it helps you stitch together different images into a larger map.

“HomographyNet is a VGG style CNN which produces the homography relating two images. The model doesn’t require a two stage process and all the parameters are trained in an end-to-end fashion!”

ConvNets Series. Image Processing: Tools of the Trade

Kirill Danilyuk

Kirill uses the Traffic Sign Classifier Project from the Nanodegree Program as a jumping off point for discussing approaches to image pre-processing. He covers three approaches: visualization, scikit-learn, and data augmentation. Critical topics for any perception engineer!

“Convnets cannot be fed with “any” data at hand, neither they can be viewed as black boxes which extract useful features “automagically”. Bad to no preprocessing can make even a top-notch convolutional network fail to converge or provide a low score. Thus, image preprocessing and augmentation (if available) is highly recommended for all networks.”
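As a tiny illustration of the augmentation idea (my sketch, not Kirill’s actual pipeline), here is a function that randomly flips an image and jitters its brightness. Note that a horizontal flip is not label-preserving for every traffic sign, which is exactly the kind of judgment call preprocessing demands:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Return a randomly perturbed copy of img: an optional horizontal
    flip plus brightness jitter, clipped back to the valid uint8 range."""
    out = img.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                  # horizontal flip
    shift = int(rng.integers(-30, 31))      # brightness jitter in [-30, 30]
    return np.clip(out.astype(np.int16) + shift, 0, 255).astype(np.uint8)

img = np.full((32, 32, 3), 128, dtype=np.uint8)  # flat gray 32x32 "sign"
aug = augment(img)
print(aug.shape, aug.dtype)
```

Generating several such variants per training image is a cheap way to expand a dataset and make a classifier less sensitive to lighting.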

Launch a GPU-backed Google Compute Engine instance and setup Tensorflow, Keras and Jupyter

Steve Domin

We teach students in the Nanodegree Program how to use Amazon Web Services to launch a virtual server with a GPU, which accelerates training neural networks. There are alternatives, though, and Steve does a great job explaining how you would accomplish the same thing using Google Cloud Platform.

“Good news: if it’s your first time using Google Cloud you are also eligible for $300 in credits! In order to get this credit, click on the big blue button “Sign up for free trial” in the top bar.”

Yolo-like network for vehicle detection using KITTI dataset

Vivek Yadav

Vivek has written terrific posts on a variety of neural network architectures. In this post, which is the first in a series, he prepares YOLO v2 to classify KITTI data. He goes over six pre-processing steps: learning bounding boxes, preprocessing the ground truth bounding boxes, preprocessing the ground truth labels, overfitting an initial network (a Vivek specialty), data augmentation, and transfer learning.

“YOLOv2 has become my go-to algorithm because the authors correctly identified majority of short comings of YOLO model, and made specific changes in their model to address these issues. Futher YOLOv2 borrows several ideas from other network designs that makes it more powerful than other models like Single Shot Detection.”
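A recurring primitive in any bounding-box pipeline like this is intersection-over-union, the overlap score used to match predicted boxes against ground truth. Here is a minimal version (my sketch, not Vivek’s code):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, about 0.143
```

YOLO-style detectors use this score both to assign ground-truth boxes to anchor boxes during training and to suppress duplicate detections at inference time.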

DeepSchool.io

Sachin Abeywardana

Sachin has built an 18-lesson curriculum for deep learning, hosted via GitHub, called DeepSchool.io. The lessons start with the math of deep learning, take students through building feedforward and convolutional networks, and finish with using LSTMs to classify #FakeNews! Yay, 21st century America.

Goals

  1. Make Deep Learning easier (minimal code).
  2. Minimise required mathematics.
  3. Make it practical (runs on laptops).
  4. Open Source Deep Learning Learning.

Ford to Test Self-Driving Cars in 2018

I was super-excited to read that Ford plans to launch self-driving cars in a test market next year, according to the CNBC writeup of Ford’s Q3 earnings call.

As a Ford alumnus, a Ford owner, and a big fan of the company, I have been increasingly distressed that Ford does not have (that I know of) a fleet of self-driving cars out on real roads every day. It sounds like that will change next year, and I’m excited to watch it happen.

“Ford will bring autonomous vehicles to a test market in 2018, said Ford CEO Jim Hackett on Thursday.

Hackett did not specify where the test will take place or provide many more details…”

Automotive Companies and Venture Capitalists

Paul Lienert has an interesting piece in Reuters today about self-driving car startups. The piece touches on a few things: a particular startup called Nullmax, the geography of self-driving car startups, and valuations.

Two things caught my eye in the piece, though. One is the outsized role of Israeli startups in the autonomous vehicle space. Relatively few Israeli startups are working on full, end-to-end self-driving cars, but Reuters counts more Israeli startups than American ones in both the perception and automotive-connectivity spaces.

It’s notoriously difficult to count startups, and I’m not sure I quite believe that Israel has more startups in any sector than the US does, but it’s nonetheless worth considering Israel one of the world’s centers for autonomous technology.

The other part of the article that caught my eye is the dichotomy between how venture capitalists view autonomous startups and how traditional automotive companies view the same startups:

“While big automotive and technology companies are pouring billions into the autonomous vehicle space, Silicon Valley investors so far have been fairly restrained in increasing their bets.”

On the one hand:

With the notable exceptions of Andreessen Horowitz and New Enterprise Associates, few of the big Valley venture capital firms are heavily invested in the sector. Overall, only seven of the top 30 self-driving startups have received later-stage funding…

On the other hand:

All told, U.S. automotive and technology firms likely have invested some $40 billion to $50 billion in self-driving technology in recent years, mainly through acquisitions and partnerships…

Among the top corporate investors in the sector are Samsung Group [SAGR.UL], Intel Corp (INTC.O), Qualcomm Inc (QCOM.O), Delphi and Robert Bosch GmbH [ROBG.UL].

Read the whole thing.

Delphi Buys nuTonomy

The big news in the automotive world yesterday is that automotive supplier Delphi purchased self-driving startup nuTonomy for $450 million.

A few thoughts:

  1. At a lunch a while ago, I sat next to nuTonomy CEO and former MIT researcher Karl Iagnemma. He seemed nice, humble, and super-smart. He’s also probably the most successful person to send me a LinkedIn connection request, so obviously I’m a fan.
  2. Among startups (e.g. not Uber), nuTonomy seems to have a big lead in terms of actual autonomous vehicles being tested out on the road.
  3. Delphi and nuTonomy were both independently testing self-driving cars in Singapore for the last year, so presumably they got to know each other pretty well.
  4. Delphi is one of the world’s premier automotive suppliers, but they’ve been moving into the self-driving car industry in a way that sets them up as competitive with automotive manufacturers. This purchase further complicates that industry dynamic.
  5. Following up on #4, Delphi’s multi-domain controller was at one point positioned as the core computation platform for autonomous vehicles. It might become less attractive to automotive manufacturers, as they won’t want to purchase key components from a potential competitor.
  6. On the other hand, Delphi’s expertise in autonomous driving sets it apart from many other suppliers, all the more so due to this acquisition. If Delphi components become much more effective than the alternatives, the competitive vendor-supplier dynamics might matter less.
  7. nuTonomy’s $450 million price tag is fantastic, but not quite as mind-boggling as the $1 billion GM paid for Cruise, or the $680 million Uber paid for Otto. This is especially true given how much further along nuTonomy appears to be than Cruise or Otto were at acquisition. Maybe valuations in the self-driving car market are cooling just a tiny bit.
  8. Somewhat surprisingly, Axios reports that nuTonomy was on the hunt for more funding, but couldn’t raise it at a valuation they liked. So they sold the company instead. I haven’t seen this reported elsewhere, but if it’s true, that’s another indication that self-driving car valuations may be coming down just a bit.