How to Land an Autonomous Vehicle Job: Networking

I wrote earlier that my formula for landing a job in autonomous vehicles had three parts:

  1. Coursework
  2. Projects
  3. Networking

The first two parts consist of developing skills. The third part, networking, requires selling those skills to the world.

Like all good salesmen, I used a CRM tool, although in this case it was just a spreadsheet. And I filled in that spreadsheet with all the different companies I was excited to talk with.

I didn’t have this vocabulary at the time, but essentially the potential employers in this space break down into segments.

Transportation-as-a-Service

  • Uber
  • Lyft

OEMs

  • Ford
  • Google
  • Tesla
  • Toyota
  • Mercedes
  • etc.

Tier 1 Suppliers

  • Bosch
  • Delphi
  • Continental
  • etc.

Tier 2 Suppliers

  • NVIDIA
  • Mobileye
  • Velodyne
  • etc.

With limited exceptions, these companies base their autonomous vehicle teams in three places: Michigan, Silicon Valley, and Germany. So it’s worth considering your willingness to live in those places.

The next step is to scan the careers pages of these companies and apply for the positions that interest you. These cold applications rarely lead to jobs on their own, but once you get to the right person within the company, they will usually ask whether you have applied for jobs via the website. It’s helpful to be able to answer that question yes right off the bat.

Finding the right person to contact is often laborious, but this is the key step. I leveraged my own network, which was great, but my most promising leads, including at Ford, actually came from cold emails. I found recruiters or hiring managers on LinkedIn and then sent messages. I would follow up on these messages two or three times each before giving up.

The final step was building my CV to focus on my AV work. Since my previous professional experience was less relevant, I pushed that to a second page and filled the first page of my CV with all of the courses and projects and keywords that I wanted autonomous vehicle people to see.

After that, if all goes well, will come interviews, both formal and informal. Maybe some programming challenges.

Interviews are more about luck than about predicting employee success (that’s literally true, according to personnel-selection research), so it helps to line up a lot of interviews; with enough of them, you’ll hopefully get lucky at least once.

Fortunately, I did 🙂

GM’s New Test Track

GM just announced a new technology center in Warren, Michigan, that will focus on self-driving cars. This follows on the heels of GM’s acquisition of Cruise Automation and of GM’s commitment to hire hundreds of self-driving car engineers in Canada.

To me, one of the most exciting elements of this announcement is GM’s plan to build a self-driving vehicle-focused test track.

Test tracks are a major obstacle to autonomous vehicle development in Silicon Valley, because land there is so expensive, so regulated, and so hard to assemble into a single large parcel.

Google’s test track is a decommissioned Air Force base in the Central Valley, hours away from the Mountain View campus.

One of the big advantages of vehicle development in the midwest is the relative bounty of cheap, greenfield land.

How to Land an Autonomous Vehicle Job: Projects

A few days ago I outlined the three components of my effort to land a job working on autonomous vehicles:

  1. Coursework
  2. Projects
  3. Networking

I’ve since written about coursework and many of the online courses that are available.

The projects that I undertook were mostly distinct from the coursework. The three big projects I worked on were:

  1. Lane Detection
  2. Self-Driving Sumobot
  3. This Blog

Lane Detection

The lane detection software was the most immediately gratifying of those projects.

There are several free collections of road images online, or you could even create your own with a mobile phone mounted in your car.

Then, using OpenCV and a sequence of Canny edge transforms, Hough transforms, and perspective warps, I was able to identify lane lines on the road. If I were to do the project now, using what I’ve learned since, I’d probably also look at connected-components algorithms and gradient contrasts.

I even got to use the Twiddle algorithm I learned from Sebastian Thrun’s AI for Robotics course on Udacity.
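Twiddle itself is short enough to sketch in full: it’s a simple coordinate-descent search that perturbs one parameter at a time and keeps whatever change reduces the error. The `run` callback here is a stand-in for whatever cost function you’re tuning against.

```python
def twiddle(run, n_params, tol=1e-3):
    """Tune n_params parameters to minimize the error returned by run(p)."""
    p = [0.0] * n_params           # parameters being tuned
    dp = [1.0] * n_params          # per-parameter step sizes
    best_err = run(p)
    while sum(dp) > tol:
        for i in range(n_params):
            p[i] += dp[i]
            err = run(p)
            if err < best_err:          # improvement: keep it, grow the step
                best_err = err
                dp[i] *= 1.1
            else:
                p[i] -= 2 * dp[i]       # try the other direction
                err = run(p)
                if err < best_err:
                    best_err = err
                    dp[i] *= 1.1
                else:                   # neither direction helped: shrink step
                    p[i] += dp[i]
                    dp[i] *= 0.9
    return p, best_err
```

In the course it’s used to tune PID controller gains, but it works on any low-dimensional, cheap-to-evaluate cost function.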

When it’s all done, you can run the images together like a video.

Sumobot

This project seemed the most exciting when I started, but it turned out to be a little bit of a bust.

I bought a Zumo robot from Pololu and began trying to program it to drive itself.

I got some basic driving maneuvers working, but I started this project too early in my robotics education and didn’t really know how to make progress. Eventually I kind of lost focus and never got back to it.

But with the background that I eventually picked up through further courses, I think I could go back and have a lot of fun with this project.

Blogging

I started this blog as a below-the-radar series of posts, with the intention of just getting myself up to speed on autonomous vehicles.

I showed it to my little brother at one point, and he suggested publishing the posts more widely.

Friends had told me about how great Medium is for blogging, and I’ve been really happy that I moved my writing here.

Of these three projects, blogging is the only one I have kept up since starting my job on Ford’s AV team. It’s fun, it keeps me current on industry news, and it’s nice to get the constant feedback that people are reading and following what I write.

So thank you for that!

The Rolls-Royce Vision 100

I’ve always found Rolls-Royce to be an intriguing car brand, simply because so few people purchase their vehicles.

I ran the math once and figured that Rolls-Royce makes so much money on each vehicle that they can offset the incredibly low volumes and still design amazing cars.

So I was fascinated to read The Verge cover the unveiling of the Rolls-Royce Vision 100 concept car.

They call it “a cruise ship on wheels”.

The RR answer is simply staggering in the extremism of its opulence and swagger. I witnessed it rolling in to the stage here in London this morning, and it felt like I was attending the inauguration of a giant cruise ship. Measuring nearly 20 feet in length (5.9m) and five feet tall, the Vision 100 dwarfs its occupants and nearby attendants in a way that even the grandest present-day Rolls-Royces can’t quite match.

It’s a trip.

How to Land An Autonomous Vehicle Job: Coursework

Recently I outlined a short series of posts I’ll be writing about how I landed a job in autonomous vehicles.

The first part of that equation was coursework.

There are so many free online courses to take!

My background is that I have a pretty solid foundation in software engineering, including an undergraduate degree in computer science. But most recently my programming has been on the web, not so much in the machine learning and embedded systems areas that dominate vehicle software.

Here are the courses I took:

Artificial Intelligence for Robotics (Udacity): This is a terrific and super-fun introduction to self-driving cars by Sebastian Thrun. Thrun is the founder of Udacity, the founder of Google’s self-driving car project, and a former Stanford professor. Taking the class is like being in the presence of greatness.

Machine Learning (Coursera): This class is really broad, covering supervised and unsupervised learning algorithms, as well as optimization and tuning. The teacher is Andrew Ng, who is like Sebastian Thrun’s mirror image: Stanford professor, then co-founder of Coursera, now chief scientist at Baidu.

Control of Mobile Robots (Coursera): This course is taught through Coursera’s partnership with Georgia Tech, and covers the basics of control theory. It was especially helpful for me, as a computer science undergrad with minimal background in mechanical engineering.

Deep Learning (Udacity): This is a relatively short overview of the theory behind deep neural networks, with some practical programming exercises.

Deep Learning (NVIDIA): In practice, it’s possible to get a lot of value out of deep neural networks with only a thin understanding of how DNNs actually work. That’s because practitioners can get a lot of mileage out of deep learning frameworks like Caffe, Theano, and Torch. This course provides an overview of each framework, along with programming exercises.

Intro to Parallel Programming with CUDA (Udacity): Deep learning plays a prominent role in autonomous software, and deep learning is itself enabled by the massive parallelization that GPUs offer. CUDA is the parallel programming framework created by NVIDIA, and this course provides great background into how parallel programming works.

Underactuated Robotics (edX): This was by far the most math-heavy of the courses I took, owing to its target audience — MIT upperclassmen. I confess that due to some family obligations I only finished about 2/3 of the course. But the course provides terrific exercises in how to model robots in the physical world. It also forced me to brush up on my advanced math.

All of these are fairly advanced courses. Some of the programming exercises are in C++, some in Python, many in Matlab.

For somebody with minimal software engineering background, I might recommend starting with some more introductory computer science and linear algebra courses.

But for somebody with my background (a strong software engineer with no real robotics experience), these classes were terrific.

How to Land an Autonomous Vehicle Job

About eight months ago, I decided to wind down my long-running recruiting assessment business, Candidate Metrics, and move on to a new adventure.

I knew I wanted to get a big win for my career and work in an area that was really exciting. Self-driving cars were a natural fit.

Unfortunately, the web developer + recruiting software salesman + entrepreneur role I had been inhabiting for five years was only marginally relevant to the world of autonomous vehicles.

So I went to work building up the skills and CV to transition myself into autonomous vehicles. This transition had three big parts:

  1. Coursework
  2. Projects
  3. Networking

From start to finish, the whole cycle took almost six months, although I was winding down my old business at one end and finalizing my job offer at the other end, so there were really only three months where this was pretty much my full-time job.

Since there might be other people out there excited about autonomous vehicles but without a master’s degree in robotics, or years of embedded software experience, I’ll spend the next three days diving into each of the line items above.

Also, news seems to be slow in the AV world this week and I need something to write about 😉

Hopefully this will help somebody, though!

Investing in Self-Driving Cars

Rob Toews has a post up on TechCrunch outlining investment opportunities in the autonomous vehicle space.

It serves as a good overview of the OEMs and suppliers involved in the race to launch self-driving cars.

Toews covers several different sensor manufacturers, including those involved in Lidar, cameras, and computer chips. He also reviews software vendors in areas like mapping, machine learning, and security.

There are lots of nits to pick about parts of the ecosystem that he doesn’t cover — Tier 1 suppliers come to mind — but that might have just been due to editorial space constraints. And overall it’s a good overview of the industry.

I’ve been consistently impressed by the ability of the financial press to cover the autonomous vehicle space, and this is another example of their success.

Read the whole thing.

Tesla’s Risk Equation

One of the big dichotomies in the autonomous driving world is between companies targeting Level 3 autonomy and those jumping right to Level 4.

Basically, this is the distinction between vehicles in which the driver has to be ready to take control at any moment (Level 3) and vehicles in which the driver can safely tune out (Level 4).

This situation is made somewhat more confusing than necessary by the fact that the US National Highway Traffic Safety Administration has put out a 5-level autonomy classification chart, while the Society of Automotive Engineers has put out a 6-level chart. In both cases Level 3 has a similar, but slightly distinct, definition.
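For reference, here’s a rough side-by-side of the two charts as Python dictionaries. The labels are my paraphrases; the NHTSA 2013 policy statement and SAE J3016 have the official wording.

```python
# NHTSA's 2013 chart: five levels, 0 through 4
NHTSA_LEVELS = {
    0: "No automation",
    1: "Function-specific automation",
    2: "Combined-function automation",
    3: "Limited self-driving automation",  # driver must be ready to take over
    4: "Full self-driving automation",
}

# SAE J3016: six levels, 0 through 5
SAE_LEVELS = {
    0: "No automation",
    1: "Driver assistance",
    2: "Partial automation",
    3: "Conditional automation",           # driver must be ready to take over
    4: "High automation",
    5: "Full automation",
}
```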

That is the context for Volvo’s ongoing criticism of Tesla’s autopilot strategy, most recently enunciated by Volvo R&D Chief Peter Mertens in The Drive.

Every time I drive (Autopilot), I’m convinced it’s trying to kill me…Anyone who moves too early is risking the entire autonomous industry.

That last part is an interesting externalities problem. For every autonomous mile driven, in any car, there is a small but non-zero chance of a fatal accident.
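The argument is easy to make concrete with a back-of-the-envelope calculation. The per-mile rate below is roughly the US human-driven average (about one fatality per 100 million vehicle miles); it’s an illustration of how risk accumulates with fleet mileage, not a claim about any manufacturer’s actual rate.

```python
human_rate = 1 / 100_000_000  # ~1 US traffic fatality per 100M vehicle miles

def p_at_least_one_fatality(per_mile_rate, miles):
    """Probability of at least one fatal accident over `miles`,
    assuming independent per-mile risk."""
    return 1 - (1 - per_mile_rate) ** miles

for miles in (1e6, 1e8, 1e9):
    print(f"{miles:>13,.0f} miles -> {p_at_least_one_fatality(human_rate, miles):.1%}")
```

Even at a human-level accident rate, a fleet accumulating hundreds of millions of autonomous miles is all but certain to see a fatal crash eventually; the question is who bears the fallout.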

There is a risk that a fatal accident, particularly a rather gruesome one, might prompt regulators to clamp down on autonomous vehicle technology and research, across all manufacturers.

In that case, by launching its autonomous technology so aggressively, Tesla is taking a risk for which it reaps the full reward but only partially bears the cost.

I’m not sure that’s an ironclad argument — after all, at some point, some automaker will have to release autonomous technology to the public. So why not Tesla, and why not now?

But a lot of people, most vocally at Volvo, are worried Tesla is doing this too early and too aggressively, and Volvo isn’t happy about the risk Tesla is foisting on the rest of the industry.

Self-Driving Lullabies

My wife just gave birth to our first child two weeks ago, and he’s a total joy.

He would be more of a joy, though, if he could sleep through the night. Right now he’s on a nocturnal schedule, sleeping through chunks of the day and screaming all night.

One way to calm him down is to strap him into the car seat and drive him around town.

So last night, between the hours of 4am and 5am, I drove about 15 miles around San Mateo County. It worked.

When I mentioned that to people in the office this morning, one of them said, “Wouldn’t it be great to have a self-driving car for that?”

It would, and it will 🙂