Mercedes-Benz and Udacity

Meet some of the people behind the self-driving car revolution!

Mahni is a little spooked by my sunburn.

The Autonomous Vehicle field is full of amazing personalities — people who possess remarkable technical skills and rarefied knowledge, but who are also supremely creative, and incredibly passionate.

The team from Mercedes-Benz is a perfect example.

They’re one of our core partners for our Self-Driving Car Engineer Nanodegree program, and they’ve done a remarkable job of not just teaching our students technical skills, but also giving them a real sense of purpose and vision. Perhaps most importantly, they have helped make complex autonomous vehicle concepts accessible to every single student we teach.

I feel fortunate to work with such great people, and I’d like to introduce you to some of them right now! Specifically, Axel, Michael, Dominik, Andrei, Maximillian, Tiffany, Tobi, Mahni, Beni, and Emmanuel!

First, meet Axel. In this video, he shares the history of Mercedes-Benz and autonomous vehicle research, and also describes the type of engineers they are hiring today:

In this next video, Dominik, Michael, and Andrei outline the tools the Mercedes-Benz sensor fusion team uses to combine sensor data for tracking objects in the environment:

Next, Maximillian and Tiffany talk about the work they do on the localization team to help the vehicle determine where it is in the world:

Finally, in this video, Tobi, Mahni, Beni, and Emmanuel outline the three phases of path planning. First, the prediction team estimates what the environment will look like in the future. Then, the behavior planning team decides which maneuver to take next, based on those predictions. Lastly, the trajectory generation team builds a safe and comfortable trajectory to execute that maneuver:
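The three-phase split described above can be sketched, very loosely, in code. Everything below is a hypothetical illustration (the classes, numbers, and lane logic are mine, not Mercedes-Benz’s), just to show how prediction feeds behavior planning, which feeds trajectory generation:

```python
from dataclasses import dataclass

# Hypothetical sketch of the three path planning phases described above.

@dataclass
class Vehicle:
    lane: int
    s: float      # longitudinal position along the road (m)
    speed: float  # m/s

def predict(others, horizon=1.0):
    """Prediction: estimate where other vehicles will be `horizon` seconds out."""
    return [Vehicle(v.lane, v.s + v.speed * horizon, v.speed) for v in others]

def plan_behavior(ego, predictions):
    """Behavior planning: pick the lane with the fewest predicted vehicles
    close ahead of the ego car (a stand-in for a real maneuver decision)."""
    def blockers(lane):
        return sum(1 for v in predictions
                   if v.lane == lane and 0 < v.s - ego.s < 40)
    candidates = [ego.lane] + [l for l in (ego.lane - 1, ego.lane + 1) if 0 <= l <= 2]
    return min(candidates, key=blockers)

def generate_trajectory(ego, target_lane, horizon=1.0, steps=5):
    """Trajectory generation: a smooth sequence of (lane, s) waypoints that
    executes the chosen maneuver."""
    return [(ego.lane + (target_lane - ego.lane) * (i / steps),
             ego.s + ego.speed * horizon * (i / steps))
            for i in range(1, steps + 1)]

ego = Vehicle(lane=1, s=0.0, speed=20.0)
others = [Vehicle(lane=1, s=15.0, speed=15.0)]  # slow car ahead in our lane

preds = predict(others)
target = plan_behavior(ego, preds)
traj = generate_trajectory(ego, target)
print(target, traj[-1])
```

Real planners are vastly more sophisticated, of course, but the data flow (predictions in, maneuver out, trajectory last) is the point.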

Amazing people, right? Ready to work with them? Then apply at Mercedes-Benz, because they’re hiring!

Not quite ready yet? Then apply now for our Self-Driving Car Engineer Nanodegree program! You’ll be joining the next generation of autonomous vehicle experts, and that’s a pretty amazing thing.

The Race to Build Tesla Autopilot

The Wall Street Journal, a publication I read daily and generally quite like, has a recent feature on the drama behind Tesla Autopilot that seems to me a bit unfair.

Indeed, the piece actually quotes Elon Musk saying the same thing:

“In an email, Mr. Musk said he was unhappy with previous Journal articles on the company. ‘While it is possible that this article could be an exception, that is extremely unlikely, which is why I declined to comment,’ he wrote.”

The article dives deep into the internal strife at Tesla over how far and how fast to push Autopilot, Tesla’s suite of advanced driver assistance technologies.

The tone of the piece is that Musk pushed his engineers to release Autopilot beyond its safe capabilities, and as a result many of them objected and ultimately quit.

“Behind the scenes, the Autopilot team has clashed over deadlines and design and marketing decisions, according to more than a dozen people who worked on the project and documents reviewed by The Wall Street Journal. In recent months, the team has lost at least 10 engineers and four top managers — including Mr. Anderson’s [DS: Sterling Anderson was the Director of Autopilot] successor, who lasted less than six months before leaving in June.”

Despite all of the buildup, however, The Journal ultimately fails to make the case that Autopilot was released too aggressively or that it is unsafe.

Both named and unnamed sources, dating from as early as 2015, are quoted saying that Autopilot isn’t ready for hands-free mode and that Musk pushed a product onto the public that wasn’t safe or ready.

And, to be sure, I favor a management style in which the people doing the work get to make the decisions, instead of Musk’s style, which seems to be to dictate decisions for employees to execute.

But Elon Musk has done pretty well for himself and for Tesla, and The Journal isn’t able to dig up any scandals since 2015, except for the one well-known Autopilot crash in Florida.

Tesla Autopilot may be inherently unsafe, and maybe Musk’s push to release it was reckless. Just because nothing’s gone terribly wrong yet doesn’t mean Musk made the right decision. Maybe Tesla’s just been lucky.

But if a newspaper is going to write a hit piece on a technology product, implying that it’s unsafe, it needs to bring more evidence to the table than uncomfortable quotes from engineers who quit.

Self-Driving Path Planning, Brought to You by Udacity Students

Term 3 of the Udacity Self-Driving Car Engineer Nanodegree Program starts with path planning. This is one of the deepest and hardest problems for a self-driving car.

Here are three Udacity student approaches that show the complexity and beauty of path planning.

Reflections on Designing a Virtual Highway Path Planner (Part 1/3)

Mithi

Mithi published a three-part series about what she calls “the most difficult project yet” of the Nanodegree Program. In Part 1, she outlines the goals and constraints of the project, and decides on how to approach the solution. Part 2 covers the architecture of the solution, including the classes Mithi developed and the math for trajectory generation. Part 3 covers implementation, behavior planning, cost functions, and some extra considerations that could be added to improve the planner. This is a great series to review if you’re just starting the project.

“I decided that I should start with a simple model with many simple assumptions and work from there. If the assumption does not work then I will then make my model more complex. I should keep it simple (stupid!).

A programmer should not add functionality until deemed necessary. Always implement things when you actually need them, never when you just foresee that you need them. A famous programmer said that somewhere.

My design principle is, make everything simple if you can get away with it.”

Path Planning in Highways for an Autonomous Vehicle

Mohan Karthik

Mohan takes a different approach to path planning, combining a cost function with a feasibility checklist. He builds a cost function, ranks each lane by its score, and then decides whether to move to a lane based on the feasibility checklist.

“This comes down to two things (and I’m going to be specific to highway scenario).

Estimating a score for each lane, to determine the best lane for us to be in (efficiency)

Evaluating the feasibility of moving to that lane in the immediate future (safety & comfort)”
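Mohan’s two-step idea (score each lane for efficiency, then gate any lane change behind a safety check) might look something like this. The weights, thresholds, and function names are made-up illustrations, not his actual code:

```python
# Illustrative sketch of a two-step lane selection: score each lane for
# efficiency, then only change lanes if the move passes a feasibility check.

def lane_cost(lane_speed, traffic_ahead, target_speed=22.0):
    """Lower cost = better lane: penalize slow traffic and crowding."""
    return 1.0 * (target_speed - lane_speed) + 0.5 * traffic_ahead

def is_feasible(gap_ahead, gap_behind, min_gap=10.0):
    """Safety & comfort check: only merge if both gaps are large enough."""
    return gap_ahead >= min_gap and gap_behind >= min_gap

def choose_lane(current_lane, lanes):
    """lanes: dict lane -> (speed, traffic_ahead, gap_ahead, gap_behind)."""
    costs = {l: lane_cost(v[0], v[1]) for l, v in lanes.items()}
    best = min(costs, key=costs.get)
    if best == current_lane:
        return current_lane
    _, _, gap_a, gap_b = lanes[best]
    return best if is_feasible(gap_a, gap_b) else current_lane

lanes = {
    0: (15.0, 3, 50.0, 50.0),   # slow and crowded
    1: (20.0, 1, 8.0, 20.0),    # faster, but the gap ahead is too small
    2: (22.0, 0, 40.0, 40.0),   # fast and open
}
print(choose_lane(current_lane=0, lanes=lanes))
```

Separating the “which lane is best” question from the “can I safely get there” question is what keeps the planner from chasing an efficient lane through an unsafe merge.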

Self-Driving Car Engineer Diary — 11

Andrew Wilkie

The 11th post in Andrew’s series on the Nanodegree Program covers Term 3 broadly and path planning specifically. In particular, Andrew lays out where this path planning project falls in the taxonomy of autonomous driving, and the high-level inputs and outputs of a path planner. This is a great post to review if you’re interested in what a path planner does.

“I found the path planning project challenging, in large part due to fact that we are implementing SAE Level 4 functionality in C++ and the complexity that comes with the interactions required between the various modules.”

These examples make clear the vision, skill, and tenacity our students are applying to even the most difficult challenges, and it’s a real pleasure to share their incredible work. It won’t be long before these talented individuals graduate the program, and begin making significant, real-world contributions to the future of self-driving cars. I know I speak for everyone at Udacity when I say that I’m very excited for the future they’re going to help build!

Intel Studying Human–Self-Driving Car Interaction

Intel dove into self-driving cars in a big way with their Mobileye acquisition earlier this year. But these big acquisitions take a while to close and even longer to integrate, so in the meantime it’s great to see that Intel is moving forward with autonomous vehicle research at its Chandler, Arizona, test facility.

In particular, Intel reports on a qualitative human-machine interaction study it did on seven “tension points”:

  • Human vs. machine judgment
  • Personalized space vs. lack of assistance
  • Awareness vs. too much information
  • Giving up control of the vehicle vs. gaining new control of the vehicle
  • How it works vs. proof it works
  • Tell me vs. listen to me

Here’s the video:

Headlines from Apple, Uber, and Waymo

Two big exclusive scoops and a smaller headline in the autonomous vehicle world today.

Apple Scales Back Its Ambitions for a Self-Driving Car

The New York Times got five sources at the notoriously secretive Apple self-driving car effort (Project Titan) to open up about the successes and failures of the project. It sounds like Apple has gone through similar debates as most other self-driving car efforts (build Level 3 features or jump straight to Level 4? have a steering wheel or not? focus on retrofitting existing vehicles or build a new vehicle from the ground up?).

Things seemed to go sideways for a while, but apparently the project is back on a growth trajectory. It will be exciting to see what Apple eventually launches.

“The car project ran into trouble, said the five people familiar with it, dogged by its size and by the lack of a clearly defined vision of what Apple wanted in a vehicle. Team members complained of shifting priorities and arbitrary or unrealistic deadlines.”

Uber’s self-driving cars hit Toronto streets — in manual mode

Uber has self-driving cars on the streets of Toronto now, although they’re being driven by humans in “mapping mode” for the moment. If Uber does pull the trigger on self-driving mode — which it expects to do later this year — that will give it test vehicles in Pittsburgh, Phoenix, San Francisco, and Toronto, which might be a wider geographic spread than even Waymo.

“The cars aren’t available for rides: they will be conducting mapping tasks. Uber says it hopes to test the cars in autonomous mode by the end of 2017.”

Inside Waymo’s Secret World for Training Self-Driving Cars

The Atlantic scored a big scoop that might justly be titled, “Inside Waymo’s Secret Worlds” [plural].

The first world is Waymo’s physical testing facility at the old Castle Air Force Base, in California’s Central Valley. The article talks about a city with streets but no buildings, designed specifically for testing self-driving cars. When Waymo runs into a particularly sticky driving situation, they just pave a version of the streets on their test facility and run their cars through that scenario over and over and over again.

“We pull up to a large, two-lane roundabout. In the center, there is a circle of white fencing. “This roundabout was specifically installed after we experienced a multilane roundabout in Austin, Texas,” Villegas says. “We initially had a single-lane roundabout and were like, ‘Oh, we’ve got it. We’ve got it covered.’ And then we encountered a multi-lane and were like, ‘Horse of a different color! Thanks, Texas.’ So, we installed this bad boy.””

The second world is Waymo’s internal simulation engine, named Carcraft. What started as a playback tool for sensor data has morphed into a simulation engine that allows Waymo to “drive” billions of miles per year.

“Once they have the basic structure of a scenario, they can test all the important variations it contains. So, imagine, for a four-way stop, you might want to test the arrival times of the various cars and pedestrians and bicyclists, how long they stop for, how fast they are moving, and whatever else. They simply put in reasonable ranges for those values and then the software creates and runs all the combinations of those scenarios.”
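The combinatorial idea in that quote is essentially a parameter sweep: define reasonable ranges for each scenario variable, then enumerate every combination. A tiny sketch (the parameter names are illustrative, not Waymo’s):

```python
from itertools import product

# Sketch of scenario fuzzing: given ranges for each parameter of a four-way
# stop, enumerate every combination as a distinct simulated scenario.

arrival_offsets_s = [-1.0, 0.0, 1.0]   # when the other car reaches the stop
pedestrian_speeds = [1.0, 1.5]         # m/s
cyclist_present = [False, True]

scenarios = [
    {"arrival_offset": a, "ped_speed": p, "cyclist": c}
    for a, p, c in product(arrival_offsets_s, pedestrian_speeds, cyclist_present)
]

print(len(scenarios))  # 3 * 2 * 2 = 12 variants of one four-way-stop scenario
```

Multiply a handful of ranges across thousands of base scenarios and you can see how the simulated mileage climbs into the billions.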

The Udacity Self-Driving Car Team

Over the entire nine-month course of the Udacity Self-Driving Car Engineer Nanodegree, only a fraction of the people behind the program ever appear on camera.

There’s me, of course, and my colleague Ryan Keenan, who taught a number of lessons. A few colleagues, like Sebastian, Andrew Paster, Andy Brown, and Aaron Brown (no relation), appear in short cameos.

But there is a small army of colleagues behind the scenes who make everything work. The photo collage above doesn’t even capture everybody.

Here are a few photos I captured recently of the people who make the program happen.

Ryan Keenan (content developer), Justine Lai (producer), and Sebastian Thrun (president) at our final shoot.
Stephen Welch (services lead, then content developer), Brok Bucholtz (content developer), Aaron Brown (content developer), Justine, and me on a foggy day on our retreat at Point Reyes.
Geoff Norman, Justine Lai, Ernesto Molero, Larry Madrigal, and Silver, all working together to produce our final shoot.
Trophies for Justine, me, Caleb Kirksey (self-driving car engineer), and Megan Powell (support representative).
Stephen, Caleb, Aaron Brown, Anthony Navarro (product lead), and Brok at a team dinner.
Jessica Lulovics (program manager), me, Lisbeth Ortega (community manager), Megan, and Justine at a team dinner.
Stephen, Jessica, Caleb, me, Anthony, and Aaron celebrating the launch of our final module, with a cake that Jessica baked.

GM and Lyft and Partnerships

GM and Lyft seem to be heading toward a reckoning, similar to what Google and Uber are experiencing. Minus the allegations of intellectual property theft, at least so far.

Reuters has an article (written by Paul Lienert, a reader of this blog) highlighting the tension between GM’s growing presence in the ridesharing space and GM’s partial ownership of, and partnerships with, Lyft.

On the one hand, GM has invested heavily in Lyft, and holds a 9% ownership stake. GM also benefits from Lyft Express Drive, a Lyft program that leases GM vehicles to Lyft drivers.

On the other hand, GM is launching and expanding a number of programs that are competitive to Lyft.

“Maven can provide GM vehicles directly to ride-sharing drivers who previously leased them through Lyft Express Drive and Uber Vehicle Solutions.”

Similarly, GM’s Cruise subsidiary is beta testing a service called Cruise Anywhere that seems poised to use self-driving cars to compete directly with Lyft’s core on-demand transportation service.

Partnerships are tricky, especially because companies’ interests and plans can diverge over time. Scott McNealy famously tweeted:

Ronald Coase won a Nobel Prize in part for theorizing about how ownership affects outcomes. Right now we’re seeing lots of self-driving car companies form partnerships, but I suspect in the future we’ll see many more outright acquisitions. Owning a company, instead of partnering with it, can help align everyone’s interests.

Clemson University International Center for Automotive Research

I am, of course, very proud of the Self-Driving Car Engineer Nanodegree Program we have built at Udacity, which teaches software engineers to become autonomous vehicle engineers. You should enroll!

But there are other educational institutions out there, as well, and one I keep bumping into is the Clemson University International Center for Automotive Research.

CU-ICAR, as they style themselves, is a graduate school about 40 minutes up the road from the main Clemson campus, and it offers master’s and doctoral degrees in automotive engineering across a number of different specialties.

The 250-acre campus in Greenville, South Carolina, is located near BMW’s US manufacturing center in Spartanburg, SC, and is a great example of the type of industry-education partnerships we engage in at Udacity.

I know very little about the Clemson program directly, and I’ve never been to Greenville, but I keep running into their graduates on autonomous vehicle teams at some of our largest hiring partners, so I thought I’d mention them.

I’ve also run into a few Clemson students who are taking the Self-Driving Car Nanodegree Program, so of course that makes me happy 🙂

Adversarial Traffic Signs

A couple of days ago I wrote about embedding barcodes into traffic signs to help self-driving cars. Several commenters pointed out a recent academic paper in which researchers (Evtimov et al.) confused a computer vision system into thinking that a stop sign was a 45 mph sign, with just a few pieces of tape.

This appears to be an extension of a property of neural networks that was already known, which is that they can be fooled in surprising ways. This is called an “adversarial” attack.

Here is an example Justin Johnson gave in the fantastic Stanford CS231n class on convolutional neural networks:

Oops.

So it’s no shocker that the computer vision systems for cars, which rely largely on CNNs, can be fooled.

But notice that it’s not obvious how to apply Justin Johnson’s examples above to an actual printed photo of a goldfish in the real world. The examples above only really work if you have a digital photo of a goldfish.
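The digital version of the attack is surprisingly simple. Here’s a minimal gradient-sign sketch (the idea behind many adversarial examples) against a toy linear classifier; the weights and the “image” are random made-up data, not the actual systems from the course or the paper:

```python
import numpy as np

# Minimal sketch of a gradient-sign (FGSM-style) adversarial perturbation
# against a toy linear classifier. For a linear model score = w . x, the
# gradient of the score with respect to the input is just w, so nudging
# every "pixel" by eps in the direction -sign(w) lowers the score as much
# as possible for a given per-pixel budget.

rng = np.random.default_rng(0)
w = rng.normal(size=100)     # classifier weights
x = rng.normal(size=100)     # our "image"

def score(x):
    return float(w @ x)      # > 0 means one class, < 0 the other

eps = 0.2                    # per-pixel perturbation budget
x_adv = x - eps * np.sign(w)

print(score(x), score(x_adv))  # tiny pixel changes, big score change
```

Each pixel moves by at most 0.2, far too little for a human to notice, yet the score shifts by eps times the sum of the weight magnitudes. Deep networks behave similarly, which is why imperceptible noise can flip a goldfish into a daisy.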

The breakthrough of the Evtimov et al. paper is that they developed an attack algorithm, which they call Robust Physical Perturbations, that allows them to apply this attack to signs in the real world.

So now we are heading down the road of fooling cars into blowing through stop signs. Is the end nigh?

I’m skeptical.

Hackers hardly need to wait until self-driving cars are on the road before they mess with stop signs. It’s easy enough to cause real carnage today just by removing a stop sign. Indeed, this happens already and the people who do it get convicted of manslaughter. (Although note that particular case was overturned on appeal because it wasn’t clear whether the convicts removed the precise stop sign in question, or a different stop sign.)

I don’t see too many hackers messing with street signs, though, presumably because the result is both fleeting and unpredictable, and the cost (jail time) is high.

In fact, self-driving cars seem even less likely than human drivers to be fooled by tampered stop signs. Self-driving cars are likely to have maps and sensors that could override whatever the car’s camera sees.

It’s possible this paper leads to further breakthroughs in adversarial attacks that could cause more problems, but I don’t think this advance by itself is too worrisome.

The Story of Velodyne

Of all the funny stories in the self-driving car world, surely one of the most improbable is the transformation of Velodyne from a subwoofer manufacturer into the world’s premier lidar supplier.

Lidar, an array of lasers, is the key to tracking and understanding the environment around a vehicle, at least until computers get good enough to do this with a camera.

The San Francisco Chronicle has a short writeup of how Dave Hall transformed his audio company into an autonomous sensor company, and I’d love to read the book-length version. It involves the DARPA Grand Challenge and a tinkerer on “the lunatic fringe”. The story is an old-school inventor’s dream.

For now, though, I’m just grateful for Udacity’s two VLP-16 units and our precious HDL-32E.

Also? Velodyne is a Udacity hiring partner.