Hello, Voyage!

Today is my first day as a motion control engineer at Voyage. I’m so excited!

Voyage came into being years ago as part of Udacity. Oliver Cameron, Voyage’s co-founder and CEO, was my first manager at Udacity. The rest of Voyage’s founding team were my colleagues when I joined Udacity in 2016. I’m thrilled to join them again to work on self-driving cars.

Over the past four years, I have been so impressed by Voyage’s progress. They are now on their third-generation vehicle, and they are already testing a fully driverless autonomous stack.

My role at Voyage will be on the motion control team, which handles steering, acceleration, and deceleration. It’s the “act” part of the “sense-plan-act” robotics cycle. This should be a lot of fun!
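
As a toy illustration of what the “act” step involves, here’s a minimal sketch of a PID speed controller in Python. It is purely illustrative; the gains and the command interface are made up, and this is not Voyage’s actual control code.

    # Toy PID speed controller, illustrating the "act" step of sense-plan-act.
    # Gains and the throttle/brake interface are hypothetical placeholders.
    class PIDSpeedController:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, target_speed, current_speed, dt):
            """Return a throttle/brake command clamped to [-1, 1]."""
            error = target_speed - current_speed
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            command = self.kp * error + self.ki * self.integral + self.kd * derivative
            return max(-1.0, min(1.0, command))

    controller = PIDSpeedController(kp=0.3, ki=0.02, kd=0.05)
    throttle = controller.step(target_speed=10.0, current_speed=8.5, dt=0.1)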

One of the most attractive aspects of joining Voyage was the ability to make a big impact on a lot of different parts of the autonomy stack, and I hope to work on many different components over time. Keep an eye out!

In the meantime:

  1. Join me at Voyage!
  2. Check out Voyage in action 🙂

Monarch: A Fully Driverless Tractor

I had not heard of Monarch Tractor until I stumbled upon a KPIX report of a Bay Area winery buying its first fully driverless tractor.

https://cbsloc.al/3ncHvYA

This fall has been big for driverless launches: first Waymo, then Cruise, and now a bunch more. The fact that a slow tractor feels like no big deal is a testament to the quiet progress the industry has made recently.

Still a pretty big deal.

Farewell, Udacity!

After four and a half years, today is my last day at Udacity. On Monday, I will return to my roots in core self-driving car engineering. I’m excited!

Udacity has been the most successful and fun experience of my professional life. I leave with memories of amazing students, terrific colleagues, and work of which I am proud.

I am so grateful to Sebastian Thrun and the Udacity team for recruiting me here in 2016. Together we built the Self-Driving Car Engineer Nanodegree Program, which has trained thousands of autonomous vehicle engineers, along with many other amazing programs and courses, ranging from artificial intelligence to data science to web development to cloud computing, and beyond.

This small collection of photos captures a few of my many wonderful experiences with this amazing company.

The 2016 SDC Pre-Launch Dashboard!
Launching at TechCrunch Disrupt 2016!
Finding Lane Lines — the first Self-Driving Car Project
Ryan Keenan building Self-Driving Car projects
The first Self-Driving Car Team Retreat in Pajaro Dunes
Meeting Udacity students in Detroit
We won the first Udaciward!
Meeting Udacity students in Tokyo
Udaciward Outing: NASCAR in Sonoma
Teaching with Lufthansa’s FlyingLab at 30,000 feet!
Filming the final video!
We finished the Self-Driving Car Engineer Nanodegree Program!
Graduation!
Brok and the team went crazy for my birthday!
Autonomous Day at the Porsche Experience Center
Interviewing Sebastian Thrun for Udacity Talks
Working with the Infosys self-driving golf cart in Mysore, India!
Filming with the Baidu Apollo team
Teaching self-driving cars at the Navimotive Conference in Ukraine!
Presenting at NIO House in Hangzhou
South by Southwest!
The last School of Autonomous Systems Team Retreat, in San Francisco!
Interviewing C++ creator Bjarne Stroustrup
Live Teaching Samples!
Curriculum Team Q4 2019 Retreat in the redwoods
The Curriculum Team escaped!
Super Chris Vasquez!
We completed Los Pollos Hermanos Employee Training!
Ask Me Anything!
Farewell Karaoke!

Udacity is full of such wonderful people! My colleagues made me an amazing farewell video 🙂

I’m a little self-conscious about sharing it, because it’s hardly modest. But the video is a tour in and of itself through my time at Udacity, and it makes me so happy and proud.

If you pay attention, you can even get some hints about what I’ll be up to next 😉

Waymo’s Safety Report

Waymo has just published a lot of information about the safety and validation of its systems — more than I have yet reviewed. At the top level is a blog post in which Waymo breaks its safety analysis into three parts:

  • Hardware
  • Software
  • Operations

Within each of those parts is a fair bit more detail and structure, more than I have seen in the past. For example, regarding hardware:

A vehicle equipped with the Waymo Driver has four main subsystems, which form the ‘hardware layer’. This includes the vehicle itself; the systems used for steering and driving; the sensor suite built into the vehicle; and the computational platform used to run our software.

Undergirding these descriptions are three documents:

  • Safety Report. This is 48 pages of glossy material that seems similar to material Waymo has published in the past. There’s a lot of data, but the audience seems to be the public and policymakers rather than engineers and analysts.
  • Safety Methodologies and Safety Readiness Determinations. This looks neat. Lots more detail on the three layers of Waymo’s stack (hardware, software, operations). Lighter detail on how Waymo determines the safety readiness of the layers.
  • Waymo Public Road Safety Performance Data. Academic-style analysis of Waymo self-driving data from the Phoenix metro area in 2019. Unsurprisingly, the collisions recorded tend to be the fault of human drivers in other vehicles, not Waymo AVs. 
    This sentence caught my eye: “There were 47 contact events that occurred over this time period, consisting of 18 actual and 29 simulated contact events, none of which would be expected to result in severe or life-threatening injuries.”

I’m excited to read these documents over the coming days and see what they reveal. As Waymo writes in the blog post:

“There is currently no universally accepted approach for evaluating the safety of autonomous vehicles — despite the efforts of policymakers, researchers and companies building fully autonomous technologies.”

Asked Me Anything

Today I had the privilege of taking over Udacity’s Twitter account to host a 60-minute AMA on robotics and artificial intelligence. Some blasts from my Udacity past even made an appearance 🙂

It was fun 🙂

ADAS Lidar

New cars with camera systems attached to the windshield have become commonplace. Just about every automotive brand now has a front-camera-enabled ADAS system, optional or even standard, on its vehicles.

New cars with lidar systems are virtually non-existent.

There are several reasons for this, starting with cost: cameras are cheaper than lasers, all the more so because they are widely mass-produced for applications beyond automotive. Camera data is also easier for engineers to work with than lidar data, and probably more useful, too.

That said, lidar data is still very helpful, particularly as a complement to camera data.

Velodyne’s H800 lidar unit, which will go into production in 2021, is designed for ADAS, and it looks the part. The form factor itself appears designed to mount behind a windshield.

At $500, the price point is still high. I assume that $500 becomes more like $5,000 by the time the whole lidar-equipped system is sold to a car buyer. But even a $500 unit / $5,000 system is in the realm of feasibility for consumers, whereas a bottom-of-the-line VLP-16 that retailed for $4,000 probably wasn’t realistic.

In an ADAS system, these sensors should facilitate features like automatic lane-switching, route adjustments, and even fleet mapping, which could lead to ever-greater autonomy.

Lasers, coming to a car near you in 2021.

Is Anybody Big Enough to Acquire Uber ATG?

Kirsten Korosec has a blockbuster story in TechCrunch about Uber’s efforts to spin off its Advanced Technology Group (ATG). This marks the latest twist in the too-crazy-even-for-Hollywood saga of Uber’s self-driving unit.

Korosec reports that Uber is in discussions to sell ATG to Aurora, although Korosec notes that:

“Even with the expected depletion in Uber ATG’s valuation, it would be seemingly out-of-range for Aurora unless it was able to secure additional outside investment or structure the deal in a way that would allow Uber to keep some equity.”

Uber ATG has twice as many employees as Aurora, further raising the question of who would be acquiring whom.

ATG has been through all sorts of drama, from the controversy over hiring away a large portion of Carnegie Mellon University’s robotics team to the subsequent acquisition of Otto, a startup headed by former leaders of what is now Waymo (and was then called the Google Self-Driving Car Project). Google sued Uber and Otto founder Anthony Levandowski in particular, settling just as the case went to trial. That particular controversy was a factor in the ouster of Uber founder and CEO Travis Kalanick.

The biggest ATG crisis was the fatal collision with pedestrian Elaine Herzberg in Tempe, Arizona, in early 2018. That collision prompted ATG to halt all on-road autonomous vehicle testing for months, and cast a shadow over the whole industry.

I have always been impressed by the caliber of the ATG team, particularly the individuals who worked with Udacity to build our Self-Driving Car Engineer Nanodegree Program.

As Korosec’s article points out, Uber is trying to divest non-core groups across the board, including its JUMP bicycle division and several international affiliates. ATG is part of that effort. But, at 1,200 employees, ATG may be the single largest self-driving car organization in the world right now; perhaps only Cruise and Waymo are of comparable size. That raises all sorts of questions about who might be able, or willing, to absorb all that talent.

Big Day for ADAS

Two big ADAS announcements today!

  • Honda announced they will mass produce the world’s first SAE Level 3-capable vehicle.
  • Rivian posted specs and features for their upcoming electric light trucks, including the news that Driver+ will come standard on every vehicle.

Some caveats are in order.

Honda’s announcement seems to hinge on a recent approval from the Japanese government. Whether this Level 3 system will be available outside Japan is unclear.

Audi launched an A8 model several years ago that was L3-capable. But they never enabled Level 3 autonomy, due to regulatory concerns.

Rivian’s Driver+, meanwhile, has garnered comparisons to GM’s Super Cruise, which is among the best such systems in the industry.

“With the system engaged, your Rivian will automatically steer, adjust speed, and change lanes on your command. Enabled on select highways at launch, more road types will be introduced through over-the-air updates. Like all driver assistance systems available today, Driver+ requires your full attention on the road at all times and you should not use a hand-held device behind the wheel.”

That’s a big claim! I’m excited to see some reviews of the system.

Big Money for Autonomous Vehicles, Again

After a year or more spent in the “trough of disillusionment”, autonomous vehicles seem to be riding high again. Within a few weeks, Nuro has raised $500 million and Pony.ai has raised $400 million. Luminar also continues its path towards a SPAC merger that would raise $400 million at a $3.4 billion valuation.

My old business school teacher, Mark Leslie, used to preach that for startups, “cash is oxygen, and oxygen is life.” It’s good to see that there is still lots of life in autonomous vehicles.

Lyft Level 5’s Machine Learning Pipeline

Lyft Level 5 recently published an amazing overview of their PyTorch-based machine learning pipeline.

They confess that a few years ago, their development process was slow:

“Our first production models were taking days to train due to increases in data and model sizes.…Our initial deployment process was complex and model developers had to jump through many hoops to turn a trained model into a “AV-ready” deployable model….We saw from low GPU and CPU utilization that our initial training framework wasn’t able to completely utilize the hardware.”

The post proceeds to describe the new pipeline Lyft built to overcome these obstacles. They started with a proof-of-concept for lidar point cloud segmentation, and then grew that into a production system.

The pipeline delivers a number of infrastructure wins.

Testing. The pipeline incorporates continuous integration testing, both to ensure that the models don’t regress, and also to verify that the code researchers write will run in the PyTorch-based vehicle environment.
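
The post doesn’t include code, but here is a minimal sketch of what such a CI check might look like; the artifact paths, tolerance, and test name are my own placeholders, not Lyft’s.

    # Hypothetical CI regression test: the trained model must still compile to
    # TorchScript, and its outputs must match stored reference outputs.
    import torch

    def test_model_still_scriptable_and_unchanged():
        model = torch.load("artifacts/latest_model.pt")        # placeholder path
        model.eval()
        scripted = torch.jit.script(model)                     # fails if researcher code is not scriptable

        batch = torch.load("artifacts/reference_batch.pt")     # placeholder path
        expected = torch.load("artifacts/reference_output.pt") # placeholder path
        with torch.no_grad():
            actual = scripted(batch)
        assert torch.allclose(actual, expected, atol=1e-4)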

Containerization. Lyft invested in a uniform container environment, to minimize the differences between local and cloud model training.

Deployment. The system relies heavily on LibTorch and TorchScript for deployment to the vehicle’s C++ runtime. Relying on existing libraries reduces the amount of custom code Lyft’s team needs to write.
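
Here is a rough sketch of what that export step typically looks like in PyTorch. The model below is a stand-in, not Lyft’s network, and the file names are placeholders.

    # Export a trained model to TorchScript so a LibTorch-based C++ runtime
    # can load it with torch::jit::load. The model here is a stand-in.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(16, 2, kernel_size=1),
    )
    model.eval()

    example_input = torch.rand(1, 3, 512, 512)       # input shape is illustrative
    traced = torch.jit.trace(model, example_input)   # or torch.jit.script for models with control flow
    traced.save("deploy/model.pt")                   # placeholder path, loaded by the C++ runtime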

Distributed Training. PyTorch provides a fair bit of built-in support for distributed training across GPU clusters.
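
For reference, here is a minimal sketch of that built-in support using DistributedDataParallel, launched with torchrun; the model, data, and hyperparameters are placeholders, not Lyft’s.

    # Minimal DistributedDataParallel training loop (one process per GPU,
    # launched with: torchrun --nproc_per_node=<num_gpus> train.py).
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group("nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = DDP(torch.nn.Linear(128, 2).cuda(local_rank), device_ids=[local_rank])
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

        for _ in range(100):                           # placeholder training loop
            inputs = torch.randn(32, 128).cuda(local_rank)
            targets = torch.randint(0, 2, (32,)).cuda(local_rank)
            loss = torch.nn.functional.cross_entropy(model(inputs), targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()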

There’s a lot more in the post. It’s a pretty rare glimpse of a machine learning team’s internal infrastructure, so check it out!