Vehicle Innovations Challenge

A couple of professors from the University of Pennsylvania, John Paul MacDuffie and Rahul Kapoor, are running a “wisdom of the crowds” project about the future of the automotive industry. It’s called the “2017–2018 Vehicle Innovations Challenge”.

It’s fun and stretched my mind a little bit about where the car industry is going over the next year.

They ask nine questions, which you can see on the homepage. Anybody (including you!) can sign up and participate for free.

And if you’re interested, you can review their wisdom of the crowds challenge from last year and remind yourself what the hot questions were on everybody’s mind a year ago.

Go sign up for this year’s challenge and throw down your votes.

Waymo Safety Report

Waymo recently released a Safety Report that explains how they test and validate their self-driving cars.

On the one hand, it is awesome that Waymo wrote and released this. On the other hand, it’s not obvious who the audience is.

It’s clearly a marketing document. It even feels a little bit like something you’d pick up in a new car showroom. The pages have snazzy designs and pleasing fonts and graphics. Many of the 43 pages are given over to just a handful of statistics.

It’s also definitely a one-time document, as compared to the monthly safety report cards Google used to distribute as part of the Self-Driving Car Project. There is nothing in this new Safety Report that is calendar-specific.

That said, there is a wealth of high-level information in the report. Waymo goes into some detail on its testing processes and the scenarios that it tests. Not enough detail to be useful to engineers hoping to replicate Waymo’s processes, but enough to reassure the general public that Waymo has indeed thought this through.

Several sections explain how Waymo’s self-driving cars work, and one section breaks down the ways in which Waymo analyzes safety (behavioral safety, functional safety, crash safety, operational safety, and non-collision safety).

But compared to the academic paper recently published by Mobileye, “On a Formal Model of Safe and Scalable Self-driving Cars”, Waymo’s report is aimed much more toward journalists and regulators and I suppose whoever in the general public is likely to flip through 43 pages of safety reporting.

My main takeaway from this is that Waymo must be inching ever closer to a public rollout of their vehicles. This is the type of document that they can send to new users, who will feel better having 43 pages of safety text, even if most of them never actually read it.

And if that’s the case, thank goodness for small steps toward a much better future.

Automotive Offices in Silicon Valley

This map of Silicon Valley appeared in Computerworld two years ago. Now there are even more automotive companies in the Bay Area.

One topic that has come up now and again for me is the intersection of Silicon Valley and the automotive industry. There are a lot of angles to this topic, but one thing I have generally been impressed by is how traditional automotive companies run their Silicon Valley offices.

There are a lot of obstacles to overcome: cost of living, new employees, veteran employees, communication with headquarters, division of labor across teams, multi-time-zone meetings.

The companies I’ve seen that do this well — and a surprising number of them seem to do it well — have the right mix of veteran managers and younger line engineers.

It’s hard to say exactly what that mix is, and it can vary considerably depending on the purpose of the office. Small offices focused primarily on technology scouting can tilt pretty far toward veteran managers on rotation from headquarters. Larger offices that are performing significant engineering work in Silicon Valley often succeed with a larger share of actual Silicon Valley engineers on the payroll.

The veteran managers provide access to the internal corporate social networks and informal power structures that facilitate progress in any organization.

The Silicon Valley engineers provide some of the raw engineering talent for which the Valley is famous, and perhaps also a sense of how Silicon Valley works. “This is how we solved this problem at my last startup,” for example.

Another surprise is the number of junior engineers who went to school outside of the Bay Area, were hired by a traditional automotive company, and then shipped straight to California. These engineers have some of the attributes of veteran employees — their current automotive employer is the only employer they’ve ever known — and some attributes of traditional Silicon Valley engineers — youth and migration and audacity.

If I had to stick my finger in the wind and call a number, I’d say maybe 1/3 veteran managers and 2/3 Silicon Valley engineers (also marketers, business development, etc.) is the right mix, but it would be interesting to have firmer numbers on this. It also seems like a good case study for a business school.

Didi Challenge Finalists

Last spring Udacity partnered with Didi to release datasets from Udacity’s self-driving car and test how well groups of engineers from around the world could track vehicles.

This Udacity-Didi Challenge was a big effort for us at Udacity — in terms of gathering data. It was also a huge effort for teams of students worldwide, who tackled cutting-edge research challenges in an effort to win the $100,000 prize.

We pared the teams of entrants down to five, based on their accuracy at vehicle and pedestrian tracking. Those five teams presented to a panel of judges at Udacity. Their work was incredibly impressive and a real testament to the ability of people from around the world to contribute to autonomous vehicle engineering.

Here are their presentations:

https://www.slideshare.net/DavidSilver2/udacitydidi-challenge-finalists

Cool Projects from Udacity Students

I have a pretty awesome backlog of blog posts from Udacity Self-Driving Car students, partly because they’re doing awesome things and partly because I fell behind on reviewing them for a bit.

Here are five that look pretty neat.

Visualizing lidar data

Alex Staravoitau

https://navoshta.com/

Alex visualizes lidar data from the canonical KITTI dataset with just a few simple Python commands. This is a great blog post if you’re looking to get started with point cloud files.

“A lidar operates by streaming a laser beam at high frequencies, generating a 3D point cloud as an output in realtime. We are going to use a couple of dependencies to work with the point cloud presented in the KITTI dataset: apart from the familiar toolset of numpy and matplotlib we will use pykitti. In order to make tracklets parsing math easier we will use a couple of methods originally implemented by Christian Herdtweck that I have updated for Python 3, you can find them in source/parseTrackletXML.py in the project repo.”
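Under the hood, KITTI stores each velodyne scan as a flat binary file of float32 (x, y, z, reflectance) records, which is why a couple of NumPy calls are all it takes to get started. Here is a minimal, self-contained sketch of that first step, using a synthetic in-memory buffer in place of a real .bin file (pykitti wraps this, plus the tracklet parsing, for you):

```python
# Minimal sketch: reading a KITTI-style velodyne scan with NumPy alone.
# KITTI stores each scan as float32 records of (x, y, z, reflectance);
# a synthetic buffer stands in here for a real .bin file on disk.
import io
import numpy as np

rng = np.random.default_rng(0)
fake_scan = rng.uniform(-40, 40, size=(1000, 4)).astype(np.float32)
buffer = io.BytesIO(fake_scan.tobytes())  # stands in for open("scan.bin", "rb")

# These two lines work unchanged on a real KITTI velodyne file.
points = np.frombuffer(buffer.read(), dtype=np.float32).reshape(-1, 4)
xyz, reflectance = points[:, :3], points[:, 3]

# A quick filter for a bird's-eye view: points within 20 m of the sensor.
near = xyz[np.linalg.norm(xyz[:, :2], axis=1) < 20.0]
print(f"{len(points)} points total, {len(near)} within 20 m")
```

From there, a matplotlib scatter of the first two columns of `near` gives the kind of top-down view Alex renders in the post.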

TensorFlow with GPU on your Mac

Darien Martinez

The most popular laptop among Silicon Valley software developers is the MacBook Pro. The current version of the MacBook Pro, however, does not include an NVIDIA GPU, which prevents it from using CUDA and cuDNN, NVIDIA’s tools for accelerating deep learning. Older MacBook Pro machines do have NVIDIA GPUs, though, and Darien’s tutorial shows you how to take advantage of this if you have one.

“Nevertheless, I could see great improvements on performance by using GPUs in my experiments. It worth trying to have it done locally if you have the hardware already. This article will describe the process of setting up CUDA and TensorFlow with GPU support on a Conda environment. It doesn’t mean this is the only way to do it, but I just want to let it rest somewhere I could find it if I needed in the future, and also share it to help anybody else with the same objective. And the journey begins!”

(Part 1) Generating Anchor boxes for Yolo-like network for vehicle detection using KITTI dataset.

Vivek Yadav

Vivek is constantly posting super-cool things he’s done with deep neural networks. In this post, he applies YOLOv2 to the KITTI dataset. He does a really nice job going through the process of how he prepares the data and selects his parameters, too.

“In this post, I covered the concept of generating candidate anchor boxes from bounding box data, and then assigning them to the ground truth boxes. The anchor boxes or templates are computed using K-means clustering with intersection over union (IOU) as the distance measure. The anchors thus computed do not ignore smaller boxes, and ensure that the resulting anchors ensure high IOU between ground truth boxes. In generating the target for training, these anchor boxes are assigned or are responsible for predicting one ground truth bounding box. The anchor box that gives highest IOU with the ground truth data when located at its center is responsible for predicting that ground truth label. The location of the anchor box is the center of the grid cell within which the ground truth box falls.”
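The clustering step Vivek describes can be sketched in a few lines of NumPy. This is not his code, and the boxes below are synthetic stand-ins for real KITTI labels, but the distance measure (1 minus IOU, with box centers aligned) is the YOLOv2 recipe he follows:

```python
# Sketch of YOLOv2-style anchor generation: k-means over (width, height)
# pairs with 1 - IOU as the distance measure. Synthetic boxes stand in
# for KITTI bounding-box labels.
import numpy as np

def iou_wh(boxes, anchors):
    """IOU between (w, h) boxes and (w, h) anchors, centers aligned."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)  # max IOU = min distance
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors

rng = np.random.default_rng(1)
boxes = rng.uniform(10, 200, size=(500, 2))  # synthetic (w, h) pairs in pixels
anchors = kmeans_anchors(boxes, k=5)
print(anchors)
```

Using IOU rather than Euclidean distance is the key point: it keeps small boxes from being swamped by large ones, exactly the property Vivek highlights.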

Building a Bayesian deep learning classifier

Kyle Dorman

“Illustrating the difference between aleatoric and epistemic uncertainty for semantic segmentation. You can notice that aleatoric uncertainty captures object boundaries where labels are noisy. The bottom row shows a failure case of the segmentation model, when the model is unfamiliar with the footpath, and the corresponding increased epistemic uncertainty.”

This post is kind of a tour de force in investigating the links between probability, deep learning, and epistemology. Kyle is basically replicating and summarizing the work of Cambridge researchers who are trying to merge Bayesian probability with deep learning. It’s long, and it will take a few passes through to grasp everything here, but I am interested in Kyle’s assertion that this is a path to merge deep learning and Kalman filters.
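One construction behind this line of work, Gal and Ghahramani’s Monte Carlo dropout, is easy to sketch: leave dropout switched on at inference time, run the same input through the network many times, and read the spread of the predictions as epistemic uncertainty. The toy network below uses random, untrained weights purely to illustrate the mechanics:

```python
# Toy NumPy illustration of Monte Carlo dropout: dropout stays active at
# test time, and the variance across stochastic forward passes serves as
# an epistemic uncertainty estimate. Weights are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 1))

def forward(x, drop=0.5):
    h = np.maximum(0.0, x @ W1)          # ReLU hidden layer
    mask = rng.random(h.shape) > drop    # dropout is NOT turned off
    return (h * mask / (1 - drop)) @ W2  # inverted-dropout rescaling

x = rng.normal(size=(1, 8))
samples = np.array([forward(x).item() for _ in range(200)])
mean, epistemic_std = samples.mean(), samples.std()
print(f"prediction {mean:.2f} +/- {epistemic_std:.2f}")
```

In a real model the same trick applies per pixel, which is how the segmentation uncertainty maps in the caption above are produced.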

“Self driving cars use a powerful technique called Kalman filters to track objects. Kalman filters combine a series of measurement data containing statistical noise and produce estimates that tend to be more accurate than any single measurement. Traditional deep learning models are not able to contribute to Kalman filters because they only predict an outcome and do not include an uncertainty term. In theory, Bayesian deep learning models could contribute to Kalman filter tracking.”
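To make the connection concrete, here is the one-dimensional Kalman measurement update. The point of the quote is the R term below: a traditional network hands you a measurement z with no R, while a Bayesian network could supply both. All numbers are illustrative:

```python
# One-dimensional Kalman measurement update. A Bayesian deep learning
# model's predicted variance could play the role of R; the numbers here
# are illustrative, not from any real tracker.
prior_mean, prior_var = 10.0, 4.0   # predicted object position and its variance
z, R = 12.0, 1.0                    # measurement and measurement variance

K = prior_var / (prior_var + R)                # Kalman gain: trust in measurement
post_mean = prior_mean + K * (z - prior_mean)  # pulled toward the measurement
post_var = (1 - K) * prior_var                 # lower than either input variance

print(post_mean, post_var)
```

The fused estimate (11.6 with variance 0.8) is tighter than either the prediction or the measurement alone, which is exactly the "more accurate than any single measurement" property the quote describes.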

Build your own self driving (toy) car

Bogdan Djukic

Bogdan started off with the now-standard Donkey Car instructions, and actually got ROS running!

“I decided to go for Robotic Operating System (ROS) for the setup as middle-ware between Deep learning based auto-pilot and hardware. It was a steep learning curve, but it totally paid off in the end in terms of size of the complete code base for the project.”

Autonomous Vehicles are Power Hungry

Automotive News highlights a problem that we thought a lot about during my time at Ford: the power consumption of autonomous vehicles.

Some of today’s prototypes for fully autonomous systems consume 2 to 4 kilowatts of electricity — the equivalent of having 50 to 100 laptops continuously running in the trunk, according to BorgWarner Inc.

That has huge implications for fuel economy:

The autonomous features on a Level 4 or 5 vehicle, which can operate without human intervention, devour so much power that it makes meeting fuel economy and carbon emissions targets 5 to 10 percent harder, according to Chris Thomas, BorgWarner’s chief technology officer.
…
“They’re worried about one watt, and now you’re adding a couple thousand,” Thomas said. “It’s not trivial.”
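A back-of-the-envelope calculation shows why this bites battery-electric vehicles especially hard. The 30 mph average speed and 300 Wh per mile consumption below are illustrative assumptions on my part, not figures from the article:

```python
# Back-of-the-envelope sketch: what a 2-4 kW compute load costs a
# battery-electric vehicle. Average speed and per-mile consumption are
# illustrative assumptions, not figures from the Automotive News piece.
avg_speed_mph = 30.0
drive_wh_per_mile = 300.0

for compute_kw in (2.0, 4.0):
    extra_wh_per_mile = compute_kw * 1000.0 / avg_speed_mph
    overhead = extra_wh_per_mile / drive_wh_per_mile
    print(f"{compute_kw:.0f} kW of compute ~ {overhead:.0%} extra energy per mile")
```

At urban speeds, a constant multi-kilowatt load is a meaningful fraction of the energy the vehicle spends on actually moving, which helps explain the preference for hybrids discussed next.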

I would bet that a fair bit of what NVIDIA is building with its Pegasus units, and what Tesla is working on with AMD, and what Waymo is working on with Intel, is getting the required computational speed at acceptable power consumption levels.

Automotive News hypothesizes that the solution may lie, at least initially, with plug-in hybrids:

“If you are trying to maximize your utilization” of an autonomous vehicle, a battery-electric car “is really restrictive for your business,” Jim Farley, Ford’s president of global markets, told investors on Oct. 3. He said Ford believes hybrids are “the right tech to start with.”

As the owner and driver of a plug-in hybrid Ford C-MAX Energi, I can say with some authority that the fuel efficiency of an electric vehicle paired with the range of gasoline is great.

Hardware News

NVIDIA CEO Jensen Huang took the stage at GTC Europe, in Munich, to announce many things. One thing he announced is the newest member of the DRIVE PX family.

DRIVE PX is NVIDIA’s automotive computational platform, and the newest member is DRIVE PX Pegasus.

From NVIDIA’s website:

“NVIDIA DRIVE PX Pegasus is powered by four high-performance AI processors. It couples two of NVIDIA’s newest Xavier system-on-a-chip processors — featuring an embedded GPU based on the NVIDIA Volta architecture — with two next-generation discrete GPUs with hardware created for accelerating deep learning and computer vision algorithms. The system will provide the enormous computational capability for fully autonomous vehicles in a computer the size of a license plate, drastically reducing energy consumption and cost.

Pegasus is designed for ASIL D certification — the industry’s highest safety level — with automotive inputs/outputs, including CAN (controller area network), Flexray, 16 dedicated high-speed sensor inputs for camera, radar, lidar and ultrasonics, plus multiple 10Gbit Ethernet connectors. Its combined memory bandwidth exceeds 1 terabyte per second.”

The Voltas have a reputation for being blazing fast, so it’s exciting to see them make their way onto automotive hardware.


In other hardware news, Velodyne is increasing their lidar production capacity by 4x. This is all driven by autonomous vehicle demand.

In practical terms, this means it is now possible to purchase a Velodyne lidar and get it more or less immediately. When we ordered our Velodyne HDL-32E in the spring, we had to wait several months to get our unit.

Small steps toward a much better world.

Automotive Manufacturers and Lidar

GM just purchased a lidar startup called Strobe that I had never heard of before. Strobe has flown well below the radar, but GM Cruise CEO Kyle Vogt says that they have compressed their lidar down to a chip that fits in one hand.

It is interesting that lidar is increasingly becoming a competitive differentiator between self-driving car companies. Lidar, in fact, is the basis for the lawsuit between Waymo and Uber.

Here’s where I think a few top autonomous vehicle companies are with lidar:

Waymo: They appear to be building their own lidar. They’re also suing Uber over theft of lidar documents.

Tesla: Elon Musk famously believes lidar is not necessary for self-driving cars.

Uber: Photos indicate that Uber ATG self-driving cars carry what looks like a Velodyne HDL-32E.

GM Cruise: They just bought Strobe.

Ford: Invested in Velodyne.

Baidu: Invested in Velodyne.

Toyota: They’re using Luminar units in their recently unveiled prototype vehicles.

TEDx Wilmington

On Tuesday, October 17th, I will be giving a TED talk at TEDx Wilmington’s Transportation Salon! Come by to see me and listen to some other cool speakers.

There will be talks on autonomous vehicle technology, connected cars, transportation regulation and privacy, and big data for transportation, among other topics.

The title of my talk will be “How to Program a Self-Driving Car”, and I will walk through how engineers program self-driving cars, through the lens of student projects from the Udacity Self-Driving Car Engineer Nanodegree Program.

Should be fun!