Waymo Opens To The Public

Last week, Waymo announced it will put 10 of its vehicles on Lyft’s network in Phoenix. Any Lyft user will be able to ride.

This is super-duper exciting! Waymo is several years ahead of everybody else developing self-driving cars, but until now their vehicles have been off-limits to the general public. I see them scooting around Mountain View all the time, but the only way I can get a ride in one is to call in a favor from a friend who works there.

Now Waymos will be open, albeit in very small initial numbers, to anybody in Phoenix, via Lyft’s network.

This announcement also makes Phoenix the second place in the world, alongside Lyft’s partnership with Aptiv in Las Vegas, where a member of the general public can hail a self-driving robotaxi. They still come with safety drivers, but it’s nonetheless a big step forward.

Literature Review: Capsule Networks

My Udacity colleague, Cezanne Camacho, is preparing a presentation on capsule networks and gave a draft version in the office today. Cezanne is a terrific engineer and teacher who has already written a great blog post on capsule networks, and she graciously allowed me to share some of it here.

Capsule networks come from a 2017 paper by Sara Sabour, Nicholas Frosst, and Geoffrey Hinton at Google: “Dynamic Routing Between Capsules”. Hinton, in particular, is one of the world’s foremost authorities on neural networks.

As my colleague Cezanne writes on her blog:

“Capsule Networks provide a way to detect parts of objects in an image and represent spatial relationships between those parts. This means that capsule networks are able to recognize the same object in a variety of different poses even if they have not seen that pose in training data.”

Love the Pacman GIF. Did I mention Cezanne is also an artist?

Cezanne explains that a “capsule” encompasses features that make up a piece of an image. Think of an image of a face, for example, and imagine capsules that capture each eye, and the nose, and the mouth.

These capsules organize into a tree structure. Larger structures, like a face, would be parent nodes in the tree, and smaller structures would be child nodes.

“In the example below, you can see how the parts of a face (eyes, nose, mouth, etc.) might be recognized in leaf nodes and then combined to form a more complete face part in parent nodes.”

“Dynamic routing” plays a role in capsule networks:

“Dynamic routing is a process for finding the best connections between the output of one capsule and the inputs of the next layer of capsules. It allows capsules to communicate with each other and determine how data moves through them, according to real-time changes in the network inputs and outputs!”

Dynamic routing is ultimately implemented via an iterative routing process that Cezanne does a really nice job describing, along with the accompanying math, in her blog post.
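For a rough sense of what that iterative process looks like, here is a simplified NumPy sketch of the routing-by-agreement loop from the Sabour, Frosst, and Hinton paper (this is my own toy illustration, not Cezanne's PyTorch implementation; see her notebook for the real thing):

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    # Squash nonlinearity: shrinks short vectors toward zero and long
    # vectors toward unit length, so a capsule's output length can be
    # interpreted as the probability that the entity it detects is present.
    norm_sq = np.sum(v ** 2, axis=axis, keepdims=True)
    norm = np.sqrt(norm_sq + eps)
    return (norm_sq / (1.0 + norm_sq)) * (v / norm)

def dynamic_routing(u_hat, n_iters=3):
    """Routing-by-agreement between two capsule layers.

    u_hat: each lower capsule's prediction for each higher capsule,
           shape (n_lower, n_higher, dim_higher).
    Returns the higher-layer capsule outputs, shape (n_higher, dim_higher).
    """
    n_lower, n_higher, _ = u_hat.shape
    b = np.zeros((n_lower, n_higher))  # routing logits, start uniform
    for _ in range(n_iters):
        # Softmax over higher capsules: how much each lower capsule
        # sends to each parent.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        s = (c[..., None] * u_hat).sum(axis=0)  # weighted sum of predictions
        v = squash(s)                           # candidate parent outputs
        # Increase logits where prediction and output agree (dot product).
        b = b + (u_hat * v[None, :, :]).sum(axis=-1)
    return v

# Toy example: 6 lower-level capsules route to 2 higher-level
# capsules with 4-dimensional pose vectors.
rng = np.random.default_rng(0)
u_hat = rng.normal(size=(6, 2, 4))
v = dynamic_routing(u_hat)
print(v.shape)  # (2, 4); each row has length < 1 thanks to squash
```

The key idea is the agreement update at the end of the loop: lower capsules whose predictions line up with a parent's output get routed to that parent more strongly on the next iteration.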

Capsule networks seem to do well with image classification on a few datasets, but they haven’t been widely deployed yet because they are slow to train.

In case you’d like to play with capsule networks yourself, Cezanne also published a Jupyter notebook with her PyTorch implementation of the Sabour, Frosst, and Hinton paper!

NXP Earnings And The Future Of Automotive Manufacturing

I spent a few hours this morning racing down the rabbit hole of NXP’s Q1 2019 earnings call, which I wrote up for Forbes.com:

“The transcript highlights, in particular, the distinction between NXP’s traditional automotive semiconductor business, which declined, and its advanced driver assistance systems (ADAS) and battery management systems (BMS), both of which grew dramatically, albeit from small bases.”

NXP is kind of like the automotive industry in miniature: vehicle sales are declining today, causing decreases in revenue associated with traditional automotive manufacturing. But in the not-so-distant future, mobility will change and new products, like advanced driver assistance systems and battery management systems, will grow quickly.

Read the whole thing.

And I should also mention that my Forbes.com editor, Alan Ohnsman, has recruited a terrific stable of automotive writers. The daily output of the Forbes.com transportation section is voluminous. Just in the last day you can read about shadow testing at Tesla, Ford’s Q1 earnings, the effect of self-driving cars on the automotive repair market, Ford’s connected vehicles efforts, GM’s upcoming electric pickup truck, Tesla’s cash crunch, Ford’s investment in Rivian, and Waymo’s lidar units.

Self-Driving Car Ethics

My Udacity colleague Vienna Harvey sat down with Australian podcaster Zoe Eather to discuss the role of both ethics and education as they relate to self-driving cars. It’s a fun episode 🙂

This interview is part of Zoe’s Smart Community podcast, which covers everything from infrastructure, to data, to climate change, to mobility.

Prior to Vienna’s interview, I got to take Zoe for a spin in Carla, Udacity’s self-driving car. Zoe was delightful and I think you’ll enjoy listening to her and Vienna geek out about self-driving cars.

AI Robotic Racing

Lockheed Martin, NVIDIA, and the Drone Racing League have partnered to create AIRR: Artificial Intelligence Robotic Racing.

The gist seems to be that the top team to build an autonomous racing drone will win $1 million, and there is an additional $250,000 available to the first team whose drone beats a human drone pilot in a race.

I had not known about the Drone Racing League, but the videos look pretty cool.

As a person without much hand-eye coordination, it kind of blows my mind that pilots can navigate these types of environments.

On the other hand, precisely because my hand-eye coordination is so limited, it seems like computers would be much better at this than I am. Maybe better than everybody?

Bjarne Stroustrup on 40 Years of C++

Recently I sat down with Bjarne Stroustrup, the creator of C++, to discuss his career and the evolution of C++ over the years.

We discussed Bjarne’s origins in Denmark, his PhD work at Cambridge, the origins of C++ at Bell Labs, how to teach C++, the ISO committee that governs C++, and what exactly made Bjarne’s career so successful. There’s a lot more, too 😀

Watch the interview here.

And if you are interested in learning C++ from Bjarne (and me, and many other instructors), enroll in Udacity’s C++ Nanodegree Program!

Waymo Goes (Just A Little) Big In Michigan

Waymo just announced that it has decided on a facility in Detroit to modify its fleet of Chrysler and Jaguar Land Rover vehicles into self-driving cars.

MarketWatch reports that Waymo will create 400 jobs at the site, which is meaningful, but also not game-changing. This seems primarily like an expansion of Waymo’s existing facility in nearby Novi, Michigan. The goal is probably to do the same type of work on more vehicles, not to fundamentally expand the scope of operation.

By all appearances, Waymo purchases what are essentially off-the-shelf Chrysler Pacifica and Jaguar I-PACE vehicles, and brings them to this facility to convert them into autonomous vehicles.

I imagine there are a lot of similarities between the work Waymo does in Michigan and the work AutonomouStuff has been doing in Peoria, Illinois, for years. To become a self-driving car, an off-the-shelf vehicle needs augmented power supplies, new computers, a lot more sensors, and a substantial amount of wiring.

That takes a lot of work, especially if Waymo plans to do that for tens of thousands of vehicles.

However, Waymo does not appear to be building out a manufacturing plant to build the vehicles themselves. Maybe things will head in that direction eventually, but I’d bet not.

There has been a lot of speculation that the automotive industry will start to look something like the airline industry. Ridesharing companies will purchase vehicles from manufacturers like Chrysler, the same way airlines purchase airplanes from manufacturers like Boeing. Then the ridesharing company or airline outfits the vehicles or airplanes to its own specification. The latest Waymo news feels like a step in that direction.

Jupyter Graffiti Brings Screencasts and Terminals and More

Jupyter Notebooks are terrific and highly interactive tools that are extremely popular for both publishing data science results and for teaching concepts.

Udacity uses Jupyter extensively, particularly for teaching machine learning and data science.

We love Jupyter so much, in fact, that Udacity engineers have developed a set of enhancements for Jupyter called “Graffiti”. Graffiti allows Udacity instructors to record screencasts, mouseovers, and audio walkthroughs of code. Those features get embedded directly into our Jupyter notebooks.

In Udacity’s C++ Nanodegree Program, we use Graffiti to add terminals to Jupyter notebooks, so that we can compile, run, and debug C++ programs from within Jupyter Notebooks. It’s really pretty neat.

My Udacity colleague, Will Kessler, has written a blog post about Graffiti and how it enhances Jupyter Notebooks to improve the learning experience.

Also: Udacity has released Graffiti as an open-source project!

Apple Lidar: Designed In California, Built…Somewhere

CNBC reports that Apple is in discussions with “at least four companies as possible suppliers for next-generation lidar sensors in self-driving cars.”

The report also suggests that, “The iPhone maker is setting a high bar with demands for a ‘revolutionary design.’…In addition to evaluating potential outside suppliers, Apple is believed to have its own internal lidar sensor under development.”

Waymo managed to pull off this trick with its Laser Bear Honeycomb lidar, designed in-house and the subject of pretty intense litigation with Uber.

If anything, Apple’s hardware design strengths should make this an even easier task for Apple than for Waymo, so it seems totally plausible Apple could pull this off.

The question is: to what end?

I know very little about why Waymo started designing its own lidar, but I know they started building self-driving cars with the Velodyne HDL-64 “chicken bucket” model.

My guess is that Google began developing its own lidar several years ago not because it needed a much better sensor, but rather because it couldn’t get enough sensors of any type.

Several years ago, when Google would have begun developing its lidar program, Velodyne was one of the only lidar manufacturers in the world. And even Velodyne was severely constrained in the number of units it could produce. There was a period a few years ago when the waiting list to buy a Velodyne lidar unit was months long.

In that world, it would have made a lot of sense for Google to begin developing its own lidar program. That would’ve reduced one possible bottleneck for building self-driving cars at scale.

Fast-forward to 2019. Velodyne has taken massive investment capital to build lidar factories, and there are upwards of sixty lidar companies (mostly startups) developing sensors. Today, there isn’t the same need or urgency to develop custom lidar units. In fact, all of those lidar startups are basically doing that on their own.

So it’s not totally clear to me what Apple would gain from creating their own lidar program.

Volkswagen Is Testing Real Driving Conditions In Hamburg

Volkswagen announced it is testing (present tense) self-driving cars in Hamburg. The press release details that there are five self-driving e-Golfs testing on a three-kilometer stretch of road in Hamburg.

This would be a minor announcement in the US, where a number of different companies are testing fleets of this size (or bigger) within geofences of this size (or bigger). But surprisingly little testing has happened on public roads in Germany, so it is terrific to see Volkswagen take this step. This might actually be the first major test I can recall in that country.

That said, the press release is a little coy on the exact setup. While the scenario is described as “real driving conditions”, the test is also said to be taking place in a special autonomous vehicle “test bed” that is still under construction.

My sense is that this test is probably not on truly “public” roads that any regular driver might pass through. That said, it seems like a good precursor to that kind of test.

“This is the first time Volkswagen has begun to test automated driving to Level 4 at real driving conditions in a major German city. From now, a fleet of five e-Golf, equipped with laser scanners, cameras, ultrasonic sensors and radars, will drive on a three-kilometer section of the digital test bed for automated and connected driving in the Hanseatic city.”

The press release does have some interesting and specific details about the vehicles themselves:

“The e-Golf configured by Volkswagen Group Research have eleven laser scanners, seven radars and 14 cameras. Up to 5 gigabytes of data are communicated per minute during the regular test drives, each of which lasts several hours. Computing power equivalent to some 15 laptops is tucked away in the trunk of the e-Golf.”