The Six NVIDIA Xavier Processors

NVIDIA’s Xavier system on a chip (SoC) for self-driving cars recently passed TÜV ISO 26262 functional safety testing. Reading NVIDIA’s blog post on this achievement, I was struck by just how many specialized processors Xavier has, many of which were new to me.

Also, did you know there exists a site called Wikichip?

GPU
Of course an NVIDIA SoC will have a GPU, in this case a Volta GPU. The Volta GPU on the Xavier is optimized for inference. That means the neural network is probably going to be trained somewhere else and then loaded onto this platform when it’s ready for production deployment.

Wikichip lists this GPU at 22.6 tera-operations per second (TOPS). For comparison, Tesla’s purpose-built self-driving chip boasts 36 TOPS. I confess I don’t know enough about just how far to the redline these chips run to understand whether 22.6 TOPS vs. 36 TOPS is basically the same thing or wildly different.
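Strictly as arithmetic, the gap is easy to put in perspective. A quick sketch, using the marketing figures quoted above rather than any measured, sustained throughput:

```python
# Back-of-the-envelope comparison of the two claimed figures.
# These are marketing numbers, not measured throughput, which
# depends heavily on workload, precision, and thermals.
xavier_tops = 22.6  # Wikichip's figure for Xavier's Volta GPU
tesla_tops = 36.0   # Tesla's claimed figure for its FSD chip

ratio = tesla_tops / xavier_tops
print(f"Tesla claims {ratio:.2f}x Xavier's TOPS")  # about 1.59x
```

A roughly 60% edge on paper, though the ratio says nothing about how close to the redline either chip actually runs.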

CPU
Although NVIDIA is a GPU company, the Xavier has a CPU: eight of NVIDIA’s custom Carmel cores. I assume they’re fast.

VPU
Xavier includes a vision processing unit (VPU), which makes sense for a SoC designed for lots of cameras.

NVIDIA sometimes calls this a “Stereo/Optical Flow accelerator.” Optical flow is a computer vision technique for estimating motion (direction, velocity) from sequential camera frames, while stereo matching infers depth from paired cameras. I assume more generally the goal is to accelerate vision algorithms on sequential frames of video.
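To make the idea concrete, here’s a toy sketch of flow estimation in one dimension: a brightness pattern shifts between two made-up “frames,” and a brute-force matcher recovers the displacement. Everything here is invented for illustration; real optical flow estimates a 2-D motion vector per pixel, which is why dedicated hardware helps.

```python
def estimate_shift(frame_a, frame_b, max_shift=3):
    """Return the integer displacement that best maps frame_a onto frame_b.

    Brute-force sum-of-squared-differences search. A real implementation
    would also guard against near-empty overlap regions at large shifts.
    """
    best_shift, best_error = 0, float("inf")
    n = len(frame_a)
    for d in range(-max_shift, max_shift + 1):
        # Sum of squared differences over the region where both frames overlap.
        error = sum(
            (frame_a[i] - frame_b[i + d]) ** 2
            for i in range(max(0, -d), min(n, n - d))
        )
        if error < best_error:
            best_shift, best_error = d, error
    return best_shift

# A bump of brightness that moves 3 pixels to the right between frames.
frame1 = [0, 0, 1, 4, 1, 0, 0, 0, 0, 0]
frame2 = [0, 0, 0, 0, 0, 1, 4, 1, 0, 0]
print(estimate_shift(frame1, frame2))  # 3
```

Doing this densely, per pixel, over two dimensions and at video frame rates is exactly the workload a hardware accelerator exists for.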

ISP
I had not heard of image signal processors before. Like a VPU, an ISP is designed to accelerate the performance of algorithms on camera data. ISPs seem to focus on individual high-resolution frames, probably for classification tasks on things like signs.

PVA
Vision is clearly a strength of the Xavier. The programmable vision accelerator is an NVIDIA proprietary technology. The best documentation I could find is a patent that seems to focus on collapsing multiple loops into a single loop in order to accelerate vision calculations.

The “programmable” qualifier presumably means that firmware engineers can customize this chip to their specific needs.

DLA
The deep learning accelerator is an architecture for neural network inference that NVIDIA has open-sourced as NVDLA. It’s really cool that NVIDIA has open-sourced this technology.

As with the PVA, the DLA appears to be customizable: its design is published in Verilog, so customers can adapt the hardware to meet their needs.

Most likely one goal of the DLA is to accelerate lidar and other data streams that may not be well suited to the vision-optimized chips elsewhere on the Xavier.

That is a lot of processing power and specialization on one SoC!

Here’s NVIDIA CEO Jensen Huang touting the DRIVE AGX Xavier Developer Kit, which contains two Xavier SoCs.

Delivering Goods and People

Lots of self-driving companies are back to testing, in limited capacity, in the US. Right now, they’re typically testing by delivering goods, not people, to vulnerable communities.

As an aside, Jewel Li from AutoX mentioned on a recent Autonocast episode that Chinese self-driving companies are totally back to normal, testing at full capacity, and working in the office.

But here in the US, lockdowns are still mostly in effect and self-driving companies are trying to both do the right thing and get back out on the road by becoming delivery services.

I imagine this plays especially well for self-driving companies that were founded from the start as delivery services, not robotaxis. First and foremost in that list is Nuro, which announced a partnership with CVS to deliver prescriptions.

Interestingly, “As with all our pilots, we will begin service with our autonomous Prius fleet to make deliveries, before introducing deliveries with R2, our custom-built delivery bot.”

I wonder what Nuro’s stages are, moving from a Prius with (presumably) a safety operator, to a driverless R2 (possibly with a safety operator trailing in another vehicle?), to a driverless R2 with no Nuro staff in the vicinity. I did a quick scan of Nuro’s blog and didn’t see anything, but I haven’t followed them closely on this particular issue.

On the other end of the spectrum, robotaxis face the challenge of providing a safe vehicular environment for many, many passengers to share (albeit at different times).

Early in the COVID crisis my old boss, Oliver Cameron, who is now co-founder and CEO of Voyage, tweeted:

Oliver is so good at Twitter. Things that normal people like me would spend days and even real dollars on, Oliver puts on Twitter and gets answers.

You can read in the Twitter thread that he got a lot of suggestions. We’ll see if any of them pan out. The immediate upshot seemed to be captured by this GIF he subsequently posted.

In the medium term, a big question for robotaxi companies will be whether such measures become mandatory, or whether COVID diminishes as a real public health concern, leaving the world the way it was in mid-2019.

If COVID doesn’t go away soon, a lot of robotaxi companies might be tempted to become delivery companies.

Optimus Ride Delivering Meals at Paradise Valley Estates

Optimus Ride is testing its self-driving vehicles and delivering meals at Paradise Valley Estates, a retirement community in California. This news is over a year old, but somehow it slipped past my radar.

It makes sense that a self-driving operation would test in a retirement community — there’s a natural geofence, the speed limits are low, and there is a built-in customer base.

But until now this particular niche seemed to be approached only by Voyage, which has made it central to its development strategy.

Voyage’s deployment at The Villages in Florida is on a vastly different scale than Paradise Valley Estates: 25,000 acres in Florida vs. 80 acres in California.

But from a business perspective, the characteristics are at least broadly similar.

I wonder if eventually bidding wars will break out to serve these communities, similar to what you might see with National Park concessionaires.

Embraer’s Fascinating Q1

My latest Forbes.com article reviews Embraer’s Q1 2020 results, which were buffeted by the failed sale of its commercial aviation division to Boeing, the COVID-19 pandemic, and delinquent customers, but buoyed by surprising strength in its executive aviation and defense units.

The headline numbers were down: revenue decreased 24% compared to Q1 2019, and commercial aircraft deliveries declined 55% from the previous year. Nonetheless, Embraer heads into the rest of 2020 in a strong cash position, with $2.5 billion on its balance sheet, in large part due to strong Q4 2019 results.

Embraer is fascinating.

Indy Autonomous Challenge

The Indy Autonomous Challenge is a self-driving race series organized by the Indianapolis Motor Speedway, home of the Indy 500. The organizers have explicitly modeled the competition after the DARPA Challenges, which kicked off the self-driving car boom fifteen years ago.

The Pittsburgh Post-Gazette has a great article on the effort to get the challenge off the ground and involve university students.

The series also bears similarity to Roborace, the Formula E autonomous series that has run for the last few years.

There is so much to learn from these races!

Most of the day-to-day challenges of self-driving cars center around perception and planning. Those skills are less central (although still critical) to race car driving. On the track, control of the vehicle is key, especially when we push the machine to its limits.
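To make “the limits” concrete, here is a minimal sketch of the friction circle, the classic bound on cornering speed. The friction coefficient below is an assumed, illustrative value for racing tires, not data from any real race car:

```python
# Friction circle: on a flat corner, lateral grip bounds cornering
# speed via v**2 / r <= mu * g. Solve for the maximum steady speed.
import math

def max_corner_speed(radius_m, mu=1.4, g=9.81):
    """Highest steady speed (m/s) through a flat corner of given radius."""
    return math.sqrt(mu * g * radius_m)

v = max_corner_speed(200.0)  # an illustrative 200 m radius corner
print(f"{v:.1f} m/s ({v * 3.6:.0f} km/h)")  # 52.4 m/s (189 km/h)
```

Past that speed the tires saturate, and the tidy kinematic assumptions that street-driving planners rely on stop holding; that is exactly the regime these races will explore.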

There is a whole new set of skills to be learned at hundreds of miles an hour. Eventually, that research will make its way to street-legal autonomous vehicles.

SpaceX Automated Docking

Yesterday SpaceX launched a pair of astronauts into space. This was a huge deal, for reasons NASA captured on Twitter:

One small part of this effort was automated docking at the International Space Station. As The Verge explains:

“The vehicle is designed to autonomously approach the ISS and latch on to a standardized docking port, without any input from its human passengers…The predecessor to the capsule, SpaceX’s cargo Dragon, did not have this capability when it delivered supplies and food to the ISS. For all of those cargo missions, astronauts on board the ISS had to use the station’s robotic arm to grab hold of an approaching cargo Dragon and bring it onto a docking port. That technique is known as berthing, and it requires a lot of work from the astronauts on board the ISS. The Crew Dragon’s automated capabilities should help free up time for the astronauts to work on other things when new crews arrive.”

The SpaceX video that captures the automated docking is anticlimactic compared to the rocket launch, but what’s going on behind the scenes is plenty impressive.

State estimation and control must have been huge challenges to make this work. On the ground — in automobiles, for example — gravity and the earth reduce the complexity of motion control from three dimensions down to only two dimensions. In a car you can go left or right, forward or backward, but you can’t go straight up or down.

In space — or in the air — that third dimension makes motion control much harder.

What’s also hard, and less obvious, is state estimation. This is sometimes called just “localization” in self-driving cars, because that’s really all there is to the problem (believe me, localization alone is hard enough). But in space it becomes a real challenge to keep track of your full state in three dimensions.
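As a hypothetical sketch of how the tracked state grows, compare a planar car pose with a free-flyer’s state (position, velocity, an orientation quaternion, and body angular rates). The field layout is illustrative, not drawn from any real flight software:

```python
from dataclasses import dataclass, fields

@dataclass
class CarState:
    # Planar pose: position on the road plane plus heading.
    x: float
    y: float
    yaw: float

@dataclass
class SpacecraftState:
    # 3-D position and velocity.
    x: float
    y: float
    z: float
    vx: float
    vy: float
    vz: float
    # Orientation as a unit quaternion, plus body angular rates.
    qw: float
    qx: float
    qy: float
    qz: float
    wx: float
    wy: float
    wz: float

print(len(fields(CarState)), len(fields(SpacecraftState)))  # 3 13
```

Every extra dimension in the state is another quantity the estimator must track, and another axis along which errors can compound.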

Hats off to SpaceX!

Don’t Sleep on Mobility as a Service

I read a few posts today by and about Bestmile, an under-the-radar Swiss autonomous vehicle company. Although Bestmile was born out of an autonomous vehicle demonstration, its focus now seems to be on “orchestrating” mobility through a combination of ridesharing, micro-transit, autonomous shuttles, and robotaxis.

This is probably the closest of any current company I’ve seen to a true “mobility as a service” platform.

I’ve never interacted with anyone from Bestmile or used the service, so it’s certainly possible the reality is a lot different than the vision. But the idea of “AWS but for mobility” is exciting.

Some of the ridesharing companies have taken steps in this direction, by adding bicycles, scooters, mopeds, and even helicopters to their apps. But I haven’t really had a seamless experience where I moved from one transportation mode to another.

More to the point, what will be really exciting is when entrepreneurs can use mobility-as-a-service networks to build their own businesses, the same way entrepreneurs (and now giant corporations) use cloud computing providers.

I don’t know that Bestmile will be the winning solution — it seems early and the commodity components don’t really exist yet, especially in the critical autonomy realm. I’m excited to watch this develop, though.

Level 3: Mercedes-Benz EQS Flagship Sedan

Mercedes-Benz recently launched an online video series called “Meet Mercedes Digital.” The first episode featured CEO Ola Kallenius, who briefly teased the launch of the Mercedes-Benz EQS sedan in the second half of 2020.

“This is a special year for us. It’s the year where we launch our flagship car, the S-Class. That only comes around every so often…It’s happening in the second half of the year and we’re quite excited about it.”
Ola Kallenius

The EQS is a futuristic luxury vehicle that should be a big shot in the arm for Daimler, the parent company of Mercedes-Benz.

They could use it, too. Like most automotive companies, Daimler has been hit hard by COVID-19, with the stock price down nearly 50% this year.

The EQS will be all-electric and all-wheel drive, with a top speed of “> 200 km/h” (125 mph).

Most exciting to me, the vehicle will feature Level 3 autonomy. Mercedes doesn’t dance around this term, either. Right in the middle of the vehicle overview, they state:

“The Vision EQS show car supports the driver with highly-automated driving at Level 3, e.g. on longer motorway journeys. Thanks to the modular sensor systems, the level of autonomy can be extended up to fully-automated driving in the future.”

Well, maybe they dance around it a little by writing about the “Vision EQS show car”, instead of the 2021 production EQS. But that is a bold and refreshing statement.

Given Audi’s recent step back from Level 3 technology, due to liability concerns, it will be interesting to see whether Level 3 will be available at launch this fall.

I’m excited to get behind the wheel and take my hands off.

NVIDIA DRIVE Labs

DRIVE Labs is a really nice series of lessons about NVIDIA’s deep learning approach to autonomous vehicle development. There are about twenty short videos, each accompanied by a longer blog post and dedicated to a specific aspect of self-driving.

The videos are hosted by Neda Cvijetic, NVIDIA’s Sr. Manager of Autonomous Vehicles.

I particularly like this video on path prediction, which is an area of autonomous technology that really fascinates me.

NVIDIA is most famous for producing graphics processing units, which are useful for both video games and deep learning. As such, NVIDIA specializes in applying neural networks to autonomous vehicle challenges.

One of the best developments around self-driving cars in the last few years is how open companies have become in sharing their technology, or at least the result of what their software can do. It’s a lot of fun to watch.

Test In The City Or In The Suburbs?

In Forbes.com today, I wrote about the trade-offs between testing autonomous vehicles in urban versus suburban environments.

Chinese startup WeRide recently shared that, by its measurements, testing in Guangzhou, China, is thirty times more efficient than testing in Silicon Valley.

“The comparison between Guangzhou and Silicon Valley is pertinent to other self-driving operations, which have to consider where to test. Many self-driving car companies, including Waymo, have focused their operations on relatively favorable geofenced locations, such as Phoenix, Las Vegas, and Silicon Valley. In these areas, a combination of sunny weather, wide streets, and good infrastructure helps the programs progress.”

Lots more in the full post.