The Six NVIDIA Xavier Processors

NVIDIA’s Xavier system on a chip (SoC) for self-driving cars recently passed TÜV’s ISO 26262 functional safety assessment. Reading NVIDIA’s blog post on this achievement, I was struck by just how many specialized processors Xavier has, many of which were new to me.

Also, did you know there exists a site called Wikichip?

GPU
Of course an NVIDIA SoC will have a GPU, in this case a Volta GPU. The Volta GPU on the Xavier is optimized for inference. That means the neural network is probably going to be trained somewhere else and then loaded onto this platform when it’s ready for production deployment.
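
In practice, that deployment pattern looks something like the sketch below. This is a generic PyTorch example of my own, not NVIDIA’s actual stack (their inference tooling, such as TensorRT, would look different), and the model and weights file are placeholders: a network trained elsewhere gets its finished weights loaded onto the device and runs in inference mode.

```python
import torch
import torchvision

# Illustrative only: a network trained somewhere else, whose finished weights
# ("model.pt" is a placeholder name) are loaded for inference on the device.
model = torchvision.models.resnet18()
model.load_state_dict(torch.load("model.pt", map_location="cuda"))
model = model.to("cuda").eval()          # inference mode, no training-time behavior

frame = torch.rand(1, 3, 224, 224, device="cuda")   # a preprocessed camera frame
with torch.no_grad():                    # no gradients needed at inference time
    prediction = model(frame).argmax(dim=1)
```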

Wikichip lists this GPU at 22.6 tera-operations per second (TOPS). For comparison, Tesla’s purpose-built self-driving chip boasts 36 TOPS. I confess I don’t know enough about just how close to the redline these chips run to say whether 23 TOPS vs. 36 TOPS is basically the same thing or wildly different.

CPU
Although NVIDIA is a GPU company, the Xavier has a CPU, too: an eight-core cluster of NVIDIA’s custom Carmel ARM64 cores. I assume it’s fast.

VPU
Xavier includes a vision processing unit (VPU), which makes sense for an SoC designed to handle lots of cameras.

NVIDIA sometimes calls this a “Stereo/Optical Flow accelerator.” Optical flow is a computer vision technique for estimating how pixels move between consecutive frames, which can be used to infer quantities like the velocity of surrounding objects; the stereo half handles depth from paired cameras. I assume more generally the goal is to accelerate perception algorithms that operate on sequential frames of video.
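
As a rough illustration of what optical flow computes, here is a minimal sketch using OpenCV’s dense Farneback flow. The frame file names are hypothetical, and this is generic OpenCV running on a CPU, not anything specific to Xavier’s accelerator.

```python
import cv2

# Hypothetical file names: any two consecutive grayscale frames from a driving video.
prev_frame = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
next_frame = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow (Farneback): estimate a (dx, dy) motion vector for every pixel.
# Positional arguments: pyramid scale, levels, window size, iterations,
# polynomial neighborhood, polynomial sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

print(flow.shape)  # (height, width, 2): per-pixel motion between the two frames
```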

ISP
I had not heard of image signal processors before. Like a VPU, an ISP is designed to accelerate work on camera data. ISPs seem to focus on individual high-resolution frames, cleaning up raw sensor output so that downstream tasks, like classifying signs, have good images to work with.

PVA
Vision is clearly a strength of the Xavier. The programmable vision accelerator is an NVIDIA proprietary technology. The best documentation I could find is a patent that seems to focus on collapsing multiple loops into a single loop in order to accelerate vision calculations.
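
To make that loop-collapsing idea concrete, here is a toy sketch in Python. It is illustrative only, not taken from the patent or from NVIDIA’s implementation: a nested per-row, per-column pixel loop flattened into a single loop over every pixel, which is the kind of structure that is easier to pipeline in hardware.

```python
import numpy as np

image = np.random.rand(480, 640)
out = np.empty_like(image)
H, W = image.shape

# Nested version: one loop per image dimension.
# for y in range(H):
#     for x in range(W):
#         out[y, x] = 2.0 * image[y, x]

# Collapsed version: a single loop over every pixel,
# recovering the (y, x) coordinates from the flat index.
for i in range(H * W):
    y, x = divmod(i, W)
    out[y, x] = 2.0 * image[y, x]
```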

The “programmable” qualifier presumably means that firmware engineers can customize this chip to their specific needs.

DLA
The deep learning accelerator (DLA) is based on NVDLA, an open-source architecture NVIDIA has released for building neural network inference accelerators. It’s really cool that NVIDIA has open-sourced this technology.

As with the PVA, the idea seems to be that customers can adapt the accelerator to their needs; in the DLA’s case, the hardware design itself is published as Verilog RTL.

Most likely one goal of the DLA is to accelerate networks that run on lidar and other sensor data, which may not be a good fit for the vision-oriented accelerators elsewhere on the Xavier.

That is a lot of processing power and specialization on one SoC!

Here’s NVIDIA CEO Jensen Huang touting the DRIVE AGX Xavier Developer Kit, which contains two Xavier SoCs.

NVIDIA DRIVE Labs

DRIVE Labs is a really nice series of lessons about NVIDIA’s deep learning approach to autonomous vehicle development. They have about twenty short videos, each accompanied by a longer blog post and dedicated to a specific aspect of self-driving.

The videos are hosted by Neda Cvijetic, NVIDIA’s Sr. Manager of Autonomous Vehicles.

I particularly like this video on path prediction, which is an area of autonomous technology that really fascinates me.

NVIDIA is most famous for producing graphics processing units, which are useful for both video games and deep learning. As such, NVIDIA really specializes in applying neural networks to autonomous vehicle challenges.

One of the best developments around self-driving cars in the last few years is how open companies have become in sharing their technology, or at least the results their software can achieve. It’s a lot of fun to watch.

Udacity at NVIDIA GTC

Udacity will be at NVIDIA’s GPU Technology Conference next week in San Jose!

If you’ll be there, please stop by to say hello. There will be a car display, plus instructors and students talking about the Self-Driving Car Nanodegree Program.

Also, I’ll be presenting at 4:30pm.

There are still tickets left to the conference if you’d like to register! If you’re a Udacity student, email me (david.silver@udacity.com) for the student discount code.

GPUs Are Eating the World

Our partners at NVIDIA just announced an amazing third quarter, which cycled (see what I did there?) their stock price up 30%.

The bulk of NVIDIA’s present growth is in their bread-and-butter gaming business, where they sold $1.24 billion worth of GPUs in the third quarter alone.

Headlines then mention NVIDIA’s datacenter business, where they sell GPUs to companies like Google and Facebook, which use the GPUs not for gaming, but rather for high-powered deep learning.

GPUs employ massive parallelism to render games on computer monitors. One way to think of it is that every pixel on the monitor is doing pretty much the same thing, just with different inputs, which is how the colors change.

That massive parallelism turns out to be equally helpful for deep neural networks, in which every unit in the network is doing pretty much the same thing, just with different inputs.
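
Here is a toy sketch of that shared pattern, in NumPy purely for illustration and unrelated to NVIDIA’s actual pipelines: the same arithmetic applied independently to every pixel of a frame, and the same weighted sum computed independently by every unit in a network layer.

```python
import numpy as np

# Graphics-style work: the identical operation applied to every pixel,
# each with its own input values.
frame = np.random.rand(1080, 1920, 3)              # one RGB frame
brightened = np.clip(frame * 1.2, 0.0, 1.0)        # same math per pixel, different inputs

# Neural-network-style work: every unit in a layer computes a weighted sum
# of the same inputs, each with its own weights.
inputs = np.random.rand(512)
weights = np.random.rand(256, 512)                 # one row of weights per unit
activations = np.maximum(weights @ inputs, 0.0)    # 256 units doing the same thing in parallel
```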

The third and fastest-growing unit of NVIDIA’s business is automotive, which grew 61% year-over-year. Automotive companies around the world are pulling NVIDIA chips, particularly the DRIVE PX 2, into their autonomous vehicles. These chips enable deep learning and other parallelized computations that help the car process data in real time.

It’s a good time to be making GPUs.

Autonomous Vehicle Round-up

  1. NVIDIA is using the Gran Turismo game engine to power autonomous vehicle simulations.
  2. NVIDIA also beat their most recent revenue forecasts, thanks partly to autonomous vehicle demand.
  3. A UK startup called Immense Solutions is working on intelligence for autonomous vehicle fleets.
  4. The UK-based Transport Research Laboratory is launching a test environment in Greenwich.

All in all, it’s a good time for self-driving car enthusiasts at NVIDIA or in the UK.


Originally published at www.davidincalifornia.com on February 18, 2016.

NVIDIA Jetson TX1

NVIDIA recently announced the new Jetson TX1 unit.

They bill it as “a supercomputer on a module that’s the size of a credit card”.

NVIDIA is targeting the unit principally at autonomous vehicles, and also at medical imaging, which presumably involves a lot of similar computer vision problems.

The last few years have seen a deceleration in the mobile phone market, as phone manufacturers and app developers have had a harder time figuring out how to improve the smartphone.

I think we will see the opposite in the autonomous vehicle market, and the Jetson TX1 is an example of that. In the robotics market, there is a lot more room for improvement, and a greater number of currently binding technological constraints that can be relaxed.

As a side note, I always waffle on how to style NVIDIA, which can appear in the press as “NVIDIA”, “Nvidia”, “nVidia”, or “nVIDIA”. Since NVIDIA’s own website seems to lean toward the “NVIDIA” styling, I’ll go with that.


Originally published at www.davidincalifornia.com on November 11, 2015.