PIX Moving

I was excited to read that PIX has just raised a “pre-Series A” round of funding.

PIX is an under-the-radar electric, autonomous vehicle manufacturer in the “small” city of Guiyang, China. I put “small” in quotes because Guiyang, while small relative to other Chinese metropolises, has a population of 4,000,000 people, which would make it the second-largest city in the United States!

Several years ago, I had the opportunity to travel to Guiyang and work with PIX on a self-driving car bootcamp that they jointly hosted with Udacity. Students from all over China flew in and spent a week building and programming a self-driving car. It was pretty awesome!

PIX has pioneered a process for large-scale 3D metal printing that allows them to build a wide variety of vehicle form factors on top of their foundational electric and autonomous “skateboard” platform.

It’s fun and exciting to watch little startups, especially in out-of-the-way places, grow and compete with industry leaders. I hope to see more great things from the team in Guiyang!

Instant Gratification

My latest Forbes.com article features a discussion with Yariv Bash, CEO of the Israeli drone delivery company Flytrex, about aerial technology, drone regulation, business models, and a “future of instant gratification.”

“Flytrex’s model is to utilize existing in-store fulfillment processes and then complete delivery with a drone. Store associates can prepare a drone delivery order for pick-up, just like any other type of pick-up order. Then a Flytrex team member will take the order from the store to a drone outside the store. The drone will fly the order to the customer’s house, hover, and lower the package to the customer on a wire.”

Motional Goes Driverless

Motional, the company formerly known as nuTonomy, announced today that it has begun driverless testing in Las Vegas.

Several years ago, the company was the first to offer self-driving rideshares, with a safety operator, to the general public, in partnership with Lyft. Lots of people have used Lyft to hail a self-driving robotaxi up and down the Las Vegas Strip. In most cases, however, the human safety operator took over driving responsibility in the most complex environments, such as hotel drop-off lanes.

Motional’s move to fully driverless testing is a step removed from the Lyft pilot, although both take place in Las Vegas. The driverless testing occurs in the quieter, residential areas of the city, and does not yet involve passengers.

The driverless tests involve a safety “steward” onboard, in the passenger seat, who can stop the vehicle in an emergency. In this regard, Motional’s testing represents a kind of “intermediate” step between safety operators and a completely empty vehicle.

Another interesting aspect of the Motional test is their partnership with TUV SUD, a renowned European safety certification company. The details are vague, but TUV has conducted an audit of Motional’s safety practices and “supports” the current testing protocol.

As part of the announcement, Motional also highlighted plans to launch a public driverless service with Lyft in 2023.

“In 2023, Motional and Lyft will launch a scalable, fully-driverless, multimarket service — the largest agreement of its kind for a major ridesharing network, and a quantum leap forward for an already successful partnership.”

Mega-Charging In San Francisco

Cruise, which is emerging as San Francisco’s hometown self-driving car company, just announced plans “to build one of the largest electric vehicle charging stations in North America” in a formerly industrial, now gentrifying area of the city known as Dogpatch.

This makes a lot of sense, given Cruise’s commitment to a 100% electric fleet, and its commitment to developing, testing, and launching its service in San Francisco.

There has long been some question as to whether robotaxis will journey far away from the city limits for charging and parking during off-hours. Initially, that may not make sense, since it would entail expanding the operational area of the vehicles. Placing the charging station in San Francisco is expensive from a real estate perspective, but potentially makes the technical challenge simpler, since Cruise already plans to offer service in the city.

The San Francisco Chronicle has more detail (gated, though).

pyplot

One of the tools I’ve been using a bunch recently at Voyage is pyplot, the charting library within the larger matplotlib visualization toolkit.

This surprised me a bit when I first got to Voyage, because most of my core motion control work is in C++, whereas pyplot is (perhaps obviously) a Python library.

But it turns out that switching over to Python for visualization can make a lot of sense, because much of the time our C++ code generates flat text log data. This data can be read just as easily (easier, really) by Python as C++. And matplotlib is just such a nice tool for quick visualizations, especially inside a Jupyter notebook.

It’s pretty neat to write a dozen or two lines of code and get a really intuitive display of what’s going on in the vehicle.

Maybe “really intuitive” is a stretch, but the plot above will be vaguely familiar to anyone who had to draw basic motion diagrams in high school physics.

The blue line is velocity, which first slopes upward from zero because the car is accelerating, and then slopes downward back to zero because the car is decelerating.

The green and purple lines represent the throttle and brake values, which of course explain why the car is accelerating in the first half of the plot and decelerating in the second half.

“Really intuitive”, right?
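For the curious, here is a minimal sketch of what those dozen-or-so lines might look like. The `motion_log.csv` file and its column names are hypothetical stand-ins, not Voyage’s actual log format:

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical flat text log with named columns: time, velocity, throttle, brake.
# (Not the real log format; just the shape of the idea.)
log = np.genfromtxt("motion_log.csv", delimiter=",", names=True)

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(log["time"], log["velocity"], color="blue", label="velocity (m/s)")
ax.plot(log["time"], log["throttle"], color="green", label="throttle")
ax.plot(log["time"], log["brake"], color="purple", label="brake")
ax.set_xlabel("time (s)")
ax.legend()
plt.show()
```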

Cellular Versus Mesh

Ed Garsten just published a good Forbes.com article on the one topic (bizarrely) over which I have ever seen self-driving car engineers get really angry at each other: DSRC (“mesh”) versus cellular networks for vehicle-to-vehicle communication.

“Score a big one for C-V2X which had previously won over Ford, which said in 2019 it would start installing the technology in its vehicles during calendar year 2022. But in Europe, Volkswagen AG, the world’s largest automaker, is already building DSRC-equipped vehicles setting the tone for the rest of the continent. In China, the world’s biggest automotive market, automakers have sided with C-V2X.”

I am always amazed at how passionate engineers in this space are about this question. Still unsettled!

Literature Report: Radar That Sees Around Corners

Last year at the Computer Vision and Pattern Recognition (CVPR) conference, one of the premier academic conferences in the field, a team of researchers from Princeton and Ulm published a technique they developed to ricochet (“relay”) radar off of surfaces and around corners. This is a neat paper, and I have connections to both universities, so I saw this in a bunch of different places.

The research focuses on non-line-of-sight (NLOS) detection — detecting objects and agents that are hidden (“occluded”). People have been trying to do this for a while, with varying levels of success. There are videos on YouTube that seem to indicate Tesla Autopilot has some ability to do this on the highway, for example when an occluded vehicle two or three cars ahead hits the brakes suddenly. However, since Autopilot isn’t very transparent about its sensing and decision-making, it’s hard to reverse-engineer its specific capabilities.

The CVPR paper bounces radar waves off of various surfaces and uses the reflections to determine the position of NLOS (occluded) objects. The concept is roughly analogous to the mirrors that sometimes get put up to help drivers “see around” blind curves.

This approach seems simultaneously intuitive and really hard. Radar waves are already notoriously scattered and detection is already imprecise — trying to detect objects while also bouncing radar off an intermediate object is tricky. The three-part bounce (intermediate object — target object — intermediate object) requires a lot of energy. And filtering out the signal left by the intermediate object adds to the challenge.

How do they do it?

They use a combination of the Doppler effect and neural networks. The Doppler effect allows the radar to measure the velocity of objects. The system can segment objects based on their velocities, figuring out which objects are stationary (these will typically be visible intermediate objects) and which objects are in motion. Of course, this means that NLOS objects must have a different velocity than the relay objects.
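To make that idea concrete, here is a toy sketch of velocity-based segmentation (my illustration, not the paper’s implementation), assuming we already have a list of radar detections with radial velocities:

```python
import numpy as np

# Toy radar detections: each row is (x, y, radial_velocity, amplitude).
# All values are made up for illustration.
detections = np.array([
    [10.2,  3.1,  0.05, 0.8],   # building corner (candidate relay surface)
    [10.5,  3.0, -0.02, 0.7],   # building corner (candidate relay surface)
    [22.4, -1.8,  4.30, 0.2],   # weak return from a moving, occluded object
])

SPEED_THRESHOLD = 0.5  # m/s; detections slower than this are treated as static

is_static = np.abs(detections[:, 2]) < SPEED_THRESHOLD
relay_candidates = detections[is_static]     # likely visible, stationary relay surfaces
moving_candidates = detections[~is_static]   # candidates for NLOS objects in motion
```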

The neural network is used in a pretty typical training and inference approach.

Some of the math in this paper stretches my knowledge of the physical properties of radar, but ultimately a lot of this seems to boil down to trigonometry:

“Surfaces that are flat, relative to the wavelength λ of ≈ 5 mm for typical 76 GHz-81 GHz automotive radars, will result in a specular response. As a result, the transport function treats the relay wall as a mirror.”

The result of the math is a 4-dimensional representation of an NLOS object: x position, y position, velocity, and amplitude of the received radar wave.

The researchers used lidar to gather ground-truth data for the NLOS objects and draw bounding boxes. Then they trained a neural network to take the 4-dimensional NLOS radar encoding as input, and draw similar bounding boxes.

The paper states that their network incorporates both tracking and detection, although the tracking description is brief.

“..our approach leverages the multi-scale backbone and performs fusion at different levels. Specifically, we first perform separate input parameterization and high-level representation encoding for each frame..After the two stages of the pyramid network, we concatenate the n + 1 feature maps along the channel dimension for each stage..”

It seems like, for each frame, they store the output of the pyramid network, which is an intermediate result of the entire architecture. Then they can re-use that output for n successive frames, until there are enough new frames that it’s safe to throw away the old intermediate output.
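Sketched in code, my reading of that passage is something like the following: keep a small buffer of per-frame feature maps and concatenate the most recent n + 1 of them along the channel dimension before the detection head. The shapes and the value of n here are my assumptions, not the authors’ code:

```python
from collections import deque
import numpy as np

N = 4                                  # number of past frames to fuse (assumed)
feature_buffer = deque(maxlen=N + 1)   # holds per-frame pyramid outputs

def fuse_frame(frame_features):
    """frame_features: (channels, height, width) pyramid output for one frame."""
    feature_buffer.append(frame_features)
    # Concatenate the buffered feature maps along the channel dimension.
    return np.concatenate(list(feature_buffer), axis=0)

# Example with made-up shapes: 64-channel feature maps on a 100x100 grid.
for _ in range(6):
    fused = fuse_frame(np.random.randn(64, 100, 100))

print(fused.shape)  # (320, 100, 100) once the buffer holds N + 1 frames
```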

The paper includes an “Assessments” section that compares the performance of this approach against single-shot detection (SSD) and PointPillars, two state-of-the-art detectors for lidar point clouds. They find that their approach isn’t quite as strong, but is within a factor of 2–3, which is pretty impressive, given that they are working with reflected radar data, and not high-precision lidar data.

I’m particularly impressed that the team published a webpage with their training data and code. There’s also a neat video demo. Check it out!

Mobileye’s Big Bet On Radar

A radar scan, with side lobes, in the bottom right.

A few weeks ago, I wrote about the mapping deep-dive that Mobileye CEO Amnon Shashua presented at CES 2021.

That deep dive was one of two that Shashua included in his hour-long presentation. Today I’d like to write about the other deep dive — active sensors.

“Active sensors”, in the context of self-driving cars, typically means radar and lidar. These sensors are “active” in the sense that they emit signals (light pulses and radio waves) and then record what bounces back. By contrast, cameras (and also microphones, where applicable) are “passive” sensors, in that they merely record signals (light waves and sound waves) that already exist in the world.

Shashua pegs Mobileye’s active sensor work to the goal of producing mass-market self-driving cars by 2025. He hedges a bit and doesn’t quite call this “Level 5 autonomy”, but he’s clear about where he’s going.

To penetrate the mass-market, Shashua says Mobileye “wants to do two things: be better and be cheaper.” More specifically, Shashua shares that Mobileye is currently developing two standalone sensor subsystems: camera, and radar plus lidar. Ideally, each of these subsystems could drive the car all by itself.

By 2025, Shashua reveals, Mobileye wants to have three stand-alone subsystems: camera, radar, and lidar. This is the first time I can recall anybody talking seriously about driving a car with just radar. If it were possible (that’s a big “if”), it would be a big deal.

Radar

Most of this deep dive is, in fact, about Mobileye’s efforts to dramatically improve radar performance.

“The radar revolution has much further to go and could be a standalone system.”

I don’t fully follow Shashua’s justification for this radar effort. He says, “no matter what people tell you about how to reduce the cost of lidar, radar is 10x less expensive.”

Maybe. With the many companies entering the lidar field, a race to the bottom on prices seems plausible. But let’s grant the premise. Even though lidar might be 10x more expensive than radar, Shashua says that Mobileye still plans to build a standalone, lidar-only sensor subsystem. If lidar is so expensive, and radar is so inexpensive, and Mobileye can get radar to perform as well as lidar, then maybe Mobileye should just ditch lidar.

But they’re not ditching lidar, at least not yet.

In any case, sensor redundancy is great, and Mobileye is going to make the best radars the world has ever seen. In particular, they are going to focus on two major improvements: increasing resolution, and increasing the probability of detection.

Increasing resolution is a hardware problem. Mobileye is going to improve on the current automotive radar state-of-the-art, which is to pack 12×16 transceivers in a sensor unit. Mobileye is working on 48×48 transceivers. Resolution scales exponentially with the number of transceivers, so this would be tremendous.

Increasing the probability of detection is a software problem. Shashua calls this “software-defined imaging by radar.” Unlike with the transceivers, the explanation here is vague. Mobileye is going to transform current radar scans, which result in diffuse “side lobes” around every detected object. Mobileye’s future radar will draw bounding boxes as tight as lidar does.

My best guess as to how they will do this is “mumble, mumble, neural networks.” Mobileye is very good at neural networks.

Lidar

At the end of the deep dive, Shashua spends a few minutes on lidar.

And for that few minutes, the business angles get more interesting than the technology. There’s been a lot of back and forth about Mobileye and Luminar. A few months ago, Luminar announced a big contract from Mobileye, and then shortly after that Mobileye announced the contract would be only short-term. Over the long-term, Mobileye is developing their own lidar.

At CES, Shashua says, “2022, we are all set with Luminar.” But for 2025, they need FMCW (frequency-modulated continuous wave) lidar. That’s what they’re going to build themselves.

FMCW is the same technology that radar uses. The Doppler shift in FMCW allows radar to detect velocity instantaneously (as opposed to camera and lidar, which need to take at least two different observations, and then infer velocity by measuring the time and distance between those observations).
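As a rough illustration of why instantaneous velocity falls out of the physics, here is the standard round-trip Doppler relationship, worked through with made-up numbers at a typical automotive radar carrier frequency:

```python
C = 299_792_458.0          # speed of light, m/s
CARRIER_HZ = 77e9          # typical automotive radar carrier; chosen for the example
wavelength_m = C / CARRIER_HZ          # roughly 3.9 mm

# Round-trip Doppler shift for a target closing at 20 m/s: f_d = 2 * v / wavelength.
target_speed_mps = 20.0                # made-up example speed
doppler_shift_hz = 2 * target_speed_mps / wavelength_m
print(f"{doppler_shift_hz:.0f} Hz")    # on the order of 10 kHz

# Going the other way: one measured frequency shift maps directly back to a velocity,
# with no need for a second observation.
measured_shift_hz = doppler_shift_hz
estimated_speed_mps = measured_shift_hz * wavelength_m / 2
```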

FMCW lidar will offer the same velocity benefit as FMCW radar. FMCW also uses lower energy signals, and possibly demonstrates better performance in weather like fog and sandstorms, where lidar currently underperforms.

As Shashua himself says in the presentation, the whole lidar industry is going to FMCW. So why does Mobileye need to build their own lidar?

Well, Shashua says, FMCW is hard.

But then we get to the real answer. Intel, which purchased Mobileye several years ago, is going to use Intel fabs to “put active and passive lidar elements on a chip.”

And that’s when I start to wonder if this Luminar deal really is only short-term.

Intel is struggling in a pretty public way, squeezed on different sides by TSMC, NVIDIA, and AMD. In 2021, the Mobileye CEO (and simultaneously Intel SVP) says they’re going to build their own lidar, basically because they’re owned by Intel.

Maybe Intel will turn out to be better and cheaper at lidar production than the five lidar startups that just went public in the past year. Or maybe Intel won’t be better or cheaper, but Mobileye will have to use Intel lidar anyway, because Intel owns them. Or maybe in a few years Mobileye will quietly extend a deal for Luminar FMCW lidar.

Many Apollo Projects

My latest Forbes.com article is an exploration of several different projects that Baidu’s Apollo team is advancing, including mobility-as-a-service, smart infrastructure, vehicle-to-everything communication, and infotainment.

The Mobility-as-a-Service program in Guangzhou steps beyond previous deployments, in that it pulls together different transportation modalities and use cases into a single service.

The service comprises 40 individual vehicles, of five different types:
* FAW Hongqi Robotaxis
* Apolong Shuttles
* King Long Robobuses
* Apollocop Public Safety Robots
* “New Species Vehicles”, Apollo’s term for a robot that can perform a range of functions, from vending snacks to sweeping and disinfecting the street

Read the whole thing!

Simulation At Aurora

Aurora just explained their simulation approach in detail, on their blog. In particular, they describe how they “apply procedural generation to simulation” to “create scenarios at the massive scale needed to rapidly develop and deploy the Aurora Driver.”

Interestingly, they have hired a team with Hollywood computer-generated imagery experience to automate the construction of simulation tests. They use an approach called “procedural generation”, which allows engineers to generate thousands of specific tests by specifying only a few general parameters for a scenario.

For example, Aurora engineers might ask for lots of tests involving highway merges in the rain, within a certain speed range. Their system would then generate thousands of permutations of that type of test, using a combination of mapping and behavioral data from the real world, and simulation-specific data.
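Aurora doesn’t publish the tooling itself, but the basic pattern behind procedural generation is easy to sketch: define a few high-level knobs, then enumerate or sample every permutation. A toy version, with entirely invented parameters:

```python
import itertools
import random

# High-level knobs an engineer might specify (all invented for illustration).
scenario_template = {
    "maneuver": ["highway_merge"],
    "weather": ["light_rain", "heavy_rain"],
    "ego_speed_mps": [22, 25, 28, 31],
    "merging_gap_m": [15, 25, 40, 60],
    "lead_vehicle": ["none", "truck", "sedan"],
}

# Exhaustive enumeration of every combination of the knobs above...
all_scenarios = [
    dict(zip(scenario_template, values))
    for values in itertools.product(*scenario_template.values())
]
print(len(all_scenarios))  # 96 concrete test cases from one template

# ...or random sampling when the parameter space gets too large to enumerate.
sampled = random.sample(all_scenarios, k=10)
```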

It’s a really interesting read, and something Aurora believes in strongly. “The Aurora Driver performed 2.27 million unprotected left turns in simulation before even attempting one in the real world,” they reveal.

The timing of the blog post is interesting, coming right on the heels of the 2020 California Autonomous Vehicle Mileage and Disengagement Reports. Aurora’s numbers in those reports were really low — probably a function of the company’s focus on Pittsburgh and other areas for testing.

Nonetheless, a piece of the puzzle I’d love to see in Aurora’s blog post is a metric of how well simulation allows their vehicles to perform on the road. Ultimately, that should be the true measure of how effective a simulator is.