pyplot

One of the tools I’ve been using a bunch recently at Voyage is pyplot, the charting library within the larger matplotlib visualization toolkit.

This surprised me a bit when I first got to Voyage, because most of my core motion control work is in C++, whereas pyplot is (perhaps obviously) a Python library.

But it turns out that switching over to Python for visualization can make a lot of sense, because much of the time our C++ code generates flat text log data. This data can be read just as easily (easier, really) by Python as C++. And matplotlib is just such a nice tool for quick visualizations, especially inside a Jupyter notebook.

It’s pretty neat to write a dozen or two lines of code and get a really intuitive display of what’s going on in the vehicle.

Maybe “really intuitive” is a stretch, but the plot above will be vaguely familiar to anyone who had to draw basic motion diagrams in high school physics.

The blue line is velocity, which first slopes upward from zero because the car is accelerating, and then slopes downward back to zero because the car is decelerating.

The green and purple lines represent the throttle and brake values, which of course explain why the car is accelerating in the first half of the plot and decelerating in the second half.
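A plot like that takes very little code. Here's a minimal sketch of the kind of script I mean — the column names (t, velocity, throttle, brake) and the values are made up for illustration, not taken from a real Voyage log:

```python
# Minimal sketch: plot velocity, throttle, and brake from a flat CSV log.
# Column names and values here are hypothetical; a real log will differ.
import csv
import io

import matplotlib
matplotlib.use("Agg")  # headless backend for scripts; unnecessary in a notebook
import matplotlib.pyplot as plt

LOG = """t,velocity,throttle,brake
0.0,0.0,0.8,0.0
1.0,4.0,0.8,0.0
2.0,8.0,0.0,0.5
3.0,4.0,0.0,0.5
4.0,0.0,0.0,0.0
"""

rows = list(csv.DictReader(io.StringIO(LOG)))
t = [float(r["t"]) for r in rows]

fig, ax = plt.subplots()
ax.plot(t, [float(r["velocity"]) for r in rows], label="velocity (m/s)")
ax.plot(t, [float(r["throttle"]) for r in rows], label="throttle")
ax.plot(t, [float(r["brake"]) for r in rows], label="brake")
ax.set_xlabel("time (s)")
ax.legend()
fig.savefig("motion.png")
```

Swap the embedded string for `open("drive.log")` and you have the whole C++-logs-to-Python-plots workflow in about twenty lines.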

“Really intuitive”, right?

Cellular Versus Mesh

Ed Garsten just published a good Forbes.com article on the one topic (bizarrely) over which I have ever seen self-driving car engineers get genuinely angry at each other: DSRC (“mesh”) versus cellular networks for vehicle-to-vehicle communication.

“Score a big one for C-V2X which had previously won over Ford, which said in 2019 it would start installing the technology in its vehicles during calendar year 2022. But in Europe, Volkswagen AG, the world’s largest automaker, is already building DSRC-equipped vehicles setting the tone for the rest of the continent. In China, the world’s biggest automotive market, automakers have sided with C-V2X.”

I am always amazed at how passionate engineers in this space are about this question. Still unsettled!

Electric Vehicle Wednesday: Scooter Battery Swapping Mystery

I liked this post on Twitter today:

Then I tried to track down the story, to learn the details, and found nothing?

First I noticed that the tweet itself doesn’t link to a news story. That’s unusual, but maybe it’s more common in China. I’m not sure.

Then I searched Google (Baidu probably would be more helpful, but I don’t read or write Chinese) and found very little. SF Express does seem to be a Chinese logistics company, akin to FedEx or UPS in the US. But their English-language website doesn’t even feature a photo of a scooter, much less anything about battery swapping.

China Tower is a giant, state-owned company that builds and operates telecommunications towers. They do have an English-language website, although finding it takes a minute. The entity seems so huge that a battery-swap pilot seems like it would be small potatoes in the scheme of things, and their Media section hasn’t seen a press release in six months.

Google Search didn’t have much to show for this, but it did return a news story about a Chinese company called Immotor. Immotor even has a Crunchbase profile ($64.3M raised!) but the website URL redirects to a page that just has Chinese app links.

There are also a few stories, marginally better contextualized, about Yamaha running an e-scooter battery swap pilot in Australia.

Anyhow, the e-scooter battery swap idea seems pretty neat. If the battery really is the size of a lunch box, that would seem to make this much more viable than battery-swapping for passenger vehicles. Instead of a whole network of automated or semi-automated swapping stations, a la Better Place, the network could just host racks of batteries and let the rider handle the swapping.

I’d love to learn whether this type of system is real or not.

Update

Tayeb sent me a link to a (lengthy!) Chinese-language news story about the battery swapping program. The Google Translation of the story is awkward, but it appears that the program has been in place since 2019 and targets primarily food delivery drivers. There’s enough demand that stations running out of charged batteries is a big problem. In the future, they hope to expand the system to the general public.

Literature Report: Radar That Sees Around Corners

Last year at the Computer Vision and Pattern Recognition (CVPR) conference, one of the premier academic conferences in the field, a team of researchers from Princeton and Ulm published a technique they developed to ricochet (“relay”) radar off of surfaces and around corners. This is a neat paper, and I have connections to both universities, so I saw this in a bunch of different places.

The research focuses on non-line-of-sight (NLOS) detection — detecting objects and agents that are hidden (“occluded”). People have been trying to do this for a while, with varying levels of success. There are videos on YouTube that seem to indicate Tesla Autopilot has some ability to do this on the highway, for example when an occluded vehicle two or three cars ahead hits the brakes suddenly. However, since Autopilot isn’t very transparent about its sensing and decision-making, it’s hard to reverse-engineer its specific capabilities.

The CVPR paper bounces radar waves off of various surfaces and uses the reflections to determine the position of NLOS (occluded) objects. The concept is roughly analogous to the mirrors that sometimes get put up to help drivers “see around” blind curves.

This approach seems simultaneously intuitive and really hard. Radar waves are already notoriously scattered and detection is already imprecise — trying to detect objects while also bouncing radar off an intermediate object is tricky. The three-part bounce (intermediate object — target object — intermediate object) requires a lot of energy. And filtering out the signal left by the intermediate object adds to the challenge.

How do they do it?

They use a combination of the Doppler effect and neural networks. The Doppler effect allows the radar to measure the velocity of objects. The system can segment objects based on their velocities, figuring out which objects are stationary (these will typically be visible intermediate objects) and which objects are in motion. Of course, this means that NLOS objects must have a different velocity than the relay objects.
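The velocity segmentation step can be sketched in a few lines. This is my illustrative guess at the mechanics, not the paper's code: the 0.2 m/s threshold, the tuple format, and the crude ego-motion compensation (which assumes detections roughly straight ahead) are all assumptions.

```python
# Illustrative sketch (not the paper's code): split radar detections into
# stationary relay-surface candidates and moving NLOS candidates, using
# Doppler radial velocity. Threshold and geometry are assumptions.
def segment_by_doppler(detections, ego_speed, threshold=0.2):
    """detections: list of (x, y, radial_velocity) tuples in meters and m/s.

    A stationary object directly ahead of a moving sensor shows a radial
    velocity of about -ego_speed, so adding ego_speed back compensates
    (crudely) for the ego vehicle's own motion."""
    stationary, moving = [], []
    for x, y, v_radial in detections:
        v_world = v_radial + ego_speed  # crude ego-motion compensation
        bucket = stationary if abs(v_world) < threshold else moving
        bucket.append((x, y, v_world))
    return stationary, moving

# A wall 5 m ahead reads -10 m/s from a car doing 10 m/s -> stationary;
# a detection closing at only 6 m/s is itself moving -> NLOS candidate.
stat, mov = segment_by_doppler([(5, 0, -10.0), (8, 2, -6.0)], ego_speed=10.0)
```

Real ego-motion compensation would account for the angle between each detection and the direction of travel, but the segmentation idea is the same.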

The neural network is used in a pretty typical training and inference approach.

Some of the math in this paper stretches my knowledge of the physical properties of radar, but ultimately a lot of this seems to boil down to trigonometry:

Surfaces that are flat, relative to the wavelength λ of ≈ 5 mm for typical 76 GHz-81 GHz automotive radars, will result in a specular response. As a result, the transport function treats the relay wall as a mirror
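The mirror treatment itself is ordinary geometry: reflecting the hidden target's position across the wall plane gives the virtual position the radar actually "sees". Here's a 2-D sketch of that reflection (my illustration of the idea, with a wall described by a point and a unit normal):

```python
# Sketch of the "relay wall as mirror" idea: reflect a point across the
# wall line. All coordinates are illustrative, in meters.
def mirror_across_wall(target, wall_point, wall_normal):
    """Reflect target (x, y) across the line through wall_point whose
    unit normal is wall_normal."""
    tx, ty = target
    px, py = wall_point
    nx, ny = wall_normal
    # Signed distance from the target to the wall line
    d = (tx - px) * nx + (ty - py) * ny
    return (tx - 2 * d * nx, ty - 2 * d * ny)

# A target at (5, 1), behind a wall running along x = 2, appears to the
# radar at the mirrored position (-1, 1).
virtual = mirror_across_wall((5, 1), wall_point=(2, 0), wall_normal=(1, 0))
```

Recovering the true position is then the inverse reflection, once you know where the wall is — which is exactly why the stationary relay surfaces have to be identified first.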


The result of the math is a 4-dimensional representation of an NLOS object: x position, y position, velocity, and amplitude of the received radar wave.

The researchers used lidar to gather ground-truth data for the NLOS objects and draw bounding boxes. Then they trained a neural network to take the 4-dimensional NLOS radar encoding as input, and draw similar bounding boxes.

The paper states that their network incorporates both tracking and detection, although the tracking description is brief.

“..our approach leverages the multi-scale backbone and performs fusion at different levels. Specifically, we first perform separate input parameterization and high-level representation encoding for each frame..After the two stages of the pyramid network, we concatenate the n + 1 feature maps along the channel dimension for each stage..”

It seems like, for each frame, they store the output of the pyramid network, which is an intermediate result of the entire architecture. Then they can re-use that output for n successive frames, until there are enough new frames that it’s safe to throw away the old intermediate output.
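If that reading is right, the mechanics amount to a ring buffer of per-frame feature maps concatenated along the channel axis. Here's a toy version — shapes, n, and the class itself are invented for illustration, not taken from the paper:

```python
# A guess at the frame-fusion mechanics: keep the n + 1 most recent
# per-frame feature maps and concatenate them along the channel
# dimension. Shapes are illustrative.
from collections import deque

import numpy as np

class FrameFeatureFusion:
    def __init__(self, n=2, channels=8, height=4, width=4):
        # deque with maxlen evicts the oldest feature map automatically
        self.buffer = deque(maxlen=n + 1)
        self.shape = (channels, height, width)

    def add_frame(self, feature_map):
        assert feature_map.shape == self.shape
        self.buffer.append(feature_map)

    def fused(self):
        # Concatenate along the channel axis, oldest frame first
        return np.concatenate(list(self.buffer), axis=0)

fusion = FrameFeatureFusion(n=2)
for _ in range(5):  # frames older than n + 1 steps are discarded
    fusion.add_frame(np.zeros((8, 4, 4)))
print(fusion.fused().shape)  # (24, 4, 4): (n + 1) * 8 channels
```

The appeal is that the expensive per-frame encoding runs once per frame, while the fusion step is just a cheap concatenation over cached outputs.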

The paper includes an “Assessments” section that compares the performance of this approach against single-shot detection (SSD) and PointPillars, two state-of-the-art detectors for lidar point clouds. They find that their approach isn’t quite as strong, but is within a factor of 2–3, which is pretty impressive given that they are working with reflected radar data rather than high-precision lidar data.

I’m particularly impressed that the team published a webpage with their training data and code. There’s also a neat video demo. Check it out!

Mobileye’s Big Bet On Radar

A radar scan, with side lobes, in the bottom right.

A few weeks ago, I wrote about the mapping deep-dive that Mobileye CEO Amnon Shashua presented at CES 2021.

That deep dive was one of two that Shashua included in his hour-long presentation. Today I’d like to write about the other deep dive: active sensors.

“Active sensors”, in the context of self-driving cars, typically means radar and lidar. These sensors are “active” in the sense that they emit signals (light pulses and radio waves) and then record what bounces back. By contrast, cameras (and microphones, where applicable) are “passive” sensors, in that they merely record signals (light waves and sound waves) that already exist in the world.

Shashua pegs Mobileye’s active sensor work to the goal of producing mass-market self-driving cars by 2025. He hedges a bit and doesn’t call this quite “Level 5 autonomy”, but he’s clear about where he’s going.

To penetrate the mass market, Shashua says Mobileye “wants to do two things: be better and be cheaper.” More specifically, Shashua shares that Mobileye is currently developing two standalone sensor subsystems: camera, and radar plus lidar. Ideally, each of these subsystems could drive the car all by itself.

By 2025, Shashua reveals that Mobileye wants to have three stand-alone subsystems: camera, radar, and lidar. This is the first time I can recall anybody talking seriously about driving a car just with radar. If it were possible (that’s a big “if”), it would be a big deal.

Radar

Most of this deep dive is, in fact, about Mobileye’s efforts to dramatically improve radar performance.

“The radar revolution has much further to go and could be a standalone system.”

I don’t fully follow Shashua’s justification for this radar effort. He says, “no matter what people tell you about how to reduce the cost of lidar, radar is 10x less expensive.”

Maybe. With the many companies entering the lidar field, a race to the bottom on prices seems plausible. But let’s grant the premise. Even though lidar might be 10x more expensive than radar, Shashua says that Mobileye still plans to build a standalone, lidar-only sensor subsystem. If lidar is so expensive, and radar is so inexpensive, and Mobileye can get radar to perform as well as lidar, then maybe Mobileye should just ditch lidar.

But they’re not ditching lidar, at least not yet.

In any case, sensor redundancy is great, and Mobileye is going to make the best radars the world has ever seen. In particular, they are going to focus on two major improvements: increasing resolution, and increasing the probability of detection.

Increasing resolution is a hardware problem. Mobileye is going to improve on the current automotive radar state of the art, which is to pack 12×16 transceivers in a sensor unit. Mobileye is working on 48×48 transceivers. Resolution scales with the product of the transmit and receive channels, so this would be tremendous.
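The back-of-envelope arithmetic, interpreting those figures as transmitter × receiver counts in a MIMO array (my assumption about what the 12×16 and 48×48 numbers mean):

```python
# A MIMO radar with t transmitters and r receivers forms a virtual
# array of t * r channels; angular resolution improves with that product.
def virtual_channels(tx, rx):
    return tx * rx

current = virtual_channels(12, 16)  # today's state of the art: 192
planned = virtual_channels(48, 48)  # Mobileye's target: 2304
print(planned / current)            # 12.0x more virtual channels
```

So the jump from 12×16 to 48×48 is a 12x improvement in virtual channel count, even though each edge of the array only grows 3–4x.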

Increasing the probability of detection is a software problem. Shashua calls this “software-defined imaging by radar.” Unlike with the transceivers, the explanation here is vague. Mobileye is going to transform current radar scans, which result in diffuse “side lobes” around every detected object. Mobileye’s future radar will draw bounding boxes as tight as lidar does.

My best guess as to how they will do this is “mumble, mumble, neural networks.” Mobileye is very good at neural networks.

Lidar

At the end of the deep dive, Shashua spends a few minutes on lidar.

And for that few minutes, the business angles get more interesting than the technology. There’s been a lot of back and forth about Mobileye and Luminar. A few months ago, Luminar announced a big contract from Mobileye, and then shortly after that Mobileye announced the contract would be only short-term. Over the long-term, Mobileye is developing their own lidar.

At CES, Shashua says, “2022, we are all set with Luminar.” But for 2025, they need FMCW (frequency-modulated continuous wave) lidar. That’s what they’re going to build themselves.

FMCW is the same technology that radar uses. The Doppler shift in FMCW allows radar to detect velocity instantaneously (as opposed to camera and lidar, which need to take at least two different observations, and then infer velocity by measuring the time and distance between those observations).
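The Doppler relationship behind that instantaneous measurement is simple: radial velocity is the Doppler shift times the wavelength, divided by two for the round trip. A quick illustration with typical automotive radar numbers (the 5 kHz shift is just an example value):

```python
# Radial velocity from a Doppler shift: v = f_d * wavelength / 2
# (the factor of 2 accounts for the out-and-back round trip).
def radial_velocity(doppler_shift_hz, wavelength_m):
    return doppler_shift_hz * wavelength_m / 2.0

C = 3.0e8               # speed of light, m/s
wavelength = C / 77e9   # ~3.9 mm at a typical 77 GHz automotive radar
v = radial_velocity(5000.0, wavelength)  # example: a 5 kHz shift
print(round(v, 2))      # ~9.74 m/s of closing speed
```

FMCW lidar gets the same single-shot velocity reading, just at optical rather than radio wavelengths.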

FMCW lidar will offer the same velocity benefit as FMCW radar. FMCW also uses lower energy signals, and possibly demonstrates better performance in weather like fog and sandstorms, where lidar currently underperforms.

As Shashua himself says in the presentation, the whole lidar industry is going to FMCW. So why does Mobileye need to build their own lidar?

Well, Shashua says, FMCW is hard.

But then we get to the real answer. Intel, which purchased Mobileye several years ago, is going to use Intel fabs to “put active and passive lidar elements on a chip.”

And that’s when I start to wonder if this Luminar deal really is only short-term.

Intel is struggling in a pretty public way, squeezed on different sides by TSMC, NVIDIA, and AMD. In 2021, the Mobileye CEO (and simultaneously Intel SVP) says they’re going to build their own lidar, basically because they’re owned by Intel.

Maybe Intel will turn out to be better and cheaper at lidar production than the five lidar startups that just went public in the past year. Or maybe Intel won’t be better or cheaper, but Mobileye will have to use Intel lidar anyway, because Intel owns them. Or maybe in a few years Mobileye will quietly extend a deal for Luminar FMCW lidar.

Many Apollo Projects

My latest Forbes.com article is an exploration of several different projects that Baidu’s Apollo team is advancing, including mobility-as-a-service, smart infrastructure, vehicle-to-everything communication, and infotainment.

The Mobility-as-a-Service program in Guangzhou steps beyond previous deployments, in that it pulls together different transportation modalities and use cases into a single service.

The service comprises 40 individual vehicles, of five different types:
* FAW Hongqi Robotaxis
* Apolong Shuttles
* King Long Robobuses
* Apollocop Public Safety Robots
* “New Species Vehicles”, Apollo’s term for a robot that can perform a range of functions, from vending snacks to sweeping and disinfecting the street

Read the whole thing!

Simulation At Aurora

Aurora just explained their simulation approach in detail, on their blog. In particular, they write that by “apply[ing] procedural generation to simulation, we can create scenarios at the massive scale needed to rapidly develop and deploy the Aurora Driver.”

Interestingly, they have hired a team with Hollywood computer-generated imagery experience to automate the construction of simulation tests. They use an approach called “procedural generation”, which allows engineers to generate thousands of specific tests by specifying only a few general parameters for a scenario.

For example, Aurora engineers might ask for lots of tests involving highway merges in the rain, within a certain speed range. Their system would then generate thousands of permutations of that type of test, using a combination of mapping and behavioral data from the real world, and simulation-specific data.
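At its simplest, that expansion is a Cartesian product over parameter ranges. A toy version of the idea (the parameter names are invented for illustration; Aurora's actual system is obviously far richer, layering in real-world map and behavioral data):

```python
# Toy procedural generation: a few high-level parameter ranges expand
# into many concrete test scenarios. Parameter names are invented.
from itertools import product

def generate_scenarios(speeds_mph, rain_rates_mm_hr, merge_gaps_m):
    return [
        {"speed_mph": s, "rain_rate_mm_hr": r, "merge_gap_m": g}
        for s, r, g in product(speeds_mph, rain_rates_mm_hr, merge_gaps_m)
    ]

scenarios = generate_scenarios(
    speeds_mph=range(45, 75, 5),        # 6 values
    rain_rates_mm_hr=[0.0, 2.5, 7.6],   # 3 values
    merge_gaps_m=[10, 20, 30, 40],      # 4 values
)
print(len(scenarios))  # 72 concrete tests from three parameter ranges
```

Even this trivial sketch shows the leverage: a handful of ranges multiplies out into dozens of tests, and realistic parameter spaces multiply into thousands.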

It’s a really interesting read, and something Aurora believes in strongly. “The Aurora Driver performed 2.27 million unprotected left turns in simulation before even attempting one in the real world,” they reveal.

The timing of the blog post is interesting, coming right on the heels of the 2020 California Autonomous Vehicle Mileage and Disengagement Reports. Aurora’s numbers in those reports were really low — probably a function of the company’s focus on Pittsburgh and other areas for testing.

Nonetheless, a piece of the puzzle I’d love to see in Aurora’s blog post is a metric of how well simulation allows their vehicles to perform on the road. Ultimately, that should be the true measure of how effective a simulator is.

Nobody Will Share An Alternative To Mileage & Disengagement Reports

Yesterday the California DMV published the 2020 Autonomous Vehicle Disengagement and Mileage Reports. The DMV grants permits to organizations that want to test autonomous vehicles on public roads in the state. Any organizations that do test on public roads must file reports about how many miles they drove, and how frequently their safety operators had to “disengage” the autonomous driving system in order to manually control the vehicle.

Headline numbers are that total autonomous miles summed from all companies actually decreased from 2019 to 2020, presumably due to the pandemic (also the summer wildfires). Cruise and Waymo recorded far and away the most miles, with Pony.ai a distant third, and then an asymptotic trend toward 0 miles.

The Miles Per Disengagement chart looks similar, although it includes a few surprises. For example, AutoX drove only ~41,000 autonomous miles in California during 2020, but they also only disengaged twice.
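The metric itself is just division, which is worth keeping in mind when comparing companies with wildly different mileage totals. Using the approximate AutoX figures above:

```python
# Miles per disengagement, the California DMV report's headline metric.
def miles_per_disengagement(miles, disengagements):
    # A company reporting zero disengagements has no finite ratio
    return miles / disengagements if disengagements else float("inf")

# AutoX's approximate 2020 numbers, per the report
print(round(miles_per_disengagement(41_000, 2)))  # 20500
```

With a denominator of 2, a single extra disengagement would have cut AutoX's ratio by a third, which is part of why small-sample entries in these reports deserve skepticism.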

These numbers reflect only autonomous driving in California, on public roads, during 2020, which is a lot of caveats. That certainly explains why Waymo has so few miles. A few years ago they boasted of achieving 1 million autonomous miles per month, much of that in California. Now they’ve moved most of their driving miles to Arizona.

Perhaps the caveats also explain some of the big names who have major engineering teams in Silicon Valley but don’t appear in the report: Argo, Ford, Uber ATG (now Aurora, but how that merger is reflected here is unclear), and Baidu, for starters.

Tesla’s absence from the report is its own annual, recurring story. None of the standalone Class 8 trucking companies, like TuSimple, Embark, or Kodiak, appear on the list. I’m not sure if that’s because trucks go on a different list or they genuinely did 0 miles in California last year.


The reports themselves are only part of the story, though. For me, a fascinating angle is both how much attention everyone pays to the reports, and also how dismissive everyone is.

And yet, nobody seems willing to share any other numbers.

Waymo offers up their 48-page Safety Report as an alternative evaluation tool, and it is a great report, and it is more than any other company in the industry puts forward. But this report is entirely qualitative. There are no metrics in the report, and no real indication of how fast Waymo is progressing, or why they feel confident pulling safety operators from some streets in Arizona, but not others.

Other companies provide not even that much.

The big question, then, is what are the alternatives to these disengagement reports, will anybody be willing to share them, and will anybody demand to see them?

Electric Vehicle Monday: Utilization

A team of economists contends that electric vehicles travel about half as much as their internal combustion engine (ICE) counterparts, about 5000 miles for EVs compared to 10,000 for ICEs. The researchers speculate that this information supports the hypothesis that EVs and ICEs are complements, rather than substitutes.

That is, EVs may not take over the world, but multi-car households may choose to own both an EV and an ICE, and utilize them for different types of trips.

The effort that went into the study is impressive — the team linked data from the California utility PG&E with data from the California DMV, in order to figure out which households owned EVs, how much more electricity they purchased, and thus how many miles they likely drove. There seem to be some careful corrections, for example, the data accounts for solar panel ownership and the resulting drop in demand from the electrical grid.
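The core inference, stripped of all the careful corrections, is: extra household electricity attributed to charging, divided by the EV's energy consumption per mile, gives estimated mileage. A sketch with illustrative numbers (the efficiency and consumption figures are my assumptions, not the study's):

```python
# Sketch of the study's core inference: estimate EV miles from extra
# household electricity demand. All constants are illustrative.
def estimated_ev_miles(extra_kwh_per_year,
                       kwh_per_mile=0.30,        # typical EV consumption
                       charging_efficiency=0.90):  # wall-to-battery losses
    energy_into_battery = extra_kwh_per_year * charging_efficiency
    return energy_into_battery / kwh_per_mile

miles = estimated_ev_miles(1_700)  # ~1,700 kWh/year of extra demand
print(round(miles))                # ~5,100 miles/year
```

Under these assumptions, roughly 1,700 kWh of extra annual demand lands near the study's ~5,000-mile figure, which gives a feel for how sensitive the estimate is to the consumption and efficiency constants chosen.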

The results of the study seem plausible and maybe even intuitive — the type of households that purchase EVs seem plausibly likely to also be low-mileage households, generally, and also multi-car households.

Perhaps because of that plausibility, I’m hesitant to conclude too much from the study, other than electrification is still relatively new and limited technology. Presumably, as electrification expands, the types of households that purchase EVs will come to more closely resemble the median household. At the same time, EV range seems to be ever-increasing.

For now, EVs are still largely a status and luxury good for consumers who can afford the cost and other limitations. But they seem to be moving steadily toward the mass market.

Big News Day For Ford

Ford reported Q4 earnings this afternoon, posting a $2.8 billion loss, or a $1.3 billion gain, depending on how we count “special items.” That’s a $4.1 billion swing.

The bigger news seemed to be Ford’s 2021 outlook. CFO John Lawler estimated Ford would book an annual pre-tax profit of $8 billion to $9 billion in the coming year. That would be a great year for Ford, and potentially its largest profit in five years. Although given that this past quarter’s swing due to “special items” was $4.1 billion, there’s a lot of variability here.

Ford also announced a big investment in electric and autonomous vehicles. The headline number is $29 billion through 2025, of which $22 billion will go to electric vehicles and $7 billion will go to autonomous vehicles. Various tweeters explain that some of this headline number includes expenditures from previous years, so it’s not clear how much of it is new money.

Meanwhile, The Wall Street Journal reports that Ford may under-perform expectations by a billion dollars or two, because of a global shortage of semiconductor chips. The shortage is already hitting both GM and Ford. Ford, in particular, is set to cut shifts at its F-150 plants in the coming weeks, due to the lack of chips. Since the F-150 is Ford’s profit engine, that’s expensive.