One of the tools I've been using a bunch recently at Voyage is pyplot, the charting library within the larger matplotlib visualization toolkit.
This surprised me a bit when I first got to Voyage, because most of my core motion control work is in C++, whereas pyplot is (perhaps obviously) a Python library.
But it turns out that switching over to Python for visualization can make a lot of sense, because much of the time our C++ code generates flat text log data. This data can be read just as easily (easier, really) by Python as by C++. And matplotlib is just such a nice tool for quick visualizations, especially inside a Jupyter notebook.
It's pretty neat to write a dozen or two lines of code and get a really intuitive display of what's going on in the vehicle.
Maybe "really intuitive" is a stretch, but the plot above will be vaguely familiar to anyone who had to draw basic motion diagrams in high school physics.
The blue line is velocity, which first slopes upward from zero because the car is accelerating, and then slopes downward back to zero because the car is decelerating.
The green and purple lines represent the throttle and brake values, which of course explain why the car is accelerating in the first half of the plot and decelerating in the second half.
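To make that concrete, here's a minimal sketch of the kind of dozen-or-two-line pyplot script I mean. The log format here is made up for illustration; our actual logs differ.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical flat-text log: time, velocity, throttle, brake on each line.
log_lines = [
    "0.0 0.0 0.8 0.0",
    "1.0 4.0 0.8 0.0",
    "2.0 8.0 0.8 0.0",
    "3.0 8.0 0.0 0.0",
    "4.0 4.0 0.0 0.6",
    "5.0 0.0 0.0 0.6",
]

# Parse the whitespace-delimited log into a (rows, 4) float array,
# then unpack the columns.
data = np.array([line.split() for line in log_lines], dtype=float)
t, v, throttle, brake = data.T

plt.plot(t, v, color="blue", label="velocity (m/s)")
plt.plot(t, throttle, color="green", label="throttle")
plt.plot(t, brake, color="purple", label="brake")
plt.xlabel("time (s)")
plt.legend()
plt.show()
```

In a Jupyter notebook, the chart renders inline right below the cell, which is what makes this workflow so quick.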
Ed Garsten just published a good Forbes.com article on the only topic (bizarrely) over which I have ever seen self-driving car engineers get really angry at each other: DSRC ("mesh") versus cellular networks for vehicle-to-vehicle communication.
"Score a big one for C-V2X which had previously won over Ford, which said in 2019 it would start installing the technology in its vehicles during calendar year 2022. But in Europe, Volkswagen AG, the world's largest automaker, is already building DSRC-equipped vehicles setting the tone for the rest of the continent. In China, the world's biggest automotive market, automakers have sided with C-V2X."
I am always amazed at how passionate engineers in this space are about this question. Still unsettled!
#China 2nd largest courier, SF Express, to buy 30K e-bike frames by 2020 end as it starts using China Tower battery swap service. SF to stop buying bikes with batteries, local media reports. Currently, CN Tower has >10K #battery cabinets in ca 100 cities that riders use for swap. pic.twitter.com/nIq5DCRN6A
Then I tried to track down the story, to learn the details, and found…nothing?
First I noticed that the tweet itself doesn't link to a news story. That's unusual, but maybe that's more common in China. I'm not sure.
Then I searched Google (Baidu probably would be more helpful, but I don't read or write Chinese) and found very little. SF Express does seem to be a Chinese logistics company, akin to FedEx or UPS in the US. But their English-language website doesn't even feature a photo of a scooter, much less anything about battery swapping.
China Tower is a giant, state-owned electric utility. They do have an English-language website, although finding it takes a minute. The entity seems so huge that a battery-swap pilot would be small potatoes in the scheme of things, and their Media section hasn't seen a press release in six months.
Google Search didn't have much to show for this, but it did return a news story about a Chinese company called Immotor. Immotor even has a Crunchbase profile ($64.3M raised!), but the website URL redirects to a page that just has Chinese app links.
There are also a few stories, marginally better contextualized, about Yamaha running an e-scooter battery swap pilot in Australia.
Anyhow, the e-scooter battery swap idea seems pretty neat. If the battery really is the size of a lunch box, that would seem to make this much more viable than battery-swapping for passenger vehicles. Instead of a whole network of automated or semi-automated swapping stations, a la Better Place, the network could just host racks of batteries and let the rider handle the swapping.
I'd love to learn whether this type of system is real or not.
Update
Tayeb sent me a link to a (lengthy!) Chinese-language news story about the battery swapping program. The Google translation of the story is awkward, but it appears that the program has been in place since 2019 and targets primarily food delivery drivers. There's enough demand that stations running out of charged batteries is a big problem. In the future, they hope to expand the system to the general public.
Last year at the Computer Vision and Pattern Recognition (CVPR) conference, one of the premier academic conferences in the field, a team of researchers from Princeton and Ulm published a technique they developed to ricochet ("relay") radar off of surfaces and around corners. This is a neat paper, and I have connections to both universities, so I saw this in a bunch of different places.
The research focuses on non-line-of-sight (NLOS) detection: detecting objects and agents that are hidden ("occluded"). People have been trying to do this for a while, with varying levels of success. There are videos on YouTube that seem to indicate Tesla Autopilot has some ability to do this on the highway, for example when an occluded vehicle two or three cars ahead hits the brakes suddenly. However, since Autopilot isn't very transparent about its sensing and decision-making, it's hard to reverse-engineer its specific capabilities.
The CVPR paper bounces radar waves off of various surfaces and uses the reflections to determine the position of NLOS (occluded) objects. The concept is roughly analogous to the mirrors that sometimes get put up to help drivers "see around" blind curves.
This approach seems simultaneously intuitive and really hard. Radar waves are already notoriously scattered and detection is already imprecise; trying to detect objects while also bouncing radar off an intermediate object is tricky. The three-part bounce (intermediate object to target object and back to the intermediate object) requires a lot of energy. And filtering out the signal left by the intermediate object adds to the challenge.
How do they do it?
They use a combination of the Doppler effect and neural networks. The Doppler effect allows the radar to measure the velocity of objects. The system can segment objects based on their velocities, figuring out which objects are stationary (these will typically be visible intermediate objects) and which objects are in motion. Of course, this means that NLOS objects must have a different velocity than the relay objects.
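Here is a minimal sketch of that velocity-based segmentation idea, with invented detections and a deliberately crude ego-motion compensation. This is my illustration of the concept, not the paper's code; the threshold and the sign convention for radial velocity are assumptions.

```python
import numpy as np

def split_static_moving(detections, ego_speed, threshold=0.5):
    """Split radar detections into likely-static relay surfaces and moving targets.

    detections: array-like of shape (N, 3), columns (x, y, radial_velocity),
    with radial velocity negative when closing on the ego vehicle.
    A static object's measured radial velocity is explained entirely by ego
    motion, so after compensation its residual velocity is near zero.
    """
    detections = np.asarray(detections, dtype=float)
    # Crude ego-motion compensation: add back the ego speed projected
    # onto the line of sight to each detection.
    angles = np.arctan2(detections[:, 1], detections[:, 0])
    residual = detections[:, 2] + ego_speed * np.cos(angles)
    moving_mask = np.abs(residual) > threshold
    return detections[~moving_mask], detections[moving_mask]

static, moving = split_static_moving(
    [[10.0, 0.0, -5.0],   # wall dead ahead, closing at exactly ego speed
     [8.0, 2.0, 3.0]],    # large residual velocity: a moving target
    ego_speed=5.0,
)
```

The first detection's closing speed is fully explained by the ego vehicle's own motion, so it lands in the static (relay surface) bucket; the second has a large residual and is flagged as moving.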
The neural network is used in a pretty typical training and inference approach.
Some of the math in this paper stretches my knowledge of the physical properties of radar, but ultimately a lot of this seems to boil down to trigonometry:
Surfaces that are flat, relative to the wavelength λ of ≈ 5 mm for typical 76 GHz–81 GHz automotive radars, will result in a specular response. As a result, the transport function treats the relay wall as a mirror…
The result of the math is a 4-dimensional representation of an NLOS object: x position, y position, velocity, and amplitude of the received radar wave.
The researchers used lidar to gather ground-truth data for the NLOS objects and draw bounding boxes. Then they trained a neural network to take the 4-dimensional NLOS radar encoding as input, and draw similar bounding boxes.
The paper states that their network incorporates both tracking and detection, although the tracking description is brief.
"…our approach leverages the multi-scale backbone and performs fusion at different levels. Specifically, we first perform separate input parameterization and high-level representation encoding for each frame… After the two stages of the pyramid network, we concatenate the n + 1 feature maps along the channel dimension for each stage…"
It seems like, for each frame, they store the output of the pyramid network, which is an intermediate result of the entire architecture. Then they can re-use that output for n successive frames, until there are enough new frames that it's safe to throw away the old intermediate output.
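That reuse scheme, as I read it, might look something like the following sketch. The cache and the stand-in encoder here are my own invention to illustrate the idea, not the paper's architecture.

```python
from collections import deque

class FeatureCache:
    """Reuse per-frame backbone features across a sliding window of n + 1 frames.

    The expensive per-frame encoding is computed exactly once per frame and
    cached; fusion then concatenates the cached features for the most recent
    n + 1 frames. Old features fall out of the window automatically.
    """
    def __init__(self, n, encode_fn):
        self.window = deque(maxlen=n + 1)  # holds the last n + 1 feature maps
        self.encode_fn = encode_fn         # expensive per-frame backbone

    def push(self, frame):
        self.window.append(self.encode_fn(frame))  # encode each frame once
        return list(self.window)  # features ready for channel-wise fusion

# Stand-in "encoder" so the sketch runs: multiply the frame by 10.
cache = FeatureCache(n=2, encode_fn=lambda f: f * 10)
cache.push(1)
cache.push(2)
features = cache.push(3)  # frames 1..3, each encoded exactly once
```

The payoff is that inference over a sliding window of frames costs one backbone pass per new frame, rather than n + 1 passes.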
The paper includes an "Assessments" section that compares the performance of this approach against single-shot detection (SSD) and PointPillars, two state-of-the-art detectors for lidar point clouds. They find that their approach isn't quite as strong, but is within a factor of 2–3, which is pretty impressive, given that they are working with reflected radar data, and not high-precision lidar data.
I'm particularly impressed that the team published a webpage with their training data and code. There's also a neat video demo. Check it out!
A radar scan, with side lobes, in the bottom right.
A few weeks ago, I wrote about the mapping deep-dive that Mobileye CEO Amnon Shashua presented at CES 2021.
That deep dive was one of two that Shashua included in his hour-long presentation. Today I'd like to write about the other deep dive: active sensors.
"Active sensors", in the context of self-driving cars, typically means radar and lidar. These sensors are "active" in the sense that they emit signals (light pulses and waves) and then record what bounces back. By contrast, cameras (and also audio, where applicable) are "passive" sensors, in that they merely record signals (light waves and sound waves) that already exist in the world.
Shashua pegs Mobileye's active sensor work to the goal of producing mass-market self-driving cars by 2025. He hedges a bit and doesn't quite call this "Level 5 autonomy", but he's clear about where he's going.
To penetrate the mass market, Shashua says Mobileye "wants to do two things: be better and be cheaper." More specifically, Shashua shares that Mobileye is currently developing two standalone sensor subsystems: camera, and radar plus lidar. Ideally, each of these subsystems could drive the car all by itself.
By 2025, Shashua reveals that Mobileye wants to have three standalone subsystems: camera, radar, and lidar. This is the first time I can recall anybody talking seriously about driving a car with just radar. If it were possible (that's a big "if"), it would be a big deal.
Radar
Most of this deep dive is, in fact, about Mobileyeâs efforts to dramatically improve radar performance.
"The radar revolution has much further to go and could be a standalone system."
I don't fully follow Shashua's justification for this radar effort. He says, "no matter what people tell you about how to reduce the cost of lidar, radar is 10x less expensive."
Maybe. With the many companies entering the lidar field, a race to the bottom on prices seems plausible. But let's grant the premise. Even though lidar might be 10x more expensive than radar, Shashua says that Mobileye still plans to build a standalone, lidar-only sensor subsystem. If lidar is so expensive, and radar is so inexpensive, and Mobileye can get radar to perform as well as lidar, then maybe Mobileye should just ditch lidar.
But theyâre not ditching lidar, at least not yet.
In any case, sensor redundancy is great, and Mobileye is going to make the best radars the world has ever seen. In particular, they are going to focus on two major improvements: increasing resolution, and increasing the probability of detection.
Increasing resolution is a hardware problem. Mobileye is going to improve on the current automotive radar state of the art, which is to pack 12×16 transceivers in a sensor unit. Mobileye is working on 48×48 transceivers. Resolution scales rapidly with the number of transceivers (a MIMO radar's virtual aperture grows with the product of transmit and receive channels), so this would be tremendous.
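Back-of-the-envelope, assuming the resolution gain tracks the MIMO virtual-array channel count, which is the product of transmit and receive channels. The exact angular-resolution gain depends on array geometry, which I'm glossing over here.

```python
# Virtual-array channel counts for the two configurations mentioned above.
# (Interpreting "12x16" and "48x48" as transmit x receive channel counts,
# which is an assumption on my part.)
current_channels = 12 * 16   # today's state of the art: 192 virtual channels
mobileye_channels = 48 * 48  # Mobileye's target: 2304 virtual channels

gain = mobileye_channels / current_channels  # 12x more virtual channels
```

A 12x jump in virtual channels would indeed be a dramatic improvement, even if the realized angular resolution improves by a smaller factor.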
Increasing the probability of detection is a software problem. Shashua calls this "software-defined imaging by radar." Unlike with the transceivers, the explanation here is vague. Mobileye is going to transform current radar scans, which result in diffuse "side lobes" around every detected object. Mobileye's future radar will draw bounding boxes as tight as lidar does.
My best guess as to how they will do this is "mumble, mumble, neural networks." Mobileye is very good at neural networks.
Lidar
At the end of the deep dive, Shashua spends a few minutes on lidar.
And for those few minutes, the business angles get more interesting than the technology. There's been a lot of back and forth about Mobileye and Luminar. A few months ago, Luminar announced a big contract from Mobileye, and then shortly after that Mobileye announced the contract would be only short-term. Over the long term, Mobileye is developing their own lidar.
At CES, Shashua says, "2022, we are all set with Luminar." But for 2025, they need FMCW (frequency-modulated continuous wave) lidar. That's what they're going to build themselves.
FMCW is the same technology that radar uses. The Doppler shift in FMCW allows radar to detect velocity instantaneously (as opposed to camera and lidar, which need to take at least two different observations, and then infer velocity by measuring the time and distance between those observations).
FMCW lidar will offer the same velocity benefit as FMCW radar. FMCW also uses lower energy signals, and possibly demonstrates better performance in weather like fog and sandstorms, where lidar currently underperforms.
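The instantaneous-velocity benefit comes straight from the Doppler relation v = f_d · λ / 2, where the factor of 2 accounts for the wave traveling out and back. A quick sanity check with illustrative numbers (these are not figures from the presentation):

```python
# Radial velocity from a measured Doppler shift, for a typical
# 77 GHz automotive radar carrier. All numbers are illustrative.
c = 3.0e8              # speed of light, m/s
f_carrier = 77e9       # carrier frequency, Hz
wavelength = c / f_carrier       # ~3.9 mm, consistent with the ~5 mm
                                 # figure quoted earlier for 76-81 GHz radar
f_doppler = 5000.0     # measured Doppler shift, Hz

v_radial = f_doppler * wavelength / 2  # closing speed, m/s (~9.7 m/s)
```

The same relation holds for FMCW lidar, just with an optical carrier and correspondingly tiny wavelength, which is why a single FMCW measurement yields velocity directly.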
As Shashua himself says in the presentation, the whole lidar industry is going to FMCW. So why does Mobileye need to build their own lidar?
Well, Shashua says, FMCW is hard.
But then we get to the real answer. Intel, which purchased Mobileye several years ago, is going to use Intel fabs to "put active and passive lidar elements on a chip."
And that's when I start to wonder if this Luminar deal really is only short-term.
Intel is struggling in a pretty public way, squeezed on different sides by TSMC, NVIDIA, and AMD. In 2021, the Mobileye CEO (and simultaneously Intel SVP) says they're going to build their own lidar, basically because they're owned by Intel.
Maybe Intel will turn out to be better and cheaper at lidar production than the five lidar startups that just went public in the past year. Or maybe Intel won't be better or cheaper, but Mobileye will have to use Intel lidar anyway, because Intel owns them. Or maybe in a few years Mobileye will quietly extend a deal for Luminar FMCW lidar.
The Mobility-as-a-Service program in Guangzhou steps beyond previous deployments, in that it pulls together different transportation modalities and use cases into a single service.
The service comprises 40 individual vehicles, of five different types:
* FAW Hongqi Robotaxis
* Apolong Shuttles
* King Long Robobuses
* Apollocop Public Safety Robots
* "New Species Vehicles", Apollo's term for a robot that can perform a range of functions, from vending snacks to sweeping and disinfecting the street
Aurora just explained their simulation approach in detail, on their blog. In particular, by "apply[ing] procedural generation to simulation," they write, "we can create scenarios at the massive scale needed to rapidly develop and deploy the Aurora Driver."
Interestingly, they have hired a team with Hollywood computer-generated imagery experience to automate the construction of simulation tests. They use an approach called "procedural generation", which allows engineers to generate thousands of specific tests by specifying only a few general parameters for a scenario.
For example, Aurora engineers might ask for lots of tests involving highway merges in the rain, within a certain speed range. Their system would then generate thousands of permutations of that type of test, using a combination of mapping and behavioral data from the real world, and simulation-specific data.
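A toy sketch of how that kind of expansion might work. Every parameter name here is invented for illustration; this is not Aurora's actual API or scenario format.

```python
import itertools
import random

def generate_scenarios(base, speeds_mph, gap_seconds, rain_rates,
                       n_jitter=3, seed=0):
    """Expand a high-level scenario spec into many concrete test permutations.

    Takes the cartesian product of a few parameter ranges, then adds random
    jitter, so one general description fans out into many concrete tests.
    """
    rng = random.Random(seed)  # seeded so test generation is reproducible
    scenarios = []
    for speed, gap, rain in itertools.product(speeds_mph, gap_seconds, rain_rates):
        for _ in range(n_jitter):
            scenarios.append({
                **base,
                "ego_speed_mph": speed + rng.uniform(-2, 2),  # jittered
                "merge_gap_s": gap,
                "rain_mm_per_hr": rain,
            })
    return scenarios

# "Highway merges in the rain, within a certain speed range":
tests = generate_scenarios(
    base={"scenario": "highway_merge"},
    speeds_mph=[55, 60, 65],
    gap_seconds=[1.5, 2.0, 2.5],
    rain_rates=[0, 5, 10],
)
# 3 speeds x 3 gaps x 3 rain rates x 3 jittered copies = 81 concrete tests
```

Scale the parameter lists up modestly and the combinatorics quickly reach the thousands of permutations the blog post describes.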
It's a really interesting read, and something Aurora believes in strongly. "The Aurora Driver performed 2.27 million unprotected left turns in simulation before even attempting one in the real world," they reveal.
The timing of the blog post is interesting, coming right on the heels of the 2020 California Autonomous Vehicle Mileage and Disengagement Reports. Aurora's numbers in those reports were really low, probably a function of the company's focus on Pittsburgh and other areas for testing.
Nonetheless, a piece of the puzzle I'd love to see in Aurora's blog post is a metric of how well simulation allows their vehicles to perform on the road. Ultimately, that should be the true measure of how effective a simulator is.
Yesterday the California DMV published the 2020 Autonomous Vehicle Disengagement and Mileage Reports. The DMV grants permits to organizations that want to test autonomous vehicles on public roads in the state. Any organizations that do test on public roads must file reports about how many miles they drove, and how frequently their safety operators had to "disengage" the autonomous driving system in order to manually control the vehicle.
Headline numbers are that total autonomous miles summed from all companies actually decreased from 2019 to 2020, presumably due to the pandemic (also the summer wildfires). Cruise and Waymo recorded far and away the most miles, with Pony.ai a distant third, and then an asymptotic trend toward 0 miles.
The Miles Per Disengagement chart looks similar, although it includes a few surprises. For example, AutoX drove only ~41,000 autonomous miles in California during 2020, but they also only disengaged twice.
These numbers reflect only autonomous driving in California, on public roads, during 2020, which is a lot of caveats. That certainly explains why Waymo has so few miles. A few years ago they boasted of achieving 1 million autonomous miles per month, much of that in California. Now they've moved most of their driving miles to Arizona.
Perhaps the caveats also explain some of the big names who have major engineering teams in Silicon Valley but donât appear in the report: Argo, Ford, Uber ATG (now Aurora, but how that merger is reflected here is unclear), and Baidu, for starters.
Tesla's absence from the report is its own annual, recurring story. None of the standalone Class 8 trucking companies, like TuSimple, Embark, or Kodiak, appear on the list. I'm not sure if that's because trucks go on a different list or they genuinely did 0 miles in California last year.
The reports themselves are only part of the story, though. For me, a fascinating angle is both how much attention everyone pays to the reports, and also how dismissive everyone is.
1/5 Today, CA DMV released its disengagement report. As we've said before, we appreciate what it's trying to do w/ the report, but the metrics provide limited value in assessing the capabilities of the Waymo Driver, or in distinguishing its performance from other A/V companies.
I've noticed in the past, evidence that suggests other companies (not Cruise) have changed how they define a disengagement. This makes me skeptical of these reports. I'm not saying one can't glean some insight, but I would take them more seriously if there was a standard.
No change – disengagement rate is not a sign of readiness by itself. But when measured in a consistent way, I think a 10x improvement in a year is real progress and worth celebrating. Especially since humans got worse in 2020. https://t.co/j6PXo7xuf2
And yet, nobody seems willing to share any other numbers.
Waymo offers up their 48-page Safety Report as an alternative evaluation tool, and it is a great report, and it is more than any other company in the industry puts forward. But this report is entirely qualitative. There are no metrics in the report, and no real indication of how fast Waymo is progressing, or why they feel confident pulling safety operators from some streets in Arizona, but not others.
Other companies don't provide even that much.
The big question, then, is what are the alternatives to these disengagement reports, will anybody be willing to share them, and will anybody demand to see them?
That is, EVs may not take over the world, but multi-car households may choose to own both an EV and an ICE, and utilize them for different types of trips.
The effort that went into the study is impressive: the team linked data from the California utility PG&E with data from the California DMV, in order to figure out which households owned EVs, how much more electricity they purchased, and thus how many miles they likely drove. There seem to be some careful corrections; for example, the data accounts for solar panel ownership and the resulting drop in demand from the electrical grid.
The results of the study seem plausible and maybe even intuitive: the type of households that purchase EVs seem plausibly likely to also be low-mileage households, generally, and also multi-car households.
Perhaps because of that plausibility, I'm hesitant to conclude too much from the study, other than that electrification is still a relatively new and limited technology. Presumably, as electrification expands, the types of households that purchase EVs will come to more closely resemble the median household. At the same time, EV range seems to be ever-increasing.
For now, EVs are still largely a status and luxury good for consumers who can afford the cost and other limitations. But they seem to be moving steadily toward the mass market.
Ford reported Q4 earnings this afternoon, posting a $2.8 billion loss, or a $1.3 billion gain, depending on how we count "special items." That's a $4.3 billion swing.
The bigger news seemed to be Ford's 2021 outlook. CFO John Lawler estimated Ford would book an annual pre-tax profit of $8 billion to $9 billion in the coming year. That would be a great year for Ford, and potentially its largest profit in 5 years. Although given that this past quarter's swing due to "special items" was $4.3 billion, there's a lot of variability here.
Ford also announced a big investment in electric and autonomous vehicles. The headline number is $29 billion through 2025, of which $22 billion will go to electric vehicles and $7 billion will go to autonomous vehicles. Various tweeters explain that some of this headline number includes expenditures from previous years, so it's not clear how much of this is new money.
Meanwhile, The Wall Street Journal reports that Ford may under-perform expectations by a billion dollars or two, because of a global shortage of semiconductor chips. The shortage is already hitting both GM and Ford. Ford, in particular, is set to cut shifts at its F-150 plants in the coming weeks, due to the lack of chips. Since the F-150 is Ford's profit engine, that's expensive.