Mobileye’s Big Bet On Radar

A radar scan, with side lobes, in the bottom right.

A few weeks ago, I wrote about the mapping deep-dive that Mobileye CEO Amnon Shashua presented at CES 2021.

That deep dive was one of two that Shashua included in his hour-long presentation. Today I'd like to write about the other deep dive: active sensors.

"Active sensors", in the context of self-driving cars, typically means radar and lidar. These sensors are "active" in the sense that they emit signals (light pulses and radio waves) and then record what bounces back. By contrast, cameras (and microphones, where applicable) are "passive" sensors, in that they merely record signals (light and sound waves) that already exist in the world.

Shashua pegs Mobileye's active sensor work to the goal of producing mass-market self-driving cars by 2025. He hedges a bit and doesn't quite call this "Level 5 autonomy", but he's clear that's where he's going.

To penetrate the mass market, Shashua says Mobileye "wants to do two things: be better and be cheaper." More specifically, Shashua shares that Mobileye is currently developing two standalone sensor subsystems: camera, and radar plus lidar. Ideally, each of these subsystems could drive the car all by itself.

By 2025, Shashua reveals that Mobileye wants to have three standalone subsystems: camera, radar, and lidar. This is the first time I can recall anybody talking seriously about driving a car with just radar. If it were possible (that's a big "if"), it would be a big deal.

Radar

Most of this deep dive is, in fact, about Mobileye’s efforts to dramatically improve radar performance.

“The radar revolution has much further to go and could be a standalone system.”

I don't fully follow Shashua's justification for this radar effort. He says, "no matter what people tell you about how to reduce the cost of lidar, radar is 10x less expensive."

Maybe. With the many companies entering the lidar field, a race to the bottom on prices seems plausible. But let’s grant the premise. Even though lidar might be 10x more expensive than radar, Shashua says that Mobileye still plans to build a standalone, lidar-only sensor subsystem. If lidar is so expensive, and radar is so inexpensive, and Mobileye can get radar to perform as well as lidar, then maybe Mobileye should just ditch lidar.

But they’re not ditching lidar, at least not yet.

In any case, sensor redundancy is great, and Mobileye is going to make the best radars the world has ever seen. In particular, they are going to focus on two major improvements: increasing resolution, and increasing the probability of detection.

Increasing resolution is a hardware problem. Mobileye is going to improve on the current automotive radar state of the art, which packs 12×16 transceivers into a sensor unit; Mobileye is working on 48×48 transceivers. Resolution scales with the product of transmit and receive channels, not just their sum, so this would be tremendous.
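To make that scaling concrete, here's a rough sketch. It rests on my assumption that the N×M figures refer to transmit × receive antenna counts in a MIMO radar, where the synthesized virtual array has Tx × Rx channels:

```python
def virtual_channels(n_tx: int, n_rx: int) -> int:
    """In a MIMO radar, n_tx transmitters and n_rx receivers
    synthesize a virtual array of n_tx * n_rx channels; angular
    resolution improves with the size of that virtual aperture."""
    return n_tx * n_rx

# Figures from the talk, interpreted as Tx x Rx (my assumption).
current = virtual_channels(12, 16)  # 192 virtual channels
future = virtual_channels(48, 48)   # 2304 virtual channels
print(future / current)             # 12.0 -- a 12x jump in channels
```

So even under this conservative reading, going from 12×16 to 48×48 is an order-of-magnitude jump in the virtual aperture, which is where the angular resolution comes from.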

Increasing the probability of detection is a software problem. Shashua calls this “software-defined imaging by radar.” Unlike with the transceivers, the explanation here is vague. Mobileye is going to transform current radar scans, which result in diffuse “side lobes” around every detected object. Mobileye’s future radar will draw bounding boxes as tight as lidar does.

My best guess as to how they will do this is “mumble, mumble, neural networks.” Mobileye is very good at neural networks.

Lidar

At the end of the deep dive, Shashua spends a few minutes on lidar.

And for that few minutes, the business angles get more interesting than the technology. There’s been a lot of back and forth about Mobileye and Luminar. A few months ago, Luminar announced a big contract from Mobileye, and then shortly after that Mobileye announced the contract would be only short-term. Over the long-term, Mobileye is developing their own lidar.

At CES, Shashua says, “2022, we are all set with Luminar.” But for 2025, they need FMCW (frequency-modulated continuous wave) lidar. That’s what they’re going to build themselves.

FMCW is the same technology that automotive radar already uses. The Doppler shift in FMCW allows radar to detect velocity instantaneously (as opposed to camera and lidar, which need to take at least two different observations, and then infer velocity by measuring the time and distance between those observations).

FMCW lidar will offer the same velocity benefit as FMCW radar. FMCW also uses lower-energy signals, and may perform better in weather like fog and sandstorms, where lidar currently underperforms.
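As a sketch of why FMCW gets velocity in a single shot: the round-trip Doppler shift is proportional to radial velocity, so one measured frequency offset yields speed directly. The carrier frequencies below are illustrative of the two sensor classes, not Mobileye's specs:

```python
C = 299_792_458.0  # speed of light, m/s

def radial_velocity(doppler_shift_hz: float, carrier_hz: float) -> float:
    """v = f_d * c / (2 * f_c); the factor of 2 is there because
    the signal travels to the target and back."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# A 10 kHz Doppler shift on a 77 GHz automotive radar carrier:
print(radial_velocity(10e3, 77e9))  # ~19.47 m/s, from one measurement

# At a 1550 nm lidar carrier (~193 THz), the same target speed
# produces a shift in the tens of MHz -- a large, easy-to-read signal.
```

That single-measurement property is the "instantaneous velocity" advantage over cameras and time-of-flight lidar, which must difference two observations.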

As Shashua himself says in the presentation, the whole lidar industry is going to FMCW. So why does Mobileye need to build their own lidar?

Well, Shashua says, FMCW is hard.

But then we get to the real answer. Intel, which purchased Mobileye several years ago, is going to use Intel fabs to “put active and passive lidar elements on a chip.”

And that’s when I start to wonder if this Luminar deal really is only short-term.

Intel is struggling in a pretty public way, squeezed on different sides by TSMC, NVIDIA, and AMD. In 2021, the Mobileye CEO (and simultaneously Intel SVP) says they’re going to build their own lidar, basically because they’re owned by Intel.

Maybe Intel will turn out to be better and cheaper at lidar production than the five lidar startups that just went public in the past year. Or maybe Intel won’t be better or cheaper, but Mobileye will have to use Intel lidar anyway, because Intel owns them. Or maybe in a few years Mobileye will quietly extend a deal for Luminar FMCW lidar.

Deep Dive on Mobileye REM Maps

Yesterday, I posted a brief overview of a couple of presentations Mobileye CEO Amnon Shashua gave at CES 2021 this month. I really enjoyed these presentations, in large part because over the years I've read less about Mobileye, and know less about them, than about many of the other companies in the automotive technology ecosystem.

Today, I re-watched Shashua’s “deep dive” on Mobileye’s REM mapping approach. It’s quite informative, so I took notes.

  • REM is a Mobileye brand name that stands for Road Experience Management
  • The maps are generated from cameras. In the future, Mobileye’s lidar and radar will be designed to work with these camera-only maps, not the other way around.
  • In particular, even future lidar and radar systems will not use standard, point-cloud-based HD maps. Point clouds take up too much storage space to be practical, particularly for updating from a huge fleet of vehicles.
  • Instead of point clouds, REM uses "semantic" maps that record sparse information, such as drivable paths, stop lines, and traffic signal locations.
  • Identifying this semantic segmentation and uploading it to the cloud takes 10 kb of data transfer per kilometer. This costs somebody (the manufacturer?) $1 per year, on average.
  • All of this raises a question, though — are maps even necessary?
  • In theory, maps aren’t necessary. After all, humans drive without maps (in many scenarios). Humans just figure out the road as we drive.
  • Artificial intelligence can do the same thing, but AI isn’t nearly as good as humans at this (yet). The Mean Time Between Failures (MTBF) for an AI will be low — lots of problems.
  • Solution: prepare a lot of this information in advance, and store it in the map.
  • Shashua says that everyone is using a map, even if they say they’re not. Pretty clear that this is a reference to Tesla.
  • Mobileye’s maps have three performance goals: Scale (consumer vs. robotaxi), Up-To-Dateness (real-time), Accuracy (cm-level)
  • Mobileye has a division which builds lidar-based HD Maps, so they know the pros and cons of this approach
  • Lidar-based HD maps are too detailed. The AI driver only needs information within a 200m radius around the vehicle, but HD maps contain very detailed information about the entire world.
  • On the flip side, point clouds are just coordinates in space. AI needs semantic meaning: drivable paths, priority, crosswalks, stopping & yield lines.
  • Calculating this in real-time is theoretically possible, but practically impossible: too many conflicting signs and signals, too much noise, too much going on
  • Mobileye is now creating AV Maps, which are not HD Maps: scalability everywhere, accuracy within a 200m radius, semantic features generated from the wisdom of the crowd
  • Map creation process: Harvesting -> Alignment -> Modeling & Semantics
  • In the photo above, only the data marked by yellow lines is uploaded to the cloud. That's the important information.
  • Mobileye extracts semantic meaning from the data and uses splines to represent drivable paths.
  • Currently, Mobileye maps 8M km of roads every day (6 countries). Unclear if this is 8M unique km, or the same 1km mapped by 8M vehicles every day.
  • By 2024, they’ll be mapping 1B km of roads every day (the whole planet).
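For a sense of scale on that 10 kb/km figure, here's a back-of-envelope sketch. The annual mileage is my assumption, and I'm reading "10 kb" as 10 kilobytes:

```python
KB_PER_KM = 10        # from the talk: ~10 kb uploaded per km driven
KM_PER_YEAR = 15_000  # assumed annual mileage for a typical vehicle

annual_kb = KB_PER_KM * KM_PER_YEAR
print(f"~{annual_kb / 1024:.0f} MB per vehicle per year")  # ~146 MB
```

Even at cellular data rates, on the order of 100 MB per car per year is a modest bill, which makes the roughly $1-per-year cost figure plausible — and it's orders of magnitude below what uploading raw point clouds would require.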

Mobileye: Driving In Jerusalem

Mobileye, which now belongs to Intel, recently published a video of a 40-minute autonomous drive through urban Jerusalem. Mobileye is based in Jerusalem, and its streets may be among the most challenging that self-driving cars currently handle.

The recording clocks in at about 26 minutes, shorter than the actual drive, because parts of the drive are played back at 2x speed. I annotated the video with my thoughts.

[1:05] Mobileye’s sensing display is pretty sparse, but does a nice job focusing on the important items.

[1:05] I am a little surprised that Mobileye’s in-cabin display set-up is clearly aftermarket. I guess that shouldn’t surprise me, but it’s a reminder that Mobileye is a supplier, not an OEM.

[1:50] Merges from a standing start are tough. Nice work.

[3:10] Overtaking the parked truck is very safe and deliberate, but I could imagine eventually that will need to happen faster.

[4:25] The pedestrian is detected well before the crosswalk. Awesome.

[5:15] “We use environmental cues to determine that these cars are parked, and not just stuck in traffic.” I wonder what those cues are? The map?

[6:20] This drive is camera-only, but Mobileye says they are building an entirely separate, redundant sensing stack that is radar plus lidar. They want to achieve full self-driving with each subsystem independently. They call this "true redundancy." Interesting choice to build two separate stacks, divided by modality, as opposed to equipping the vehicle with two independent sensor suites that each include all modalities.

[8:15] Mobileye's Road Experience Management technology pulls anonymized sensor data from BMW, VW, Nissan, and SAIC vehicles. Mobileye fuses this data into its mapping system, to keep its maps up to date around the world. This is part of the dream of ADAS — that you can get much more data from a production fleet of human drivers than from a fleet of test vehicles. "We are basically leveraging Mobileye's strong ADAS position to build and maintain a near-real-time HD map." I wonder exactly what data they are pulling, and whether the manufacturers will agree to this in the long run.

[10:17] This route includes some very narrow Jerusalem streets. This one, at least, is totally straight. I’m not sure this “proves” AVs can operate in places like India, but this is certainly a more challenging environment than, say, Phoenix.

[11:10] The unprotected left turn felt a little tense, but basically okay.

[12:20] Nice job detecting a pedestrian dragging a forklift. This scenario is reminiscent of the situation that led to Uber ATG’s fatal collision with Elaine Herzberg in Arizona. Mobileye seems to have no problem with this here.

[13:15] Really interesting and successful “negotiation” to merge around a stopped vehicle.

[15:00] The human driver takes over to pull over in a bus stop zone, so that the drone operator (riding shotgun) can land the drone and change the battery. I am surprised the human driver had to take over here. Compared to a lot of the autonomous maneuvers in this video, "pull over" seems pretty basic (and necessary).

[15:00] This stop highlights that the human driver does not seem to provide any input to the AV during the whole drive. The route appears pre-programmed from start to finish. I wonder how strong Mobileye’s in-vehicle UX is.

[15:30] This shot reminds me how impressive drone operators are. You take it for granted, but this drone operator is sitting in the passenger seat of a moving vehicle. He’s keeping a drone, which he can’t see, in place directly above the car, at hundreds of feet off the ground, for forty minutes!

[17:15] Roundabouts are tough for Americans. I’d pay a self-driving car to handle roundabouts for me.

[17:55] Even self-driving cars want to change to the next lane if it looks faster!

[19:05] Super-narrow street with lots of cars. I’d be nervous driving here. Impressive, especially for a camera-only system! Localization typically relies on lidar. Mobileye can clearly localize effectively with just cameras.

[19:35] A driver exits a parked car to wave the AV around. The AV doesn’t seem to “understand” the wave, but once the driver gets out of the way, it figures out to pass the parked car.

[21:27] That was a challenging unprotected left turn. I’m impressed again.

[22:45] Interesting that the AV does not yield to the moped (I think it’s a moped) in the crosswalk. The system seems to recognize the vehicle as a moped, but the moped is trying to use the crosswalk like a pedestrian. Tricky situation.

[24:20] The sensing UI seems to recognize a leading car quite far ahead — so far ahead that it doesn’t appear in the drone shot. I wonder what the system’s range is.

[25:25] Once again, the safety driver takes control to pull over and end the ride. I’m puzzled why that wasn’t pre-programmed, like the rest of the ride.

Overall, this was a lot of fun to watch and a really impressive performance by Mobileye. Jerusalem seems like a tough place to drive!

Mobileye Powers Forward

One of the key suppliers for autonomous vehicle manufacturers is the Israeli company Mobileye. Mobileye specializes in computer vision and produces the chips and software to help cars “see” the road.

The company has had to deal with significant bad press for the last six months, including the rumor that Tesla was looking to drop them and move to another supplier. Tesla later denied that rumor.

Mobileye just announced a big fourth-quarter 2015, though, and things appear to be on the upswing.

According to Tech Republic, here are the big takeaways:

1. Already known for its partnership with Tesla, Mobileye has signed up its third major automaker, Nissan, to add to existing partnerships with GM and VW.

2. Automakers are outsourcing the creation of vision-assistance tech instead of doing it themselves.

3. By teaming up with automakers, Mobileye is gathering information from cars on the road, which can allow it to “crowdsource” maps. This access to data gives it an edge over tech companies like Google, who have not yet announced partnerships with car companies.