
Tesla has announced that Autopilot will increase its reliance on radar, promoting it to first-class status within the sensor suite of Tesla vehicles.
The radar was added to all Tesla vehicles in October 2014 as part of the Autopilot hardware suite, but was only meant to be a supplementary sensor to the primary camera and image processing system.
As Tesla's blog post announcing the change puts it: "After careful consideration, we now believe it can be used as a primary control sensor without requiring the camera to confirm visual image recognition."
The blog post does not mention the fatal accident back in May, which occurred while a car was operating on Autopilot, though it’s easy to speculate that this update is related to that specific accident.
Tesla offers an example: when the car approaches an overhead highway sign positioned on a rise in the road, or a bridge where the road dips underneath, the scene often looks like a collision course to radar. The navigation data and the height accuracy of the GPS are not enough to know whether the car will pass under the object or not, and by the time the car is close and the road pitch changes, it is too late to brake.
That is basically the same scenario that caused the May accident.
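To make the geometry concrete, here is a minimal sketch of that decision problem. The grade, GPS error, and clearance figures are illustrative assumptions, not Tesla's numbers, and the function itself is hypothetical.

```python
import math

def might_collide(range_m: float, object_height_m: float,
                  gps_alt_error_m: float = 5.0,
                  vehicle_height_m: float = 2.0,
                  max_unmapped_grade_deg: float = 3.0) -> bool:
    """True when radar plus GPS alone cannot rule out hitting a stationary
    object detected range_m ahead, sitting object_height_m above the road.

    Worst-case height uncertainty combines GPS altitude error with the
    elevation change an unmapped road grade could produce over the
    remaining distance.
    """
    grade_uncertainty_m = range_m * math.tan(math.radians(max_unmapped_grade_deg))
    worst_case_clearance = (object_height_m - vehicle_height_m
                            - gps_alt_error_m - grade_uncertainty_m)
    return worst_case_clearance < 0

# An overhead sign 5 m above the road, detected 150 m out on a rise:
print(might_collide(150, 5.0))  # True: looks like a collision course
# Even at 30 m the assumed GPS error alone swamps the 3 m of real
# clearance, and 30 m is far too little to brake from highway speed:
print(might_collide(30, 5.0))   # True
```

With numbers like these, a radar-only system either brakes for every overpass or learns which stationary returns are safe to ignore, which is where the fleet data discussed below comes in.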
Tesla does relay some interesting information about why it initially relied much more heavily on the camera than on radar:
"This is a non-trivial and counter-intuitive problem, because of how strange the world looks in radar. Photons of that wavelength travel easily through fog, dust, rain and snow, but anything metallic looks like a mirror. The radar can see people, but they appear partially translucent. Something made of wood or painted plastic, though opaque to a person, is almost as transparent as glass to radar."
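As a toy illustration of that passage, the sketch below models echo strength as a material term times the 1/R^4 range falloff from the radar equation. The reflectivity values are made-up stand-ins, not measured radar cross-sections.

```python
# Relative reflectivity at automotive radar wavelengths (illustrative only).
RELATIVE_REFLECTIVITY = {
    "metal sign": 1.00,       # metallic surfaces act like mirrors
    "person": 0.20,           # visible to radar, but partially translucent
    "wooden pallet": 0.03,    # opaque to the eye, near-glass to radar
    "painted plastic": 0.03,
}

def echo_strength(material: str, range_m: float) -> float:
    """Relative echo power, keeping only the material term and the 1/R^4
    range falloff of the radar equation (other constants normalized away)."""
    return RELATIVE_REFLECTIVITY[material] / range_m ** 4

# A metal sign 100 m away out-echoes a wooden obstacle at half the distance:
print(echo_strength("metal sign", 100) > echo_strength("wooden pallet", 50))  # True
```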
The blog post also covers at least one scenario in which Tesla uploads driving data from its users and uses it to teach the fleet to drive better. And that’s notable in and of itself.
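The mechanism the post describes is fleet learning against a geocoded whitelist: cars note the positions of stationary radar objects, and once enough of the fleet has driven past one safely, it stops being treated as a braking candidate. Here is a minimal sketch of that idea; the threshold, the coordinate bucketing, and all names are assumptions for illustration.

```python
from collections import defaultdict

PASS_THRESHOLD = 100  # distinct safe passes before whitelisting (assumed)

# cell -> ids of vehicles that passed the object there without incident
safe_passes: dict[tuple[float, float], set[str]] = defaultdict(set)
whitelist: set[tuple[float, float]] = set()

def geocode(lat: float, lon: float) -> tuple[float, float]:
    """Bucket coordinates into roughly 10 m cells so nearby reports aggregate."""
    return (round(lat, 4), round(lon, 4))

def report_safe_pass(vehicle_id: str, lat: float, lon: float) -> None:
    """A car reports driving past a stationary radar return without incident."""
    cell = geocode(lat, lon)
    safe_passes[cell].add(vehicle_id)  # count vehicles, not repeat events
    if len(safe_passes[cell]) >= PASS_THRESHOLD:
        whitelist.add(cell)

def brake_candidate(lat: float, lon: float) -> bool:
    """False for objects the fleet has already proven safe to pass under."""
    return geocode(lat, lon) not in whitelist
```

Counting distinct vehicles rather than raw passes keeps a single car's daily commute from whitelisting an object all on its own.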
So, all around, a blog post worth reading.