
Yesterday’s post on the Tesla Autopilot crash generated a lot of comments, in large part because the Medium staff posted it to their homepage.
And the comments were really quite brilliant! They highlighted points I either hadn't thought of or hadn't explained clearly.
Flying and Boating
Diane Torrance and Jay Conne compared Tesla Autopilot to their experiences as an airplane pilot and a boat captain, respectively. Those are transportation modes in which some version of autopilot is used extensively, and I expect there is a lot we can learn by studying what people in those fields have already figured out.
In fact, the podcast 99% Invisible did an episode on the risks and rewards of airplane autopilot last year. The episode is called “Children of the Magenta”, a reference to the magenta line that airplane pilots track while on autopilot.
Fault
Clifford Goudey made a good point about fault and liability:
The truck driver needed to wait for a better opportunity to make the left turn and is burdened to yield to oncoming traffic.
However, Duff Bailey countered with a point about reality on the road:
a left turning driver must yield to all traffic before crossing — which is an actual impossibility under most normal traffic conditions. To prevent gridlock — human piloted cars and trucks play a subtle game of chicken with the oncoming traffic — depending on their survival instinct to slow down and not crash. When the other vehicle’s pilot system is suicidal, asleep or not working (as appears to be the case here) a crash WILL result.
Jay Conne added:
In some highway situations, the speed-limit may permit a car to become visible too late for a slow turning, large truck to see the car in time to not make their turn.
Driver Engagement
Brad Slusher lamented:
Unfortunately we will never know if the driver was infact actively engaged in monitoring his environment
I think that is true, in this case. But I also think the day is coming when cars have in-cabin cameras to monitor driver attentiveness.
Sensors vs Software
Marcus Morgenthal took exception to the idea that design flaws in Tesla’s software or sensors mitigated the driver’s culpability for the accident. And I agree.
But I would also like to clarify my earlier comments on how this accident might have been due to failures of sensors, software, or both.
Tesla’s initial comments indicated that the failure might have been due to software, which is a much easier fix. After all, Tesla has gotten quite good at patching its software over the air, through customers’ home WiFi networks.
A sensor failure would be more problematic, as it might require adding a camera or radar unit to every Tesla running Autopilot. That, in turn, might mean a particularly expensive mass recall.
Tesla has done the right thing with recalls in the past, and I’m confident they would do the right thing here. But it’s worth keeping the costs and incentives in mind.
Thank You
A number of commenters wrote just to thank me for a helpful article, which was really kind.
And I say thank you back!
I really enjoy writing about self-driving cars and it’s fantastic that you enjoy reading about them!
In Memoriam
I’d like to close by mentioning Joshua Brown, the victim, once more.
By all accounts, Joshua Brown loved technology, and he particularly loved his Tesla, which he nicknamed “Tessy”.
His family has quietly indicated that they will await the accident report before they decide whether or not to bring legal action related to the crash. I think that speaks to Brown’s commitment to self-driving car technology, even though it contributed to his death.