One of the revelations from CS373: Artificial Intelligence for Robotics is the extent to which autonomous driving technology uses a hybrid of global mapping and local sensing.
So, for example, if a car wants to drive from Los Angeles to San Francisco, it essentially outsources the mapping function to Google Maps, and uses local computation only for visual-horizon driving.
This simplifies the software and allows the robotics wizards to focus just on local issues.
It's one of those things that's obvious once it's explained, but kind of revelatory.
Chris Gerdes is a Stanford engineering professor working on driverless race cars. I imagine he's doing some pretty neat technological work, but he's made the press recently for a more philosophical reason: the ethics of driverless cars.
Bloomberg doesn't do a terrific job raising the different ethical issues that might arise for a robot driver, but it gets the ball rolling, and it's not hard to imagine more from there:
Take that double-yellow line problem. It is clear that the car should cross it to avoid the road crew. Less clear is how to go about programming a machine to break the law or to make still more complex ethical calls.
One potential dilemma, for example, is how to program for the famous trolley problem. If a computer has to make a decision between staying on course and killing 5 people, or veering off the road and killing one pedestrian, what do we program the computer to do?
What if it's a 25% chance of killing 5 people against the certainty of killing one person?
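The arithmetic behind that probabilistic variant is easy to state, even if the ethics are not. A minimal sketch, with all numbers purely hypothetical:

```python
# Toy expected-harm comparison for the probabilistic trolley variant.
# These numbers illustrate the dilemma; they are not a real decision policy.

def expected_deaths(probability: float, deaths: int) -> float:
    """Expected number of deaths for an outcome with the given probability."""
    return probability * deaths

# Option A: stay on course -- a 25% chance of killing 5 people.
stay = expected_deaths(0.25, 5)   # 1.25 expected deaths

# Option B: veer off the road -- certainty of killing 1 person.
veer = expected_deaths(1.0, 1)    # 1.0 expected deaths

# A pure expected-value minimizer would veer. Whether trading a certain
# death for a merely probable one is acceptable is exactly the open question.
choice = "veer" if veer < stay else "stay"
```

The point of the sketch is that the math alone doesn't settle anything: someone still has to decide whether minimizing expected deaths is the right objective to program.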
These are pretty extreme examples, but even the more mundane decisions aren't entirely clear. Should driverless cars adhere rigidly to the speed limit? To 5 m.p.h. in parking garages?
What if another driver motions at the car to proceed out of order through a stop sign?
What about a late merge that requires crossing a solid white line?
I don't expect these will be insurmountable issues, but they will make explicit the extent to which we all implicitly assume some violation of our traffic laws.
Of these, the taxis in Japan seem by far the most interesting from a rider's point of view. The engineering involved in all five of them is spectacular, of course.
But the benefits of self-driving cars will only really arrive when they move beyond being "monorails on wheels" and become personal vehicles that go wherever we want.
Swedish automotive group Volvo Cars on Wednesday urged U.S. federal authorities to impose nationwide guidelines for self-driving cars, vowing to accept full liability should one of its cars be involved in an accident while in autonomous mode.
Musk, who spoke Tuesday at the Automotive News World Congress conference, said he expects that the lack of clear federal regulations covering self-driving cars could delay their introduction until 2022 or 2023.
These are smart guys, and I'm sure they're aware of Uber's success vis-à-vis regulators.
To at least some extent, presumably they are angling for regulations that will reduce their own liability or box competitors out of the market.
But, if companies can get driverless cars into the hands of consumers before the regulations clamp down, then those consumers will wind up powering the driverless car lobby.
To take just one example, a key market segment for driverless cars will be the elderly. And the elderly vote. A lot.
The strongest case for self-driving cars is safety; its logical, programmed movement also means vehicles can be centrally controlled, rerouting traffic away from congestion. Since the project started in 2009, Google has driven most of its 1.2m hours of tests in a small fleet of customised Lexus autonomous cars. As of July this year, there had been 14 accidents but all had been caused by human error, not by the software. Around 33,000 people die in traffic accidents in the US every year; Google says self-driving cars will reduce that number significantly. The opportunities are, indisputably, immense.
The hard sell for Google will be winning over generations of people who feel safer being in control of their vehicle, don't know or care enough about the technology, or who simply enjoy driving. Yet most people who try a demo say the same thing: how quickly the self-driving car feels normal, and safe. As the head of public policy quipped, "perhaps we just need to do demos for 7 billion people". Google's systems engineer Jaime Waydo helped put self-driving cars on Mars while she worked at Nasa; it may well be that regulation and public policy prove easier there than on Earth.
I think the article gets the sale wrong. While safety may be important for regulators, I am doubtful that it will be an important sell for consumers, at least initially.
Early adopters tend to be people who have an overwhelming interest in technology, or a strong need for the product to solve a specific pain point. Safety is rarely a pain point until it's too late.
I think the strongest case for self-driving cars will be helping people who place a lot of value on mobility but either cannot drive or place a high negative value on the act of driving. That will be caregivers, companies that donāt want to pay drivers, and commuters.
Thrun is an Elon Musk type who has been wildly successful in a number of disparate domains: Stanford professor, father of the self-driving car, Udacity CEO. There's a lot to say about Thrun on another occasion, but here I'll focus on the Udacity robotics course.
This is the first course I have taken on the Udacity platform, and I am really impressed by what they have put together. The format is a big advance over the lectures I listened to in college.
For one, Thrun has moved way beyond putting his PowerPoint slides into a YouTube video and doing a voiceover. Instead, he gives a very polished whiteboard presentation, specially crafted for the Udacity format. That means we're not looking at Thrun standing at a whiteboard; rather, we're looking at his hand (or that of a hand model) drawing out well-contained lessons.
But the big step forward is the constant quiz-and-feedback mode. Every 1–2 minutes, Thrun asks a quiz question to verify we're still following along. Sometimes it's a multiple-choice question; often it's a toy programming problem that requires us to write 2–5 lines of Python in the context of a larger program he gives us.
Thrun is very enthusiastic, constantly telling us how amazing and remarkable we are as students, to have so quickly programmed up a toy version of the Google self-driving car localization algorithm.
In reality, I think it is Thrun who has built something quite remarkable.
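For flavor, the toy localization exercise looks roughly like this: a one-dimensional histogram (Bayes) filter that alternates sensing and moving. This is a sketch from memory under assumed sensor probabilities (`p_hit`, `p_miss`), not the course's exact code:

```python
# One-dimensional histogram localization, in the spirit of the CS373
# exercises. The robot lives in a cyclic world of colored cells and
# maintains a belief distribution over which cell it occupies.

def sense(p, world, measurement, p_hit=0.6, p_miss=0.2):
    """Update the belief p after observing a cell color, then normalize."""
    q = [p[i] * (p_hit if world[i] == measurement else p_miss)
         for i in range(len(p))]
    total = sum(q)
    return [x / total for x in q]

def move(p, step):
    """Shift the belief by `step` cells in a cyclic world (exact motion)."""
    n = len(p)
    return [p[(i - step) % n] for i in range(n)]

world = ['green', 'red', 'red', 'green', 'green']
p = [0.2] * 5                  # uniform prior: the robot could be anywhere
p = sense(p, world, 'red')     # probability mass shifts to the red cells
p = move(p, 1)                 # robot moves one cell to the right
p = sense(p, world, 'red')     # second 'red' reading sharpens the belief
```

After the second measurement the belief concentrates on the cell consistent with seeing red twice while moving right, which is the whole trick: a few lines of multiply, normalize, and shift.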
Uber CEO Travis Kalanick has been vocal about the company's desire to move away from human drivers and toward self-driving cars, as soon as possible.
That day is still in the future, though, and for the moment, Uber is stuck in a globe-spanning collection of fights with taxi commissions and city governments. Uber has mostly been able to win these fights.
But presumably the advent of self-driving cars will lead to round two of these regulatory battles, this time with current Uber drivers standing in opposition to the self-driving machines.
I hope and expect the forward march of progress to continue, but it is ironic that in order to prevail today, Uber is setting up a potential problem for tomorrow.
The Washington Post reports today on the growing inequality between auto fatality rates for the highly-educated and less-educated in America.
The article is itself reporting on an academic paper (gated) that finds:
Adjusted death rates were 15.3 per 100,000 population (95% confidence interval (CI): 10.7, 19.9) higher at the bottom of the education distribution than at the top of the education distribution in 1995, increasing to 17.9 per 100,000 population (95% CI: 14.8, 21.0) by 2010. In relative terms, adjusted death rates were 2.4 (95% CI: 1.7, 3.0) times higher at the bottom of the education distribution than at the top in 1995, increasing to 4.3 times higher (95% CI: 3.4, 5.3) by 2010. Inequality increases were larger in terms of vehicle-miles traveled. Although overall MVA death rates declined during this period, socioeconomic differences in MVA mortality have persisted or worsened over time.
First things first, death rates declined overall, which is great news.
The disparity across educational classes is troubling, but there doesn't seem to be a solid explanation. Seat belt usage, automobile model year and safety features, drinking, and other behavioral factors are among the possible culprits.
The Post points out that self-driving vehicles will make the disparity even greater in the near term (assuming self-driving cars are safer than human-driven cars). They do not highlight that this, too, is a good thing. Fewer deaths are better, even if the reduction comes at the higher end of the educational distribution.
My hope, though, is that self-driving cars become so ubiquitous so quickly that the disparity goes to zero, sooner rather than later.
US auto sales are booming, which provides money for R&D. Along those lines, GM has just announced that Super Cruise hands-free driving will appear in Cadillacs next year.
This is a great technological step forward, although it's unclear how important this highway-only, hands-free mode will be to consumers.
"It's going to be a creep, it's not going to be a mind-bending thing," said GM's product development chief Mark Reuss earlier this year. "I don't think you're going to see an autonomous vehicle take over the city anytime soon."
I'm reminded a little bit of the first touch-screen phones. When I worked at mSpot, I managed a few of our products that we ported to the Samsung Instinct, which was pretty buggy and not so functional.
Anyone judging the future of smartphones by using the Instinct could have been forgiven for doubting the whole endeavor.
But the iPhone and the phones that followed improved rapidly, driven by competition and consumer demand, and by 2010 nobody doubted the importance of smartphones.
I wouldnāt be surprised to see a similar story play out with the first autonomous driving systems.