Companies Working on Self-Driving Cars

TechRadar has a pretty good slideshow outlining the major players in the self-driving car space:

  1. Google
  2. Uber
  3. Tesla
  4. Honda
  5. Mercedes-Benz
  6. BMW
  7. Audi
  8. Delphi
  9. Apple

They make the interesting point that Uber is the most secretive of these companies, at least as far as its self-driving car technology is concerned. Although there was a very public uproar when Uber partnered with, then hired away, Carnegie Mellon’s team, very little actual technology news has come out of the company. That doesn’t mean they’re not advancing fast, of course.


Originally published at www.davidincalifornia.com on October 14, 2015.

Hybrid Search

One of the revelations from CS373: Artificial Intelligence for Robotics is the extent to which autonomous driving technology uses a hybrid of global mapping and local sensing.

So, for example, if a car wants to drive from Los Angeles to San Francisco, it essentially outsources the mapping function to Google Maps and uses local computation only for driving within its visual horizon.

This simplifies the software and allows the robotics wizards to focus just on local issues.

It’s one of those things that’s obvious once it’s explained but kind of revelatory.
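The division of labor can be sketched in a toy example. Everything here — the grid world, the waypoints, the function names — is my own stand-in for illustration, not anything from the course or a real system:

```python
# Toy sketch of the hybrid approach: a coarse global route (the kind a
# mapping service would supply) plus a local planner that only reasons
# about obstacles within the car's "sensor range."

def global_route():
    # Coarse waypoints on a grid; in a real system these come from an
    # external mapping service rather than onboard computation.
    return [(0, 0), (0, 5), (5, 5), (5, 9)]

def local_leg(start, target, obstacles):
    """Greedy local driving: step one grid cell at a time toward the
    next waypoint, sidestepping locally sensed obstacles."""
    pos, visited, path = start, {start}, [start]
    while pos != target:
        candidates = [(pos[0] + dx, pos[1] + dy)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        candidates = [c for c in candidates
                      if c not in obstacles and c not in visited]
        # Move to the candidate closest (Manhattan distance) to the waypoint.
        pos = min(candidates,
                  key=lambda c: abs(c[0] - target[0]) + abs(c[1] - target[1]))
        visited.add(pos)
        path.append(pos)
    return path

def drive(obstacles):
    route = global_route()
    path = [route[0]]
    for waypoint in route[1:]:
        path += local_leg(path[-1], waypoint, obstacles)[1:]
    return path

path = drive(obstacles={(0, 3)})
print(path[-1])  # the final global waypoint: (5, 9)
```

The point of the sketch is the separation: `global_route` never sees the obstacle, and `local_leg` never sees anything beyond the next waypoint.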


Originally published at www.davidincalifornia.com on October 13, 2015.

Autonomous Driving Ethics

Chris Gerdes is a Stanford engineering professor working on driverless race cars. I imagine he’s doing some pretty neat technological work, but he’s made the press recently for a more philosophical reason — the ethics of driverless cars.

Bloomberg doesn’t do a terrific job of raising the different ethical issues that might arise for a robot driver, but it gets the ball rolling, and it’s not hard to imagine more from there:

Take that double-yellow line problem. It is clear that the car should cross it to avoid the road crew. Less clear is how to go about programming a machine to break the law or to make still more complex ethical calls.

One potential dilemma, for example, is how to program for the famous trolley problem. If a computer has to make a decision between staying on course and killing 5 people, or veering off the road and killing one pedestrian, what do we program the computer to do?

What if it’s a 25% chance of killing 5 people against the certainty of killing one person?
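The probabilistic version can at least be made arithmetic, even if the ethics can’t be. Under a naive expected-value rule — a deliberately crude model, not a proposal — the comparison looks like this:

```python
# Naive expected-fatality comparison for the probabilistic variant.
p_crash, group_size = 0.25, 5

stay_on_course = p_crash * group_size  # 0.25 * 5 = 1.25 expected deaths
veer_off_road = 1.0                    # one certain death

# A pure expected-value minimizer would veer off the road -- but whether
# that is the right rule is exactly the open question.
print(stay_on_course, veer_off_road)
```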

These are pretty extreme examples, but even the more mundane decisions aren’t entirely clear. Should driverless cars adhere rigidly to the speed limit? To 5 m.p.h. in parking garages?

What if another driver motions at the car to proceed out of order through a stop sign?

What about a late merge that requires crossing a solid white line?

I don’t expect these will be insurmountable issues, but they will make explicit the extent to which we implicitly assume our traffic laws will be violated.


Originally published at www.davidincalifornia.com on October 12, 2015.

How Many Companies Are Developing Self-Driving Cars?

In an article about self-driving car accidents, Engadget makes an interesting side observation:

However, there are no less than ten companies testing self-driving vehicles in the state, and Apple is at least considering entering the fray.

So which companies are testing self-driving cars?

Here is the California DMV list:

  • Volkswagen Group of America
  • Mercedes Benz
  • Google
  • Delphi Automotive
  • Tesla Motors
  • Bosch
  • Nissan
  • Cruise Automation
  • BMW
  • Honda

That comes out to five traditional automakers, plus Tesla, plus two auto-parts suppliers, plus Google, plus one start-up.


Originally published at www.davidincalifornia.com on October 9, 2015.

Where to Ride an Autonomous Vehicle

Tech Insider lists five places:

  • Between two towns in the Netherlands
  • A bus in China
  • Taxis in Japan
  • Around an office park in California
  • Somewhere in Finland

Of these, the taxis in Japan seem by far the most interesting from a rider’s point of view. The engineering involved in all five of them is spectacular, of course.

But the benefits of self-driving cars will only really arrive when they move beyond being “monorails on wheels” and become personal vehicles that go wherever we want.


Originally published at www.davidincalifornia.com on October 8, 2015.

Ask for Forgiveness: Autonomous Vehicle Edition

Uber is famously succeeding with the ask-for-forgiveness-not-permission approach.

A lot of automakers may wind up borrowing that page from Uber’s playbook.

Volvo just put out a call for the US to regulate self-driving cars:

Swedish automotive group Volvo Cars Wednesday urged U.S. federal authorities to impose nationwide guidelines for self-driving cars, vowing to accept full liability should one of its cars be involved in an accident while in autonomous mode.

Elon Musk has also voiced concern over the lack of regulation:

Musk, who spoke Tuesday at the Automotive News World Congress conference, said he expects the lack of clear federal regulations covering self-driving cars could delay their introduction until 2022 or 2023.

These are smart guys, and I’m sure they’re aware of Uber’s success vis-à-vis regulators.

To at least some extent, presumably they are angling for regulations that will reduce their own liability or box competitors out of the market.

But if companies can get driverless cars into the hands of consumers before the regulations clamp down, then those consumers will wind up powering the driverless car lobby.

To take just one example, a key market segment for driverless cars will be the elderly. And the elderly vote. A lot.


Originally published at www.davidincalifornia.com on October 7, 2015.

The Sell

The Guardian recently ran a piece entitled “Self-driving cars: safe, reliable — but a challenging sell for Google”:

The strongest case for self-driving cars is safety; its logical, programmed movement also means vehicles can be centrally controlled, rerouting traffic away from congestion. Since the project started in 2009, Google has driven most of its 1.2m hours of tests in a small fleet of customised Lexus autonomous cars. As of July this year, there had been 14 accidents but all had been caused by human error, not by the software. Around 33,000 people die in traffic accidents in the US every year; Google says self-driving cars will reduce that number significantly. The opportunities are, indisputably, immense.

The hard sell for Google will be winning over generations of people who feel safer being in control of their vehicle, don’t know or care enough about the technology, or who simply enjoy driving. Yet most people who try a demo say the same thing: how quickly the self-driving car feels normal, and safe. As the head of public policy quipped, “perhaps we just need to do demos for 7 billion people”. Google’s systems engineer Jaime Waydo helped put self-driving cars on Mars while she worked at Nasa; it may well be that regulation and public policy prove easier there than on Earth.

I think the article gets the sell wrong. While safety may be important for regulators, I am doubtful that it will be an important sell for consumers, at least initially.

Early adopters tend to be people who have an overwhelming interest in technology, or a strong need for the product to solve a specific pain point. Safety is rarely a pain point until it’s too late.

I think the strongest case for self-driving cars will be helping people who place a lot of value on mobility but either cannot drive or place a high negative value on the act of driving. That means caregivers, companies that don’t want to pay drivers, and commuters.

And that’s a pretty huge market.


Originally published at www.davidincalifornia.com on October 6, 2015.

Ride-Sharing and Self-Driving Cars

Uber CEO Travis Kalanick has been vocal about the company’s desire to move away from human drivers and toward self-driving cars, as soon as possible.

That day is still in the future, though, and for the moment, Uber is stuck in a globe-spanning collection of fights with taxi commissions and city governments. Uber has mostly been able to win these fights.

But presumably the advent of self-driving cars will lead to round two of these regulatory battles, this time with current Uber drivers standing in opposition to the self-driving machines.

I hope and expect the forward march of progress to continue, but it is ironic that in order to prevail today, Uber is setting up a potential problem for tomorrow.


Originally published at www.davidincalifornia.com on October 5, 2015.

CS373

I just started CS373: Artificial Intelligence for Robotics, which is Sebastian Thrun’s robot car course on Udacity.

Thrun is an Elon Musk-type, who has been wildly successful in a number of disparate domains — Stanford professor, father of the self-driving car, Udacity CEO. There’s a lot to say about Thrun on another occasion, but here I’ll focus on the Udacity robotics course.

This is the first course I have taken on the Udacity platform, and I am really impressed by what they have put together. The format is a big advance over the lectures I listened to in college.

For one, Thrun has moved way beyond putting his PowerPoint slides into a YouTube video and doing a voiceover. Instead, Thrun is basically doing a very polished whiteboard presentation, specially crafted for the Udacity format. Which means we’re not looking at Thrun standing at a whiteboard, but rather at his hand (or that of a hand model), drawing out well-contained lessons.

But the big step forward is the constant quiz-and-feedback mode. Every 1–2 minutes, Thrun asks a quiz question to verify we’re still following along. Sometimes it’s a multiple-choice question; often it’s a toy programming problem that requires us to write 2–5 lines of Python in the context of a larger program he gives us.

Thrun is very enthusiastic, constantly telling us how amazing and remarkable we are as students, to have so quickly programmed up a toy version of the Google self-driving car localization algorithm.
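That localization exercise is a discrete Bayes (histogram) filter. Here is a minimal sketch in the spirit of the course’s toy problems — the world, the sensor probabilities, and the function names are my own stand-ins, not the course’s exact code:

```python
# Histogram localization on a 1-D cyclic world: start from a uniform
# belief, then alternate measurement (sense) and motion (move) updates.

world = ['green', 'red', 'red', 'green', 'green']
p_hit, p_miss = 0.6, 0.2  # assumed sensor likelihoods

def sense(belief, measurement):
    # Multiply each cell's probability by the measurement likelihood,
    # then normalize so the belief still sums to 1.
    q = [p * (p_hit if color == measurement else p_miss)
         for p, color in zip(belief, world)]
    total = sum(q)
    return [x / total for x in q]

def move(belief, step):
    # Exact cyclic shift; a real robot would also blur the belief to
    # model motion noise.
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

belief = [1.0 / len(world)] * len(world)
for measurement, step in [('red', 1), ('green', 1)]:
    belief = move(sense(belief, measurement), step)

print(belief)  # probability mass concentrates on the most likely cell
```

The whole trick is those two alternating updates: sensing sharpens the belief, motion shifts it, and after a few cycles the robot knows roughly where it is.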

In reality, I think it is Thrun who has built something quite remarkable.


Originally published at www.davidincalifornia.com on October 5, 2015.

Car Crash Inequality

The Washington Post reports today on the growing inequality between auto fatality rates for the highly educated and the less educated in America.

The article is itself reporting on an academic paper (gated) that finds:

Adjusted death rates were 15.3 per 100,000 population (95% confidence interval (CI): 10.7, 19.9) higher at the bottom of the education distribution than at the top of the education distribution in 1995, increasing to 17.9 per 100,000 population (95% CI: 14.8, 21.0) by 2010. In relative terms, adjusted death rates were 2.4 (95% CI: 1.7, 3.0) times higher at the bottom of the education distribution than at the top in 1995, increasing to 4.3 times higher (95% CI: 3.4, 5.3) by 2010. Inequality increases were larger in terms of vehicle-miles traveled. Although overall MVA death rates declined during this period, socioeconomic differences in MVA mortality have persisted or worsened over time.

First things first, death rates declined overall, which is great news.

The disparity across educational classes is troubling, but there doesn’t seem to be a solid explanation. Seat belt usage, automobile model year and safety features, drinking, and other behavioral issues are among the possible culprits.

The Post points out that self-driving vehicles will make the disparity even greater in the near term (assuming self-driving cars are safer than human-driven cars). They do not highlight that this, too, is a good thing. Fewer deaths are better, even if the reduction comes at the higher end of the educational distribution.

My hope, though, is that self-driving cars become so ubiquitous so quickly that the disparity goes to zero, sooner rather than later.

H/T Tyler Cowen


Originally published at www.davidincalifornia.com on October 2, 2015.