From Academia to the Auto Industry

Baidu recently announced that it will release a mass-market autonomous vehicle by 2021, a shift from its previously stated intention of building self-driving buses limited to well-defined routes.

Interestingly, Baidu has invested in Uber and has expressed interest in ride-sharing partnerships. It also claims to already be testing its autonomous vehicles on roads in China.

One angle of the Baidu story that is especially interesting to me is its employment of Andrew Ng as chief scientist and one of the leaders of its autonomous vehicle effort.

Ng has a lot of accomplishments under his belt for a 40-year-old. He earned tenure as a computer science professor at Stanford, co-founded the online learning company Coursera, and is now chief scientist at Baidu.

I took Ng’s machine learning course on Coursera, and it was terrific. He’s a great teacher. But, as I understand it, he has left academia behind to build production software at Baidu.

This is something of a trend. Google’s autonomous vehicle effort was built by Sebastian Thrun, another Stanford computer science professor, and Uber’s program was largely assembled by hiring away professors and scientists from Carnegie Mellon University’s vaunted robotics lab.

It’s rare for tenured professors to leave academia for industry, but it has now happened several times in autonomous vehicles. I can’t help but wonder if we’ll see more.

Self-Driving Racecar

For years, Stanford’s Chris Gerdes has been working with students to build a self-driving race car.

The car recently hit speeds of 120 mph at Thunderhill Raceway in Willows, California, and the video shows what it looks like to have a car weave around a track with nobody at the wheel.

Of course, a racetrack lacks many of the variables and obstacles that cars encounter in real life. But raw performance is important, particularly since I dream of one day commuting in self-driving cars at 300 mph 🙂

Autonomous Driving Ethics

Chris Gerdes, the Stanford engineering professor behind the driverless race car above, is doing some pretty neat technological work, but he’s made the press recently for a more philosophical reason: the ethics of driverless cars.

The Bloomberg piece doesn’t do a terrific job of raising the different ethical issues that might arise for a robot driver, but it gets the ball rolling, and it’s not hard to imagine more from there:

Take that double-yellow line problem. It is clear that the car should cross it to avoid the road crew. Less clear is how to go about programming a machine to break the law or to make still more complex ethical calls.

One potential dilemma, for example, is how to program for the famous trolley problem. If a computer has to choose between staying on course and killing five people, or veering off the road and killing one pedestrian, what do we program it to do?

What if it’s a 25% chance of killing five people against the certainty of killing one person?
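
To make that second dilemma concrete: a naive utilitarian controller might simply compare expected fatalities, in which case a 25% chance of killing five people works out to 0.25 × 5 = 1.25 expected deaths, worse than the one certain death. Here is a minimal sketch of that arithmetic in Python; the function and values are hypothetical illustrations of the dilemma, not anything drawn from a real vehicle’s software:

```python
# Purely illustrative: a naive utilitarian rule that compares candidate
# maneuvers by expected fatalities. Nothing here reflects any real
# vehicle's decision logic; the names and numbers are hypothetical.

def expected_fatalities(probability: float, deaths: int) -> float:
    """Expected number of deaths for an outcome with the given probability."""
    return probability * deaths

# The two maneuvers from the scenario above.
stay_on_course = expected_fatalities(0.25, 5)  # 25% chance of killing five -> 1.25
veer_off_road = expected_fatalities(1.0, 1)    # certain death of one pedestrian -> 1.0

# A pure expected-value rule picks the maneuver with fewer expected deaths,
# so here it would veer (1.0 < 1.25), guaranteeing a death that a gambler
# might have avoided entirely.
choice = "veer" if veer_off_road < stay_on_course else "stay on course"
print(choice, stay_on_course, veer_off_road)
```

Nobody seriously suggests that expected-value arithmetic settles the question, of course; the point is that whatever rule gets programmed in, it encodes an ethical stance.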

These are pretty extreme examples, but even more mundane decisions aren’t entirely clear. Should driverless cars adhere rigidly to the speed limit? To 5 mph in parking garages?

What if another driver motions at the car to proceed out of order through a stop sign?

What about a late merge that requires crossing a solid white line?

I don’t expect these to be insurmountable issues, but they will make explicit the extent to which we implicitly tolerate violations of our traffic laws.


Originally published at www.davidincalifornia.com on October 12, 2015.