Google and Ford

Google and Ford are going to build self-driving cars together!

Or so say several unnamed sources.

The news is apparently leaking in advance of the Consumer Electronics Show in January, where the announcement will be made official.

Ford has been testing self-driving cars at Mcity, the University of Michigan's purpose-built mock city, so they are an obvious choice for Google.

It will be interesting to see whether Ford manages to lock up Google’s software — similar to the way AT&T locked up the iPhone for several years after launch — or whether Ford is merely one of many partners with whom Google works.


Originally published at www.davidincalifornia.com on December 22, 2015.

Wanted: LIDAR Engineer

Business Insider reports that Google is hiring a LIDAR engineer.

Although BI reports this breathlessly as, “Google and parent company Alphabet are not leaving such a key ingredient in someone else’s hands,” it seems a little less earth-shattering than that. To be fair, BI eventually concedes that possibility, as well.

Any large organization that depends on suppliers for key parts is going to have internal specialists in those parts. When I worked in AOL’s data center, years ago, we had all sorts of routing and switching engineers, despite the fact that AOL never had any desire to build its own routing and switching hardware.

I think the more intriguing, if less newsworthy, conclusion here is that Google increasingly looks like it’s going to have a real autonomous vehicle business, and not just a moon-shot lab project.


Originally published at www.davidincalifornia.com on December 14, 2015.

Google Wins the Internet

A Google self-driving car was pulled over for driving too slowly.

Mostly this is just funny. But it does raise questions about how driving incentives (as opposed to just skill) will differ when humans cede control to machines — particularly machines programmed by other people.

Maybe I want to speed in order to get to a meeting, but Google doesn’t particularly want me to do that. Who gets final say?


Originally published at www.davidincalifornia.com on November 13, 2015.

Human-Machine Interaction

In Wired, Alex Davies compares the self-driving approaches of Google and Ford, and finds them philosophically similar.

Davies compares the two companies’ approaches in light of the NHTSA definition of autonomous driving. The NHTSA definition is lengthy, but Wikipedia has a concise summary:

In the United States, the National Highway Traffic Safety Administration (NHTSA) has proposed a formal classification system:

Level 0: The driver completely controls the vehicle at all times.
Level 1: Individual vehicle controls are automated, such as electronic stability control or automatic braking.
Level 2: At least two controls can be automated in unison, such as adaptive cruise control in combination with lane keeping.
Level 3: The driver can fully cede control of all safety-critical functions in certain conditions. The car senses when conditions require the driver to retake control and provides a “sufficiently comfortable transition time” for the driver to do so.
Level 4: The vehicle performs all safety-critical functions for the entire trip, with the driver not expected to control the vehicle at any time. As this vehicle would control all functions from start to stop, including all parking functions, it could include unoccupied cars.

According to Davies, Level 3 presents significant challenges not present at any other level. Those challenges relate to on-the-fly communication between the driver and the car. Plausibly enough, if the car reaches its limits and needs to pass control to the driver in an emergency, that can be pretty dicey.

Audi says its tests show it takes an average of 3 to 7 seconds, and as long as 10, for a driver to snap to attention and take control, even with flashing lights and verbal warnings.

A lot can happen in that time — a car traveling 60 mph covers 88 feet per second — and automakers have different ideas for solving this problem. Audi has an elegant, logical human-machine interface. Volvo is creating its own HMI, and says it will accept full liability for its cars while in autonomous mode.
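The back-of-the-envelope arithmetic here is worth spelling out. A minimal sketch, using only the figures quoted above (60 mph and Audi’s reported 3-to-10-second handoff times):

```python
# How far a car travels while the driver "snaps to attention."
# Values are the ones quoted in the post, not new data.

MPH_TO_FTPS = 5280 / 3600  # feet per mile / seconds per hour ≈ 1.467

def distance_traveled_ft(speed_mph: float, seconds: float) -> float:
    """Feet covered at a constant speed over a handoff delay."""
    return speed_mph * MPH_TO_FTPS * seconds

speed = 60  # mph; 60 * 5280 / 3600 = 88 ft/s
for delay in (3, 7, 10):  # Audi's reported handoff times, in seconds
    print(f"{delay}s handoff at {speed} mph: "
          f"{distance_traveled_ft(speed, delay):.0f} ft")
```

At the long end of Audi’s range, that is 880 feet of roadway, nearly three football fields, covered before the driver is fully back in the loop.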

Google’s opting out of this dilemma. So is Ford.

Perhaps the incrementalist approach is not a winner, after all.


Originally published at www.davidincalifornia.com on November 10, 2015.

Hybrid Search

One of the revelations from CS373: Artificial Intelligence for Robotics is the extent to which autonomous driving technology uses a hybrid of global mapping and local sensing.

So, for example, if a car wants to drive from Los Angeles to San Francisco, it basically outsources the mapping function to Google Maps and only uses local computation for visual-horizon driving.

This simplifies the software and allows the robotics wizards to focus just on local issues.

It’s one of those things that’s obvious once it’s explained but kind of revelatory.
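The division of labor can be sketched in a few lines. This is purely illustrative — `fetch_route` and `LocalPlanner` are hypothetical names, not any real API — but it captures the split: coarse waypoints come from an off-board mapping service, while on-board computation only worries about what the sensors can currently see.

```python
# Hypothetical sketch of the global/local hybrid described above.

def fetch_route(origin: str, destination: str) -> list[str]:
    """Stand-in for a mapping service (e.g. Google Maps): returns
    coarse waypoints for the whole trip, computed off-board."""
    return [origin, "highway on-ramp", "highway exit", destination]

class LocalPlanner:
    """On-board planner: steers toward the next waypoint using only
    what local sensors report within the visual horizon."""

    def next_action(self, waypoint: str, obstacles: list[str]) -> str:
        if obstacles:
            return f"avoid {obstacles[0]}, then continue toward {waypoint}"
        return f"drive toward {waypoint}"

route = fetch_route("Los Angeles", "San Francisco")
planner = LocalPlanner()
# The car consumes the global route one waypoint at a time;
# everything beyond the next waypoint stays the mapper's problem.
print(planner.next_action(route[1], obstacles=["stalled truck"]))
```

The design point is that neither side needs the other’s hard problem: the mapper never sees an obstacle, and the local planner never plans past its horizon.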


Originally published at www.davidincalifornia.com on October 13, 2015.

Sergey Brin

Yesterday, Sergey Brin made a surprise appearance at a press event for Google’s self-driving car.

USA Today doesn’t report any notable quotes or announcements, but this is still a great sign for self-driving cars.

Brin is famously press-averse, so seeing him lavish this much attention on the automotive group, and on a press event no less, can only help focus resources on this area.


Originally published at www.davidincalifornia.com on September 30, 2015.

The Vehicular Turing Test

CNET has a mostly speculative article reporting that, in certain cases, Google is training its self-driving software to behave more like human drivers.

But which rules of the road is Google prepared to break and which ones will be all too much for its righteous soul? It will now cross double-yellow lines to avoid a car that’s, say, double-parked and blocking its path.

This is an interesting variant on the Turing Test. And do we even want self-driving cars to pass?

After all, to drive in a manner indistinguishable from a human probably means allowing for some unnecessary probability of fatal accident.

There is a wide range of driving ability among humans. If self-driving cars are indistinguishable from the best human drivers, how much less safe are they than if they are programmed to drive perfectly?


Originally published at www.davidincalifornia.com on September 29, 2015.