Mcity

Ford is testing its autonomous vehicles in a simulated city in Michigan, named “Mcity”.

“Every mile driven there can represent 10, 100 or 1,000 miles of on-road driving in terms of our ability to pack in the occurrences of difficult events.”

Of course, this is similar to the difference between testing in a test harness and testing in the real world, where users and the environment do crazy things that the test designers never imagined.

Nonetheless, it’s an advantage that Michigan has over Silicon Valley when it comes to developing products for the non-digital world. Mcity is 32 acres dedicated to autonomous vehicle testing, and that kind of acreage is hard to come by in Silicon Valley.


Originally published at www.davidincalifornia.com on November 17, 2015.

Federalism

Nevada:

With more states embracing autonomous cars and the hype surrounding next-stage vehicles increasing exponentially, Nevada wants to protect its lead on autonomous testing.

California:

“The worst thing would be for California, sort of the birth state of this technology, to accidentally sort of shut things down,” Sarah Hunter, public policy director at the experimental lab Google spun off to focus on ambitious projects such as self-driving cars and Internet-beaming balloons, said at a public presentation in September.

Texas:

Over the summer, Google expanded its road testing from Silicon Valley to Texas, where state law would not prohibit cars without pedals and a steering wheel. Some within California’s DMV wondered whether Google’s move was motivated by frustration with its home state.


Originally published at www.davidincalifornia.com on November 16, 2015.

Inter-Vehicular Communication

One of the dreams of autonomous vehicles is the possibility of inter-vehicular communication, well beyond what is currently possible.

For example, when a stoplight turns green, all of the cars waiting in line could accelerate at the same time, having communicated that it is safe to do so. Contrast this with human drivers, each of whom must watch for the acceleration of the next driver, before accelerating their own vehicles.
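To make the contrast concrete, here is a toy sketch of the two regimes. Every number in it is an illustrative assumption of mine (the one-second per-driver reaction delay especially), not a measurement from the post:

```python
# Toy comparison of intersection clearance: human drivers each wait for the
# car ahead to move before accelerating, while V2V-coordinated cars can all
# start together. All numbers are illustrative assumptions.

HUMAN_REACTION_S = 1.0  # assumed delay while each driver watches the car ahead

def start_times(n_cars: int, per_car_delay: float) -> list[float]:
    """Time (seconds after green) at which each queued car begins to move."""
    return [i * per_car_delay for i in range(n_cars)]

human = start_times(10, HUMAN_REACTION_S)  # staggered, one driver at a time
coordinated = start_times(10, 0.0)         # simultaneous, communicated start

print(f"last human-driven car moves at t={human[-1]:.1f} s")
print(f"last coordinated car moves at t={coordinated[-1]:.1f} s")
```

Even with a ten-car queue, the coordinated platoon clears the delay entirely; the human queue loses nine seconds to serial reaction times alone.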

However, there is some level of human-to-human driver interaction, and autonomous vehicles may have trouble coping with it.

Think, for example, of arriving at a four-way stop at the same time as another car.

Theoretically, when cars arrive simultaneously, the car on the right has the right-of-way.

Practically, however, cars never arrive exactly simultaneously, and nobody pays attention to the yield-to-the-right rule anyway. Usually, one driver takes the initiative, or perhaps one driver waves another forward, ceding the right-of-way.
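A machine, of course, needs the tiebreaker spelled out explicitly. Here is a minimal sketch of what that might look like, using the common US yield-to-the-right convention; the class, the function, and the half-second "simultaneous" tolerance are all my own assumptions for illustration:

```python
from dataclasses import dataclass

# Hypothetical tie-breaking logic for two cars at a four-way stop.
# Headings are degrees clockwise from north: 0 = northbound, 90 = eastbound.

@dataclass
class Car:
    name: str
    arrival_time: float  # seconds, on a shared clock
    heading: int         # direction of travel, degrees clockwise from north

def has_right_of_way(a: Car, b: Car, tolerance: float = 0.5) -> bool:
    """True if car `a` may proceed before car `b`."""
    # A clearly earlier arrival wins outright.
    if abs(a.arrival_time - b.arrival_time) > tolerance:
        return a.arrival_time < b.arrival_time
    # Effectively simultaneous: yield to the vehicle on your right.
    # b is on a's right if b approaches from 90 degrees clockwise of a,
    # i.e., b's heading is 270 degrees clockwise of a's heading.
    b_is_on_a_right = (b.heading - a.heading) % 360 == 270
    return not b_is_on_a_right
```

The point of the sketch is how brittle it is: real drivers resolve the tie with eye contact and a wave, which no clause of this function captures.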

According to Melissa Cefkin at Nissan, these situations are tricky for computers to navigate:

Intersections present a particular challenge, said Melissa Cefkin, who is based at Nissan’s Silicon Valley research centre.

“Sometimes drivers communicate between themselves and with pedestrians or cyclists directly, by swapping looks, with a hand gesture, or even verbally,” she said.

“Sometimes it’s interpretative: we look for signals while judging the vehicle’s speed and movements.”

The tiny pointers that motorists pick up from one another are not yet within the reach of the technology.

“Currently, the machine isn’t capable of grasping all the subtlety of these clues,” Cefkin said.

Increasingly, it looks like one of the sticking points for driverless cars will be the situations in which they have to interact with human drivers.

This isn’t that surprising. Studies show that drivers are safer exceeding the speed limit to keep up with traffic than adhering to the limit while moving at a different speed than everyone else.

The interaction between human and computer drivers seems like a variation on that.


Originally published at www.davidincalifornia.com on November 15, 2015.

The Cost of Being an OEM

A number of stories have recently surfaced, positing that Tesla will have to burn a lot of cash to stay in the auto manufacturing business:

Tesla Motors Inc. will continue to burn through large amounts of cash in its quest to become a bigger car maker, and Wall Street may be underestimating how much spending is still to come, analysts at Barclays said in a note Friday.

Tesla doesn’t have a good track record in spending efficiently, and its business strategy will keep it a capital-intensive company, the analysts said. They estimated Tesla, which has consistently lost money, will go through $11 billion in capital spending over the next five years.

This, of course, contrasts with Google’s business model, which is to focus on software and leave the manufacturing to others.

I’ve always wondered why the price of (standard, human-driven) cars hasn’t fallen further. What are the costs of making a car? This Quora answer is short, so I’ll post it in its entirety:

OEMs (e.g. Ford, GM, VW etc) do not make car parts. What they do is the final assembly at their JIT [DS: I assume this stands for Just-In-Time] plants.

So the basic costs associated making a vehicle are:

-payments to auto parts suppliers (overhead console, flooring, door panels, electric wires-pretty much everything 🙂 )

-payments toward auto parts makers’ investments (moulds and stamping machines etc)

-logistic costs

-SG&A of an OEM

Diving in a little further, Ford’s most recent Form 10-K shows that ~88% of their costs fall under “Automotive cost of sales”, which is accounting-speak for the costs of producing cars. Actually, that probably understates the case a bit, because Ford also has a small financial arm, and some of the remaining costs are attributable to that.

Of course, “Automotive cost of sales” encompasses the first three bullet points above, and the 10-K doesn’t have enough information (at least upon a quick scan) to break down the costs further.

It would be interesting to know more about where Tesla has the opportunity to wring costs out of the system.


Originally published at www.davidincalifornia.com on November 14, 2015.

Google Wins the Internet

A Google self-driving car was pulled over for driving too slowly.

Mostly this is just funny. But it does raise questions about how driving incentives (as opposed to just skill) will differ when humans cede control to machines — particularly machines programmed by other people.

Maybe I want to speed in order to get to a meeting, but Google doesn’t particularly want me to do that. Who gets final say?


Originally published at www.davidincalifornia.com on November 13, 2015.

Uber and TomTom

Uber and TomTom just inked a partnership.

Think of this as Uber diversifying its risk on the margin of mapping. Uber now partners with Alphabet/Google, Apple, and TomTom.

It also highlights the complicated relationships at the automobile-technology intersection, particularly when it comes to giant companies like Alphabet and Apple.

A few weeks ago, I published a Friends and Enemies matrix, laying out the landscape for autonomous vehicles.

In that matrix, I marked Uber as friends with Apple and Alphabet/Google.

Maybe that isn’t quite right.

I was thinking largely of Apple and Google as potential autonomous technology suppliers to Uber’s car network.

However, both Apple and Google touch Uber at several different points, which complicates the relationships between the companies. It’s certainly conceivable that Uber could have a positive relationship with one division of Apple or Google, and an acrimonious, competitive relationship with another.

Autonomous Driving: All three companies are developing self-driving technology, but for different reasons. Uber is motivated to lower the costs and increase the scalability of its transportation network. Apple is looking to sell vehicles. Google would like to become the operating system of all vehicles.

Mapping: Uber utilizes mapping technology provided by Apple and Google, and now by TomTom, as well. There are also reports of Uber starting its own mapping effort.

Mobile OS: Uber relies exclusively on Apple’s iOS and Alphabet’s Android for Uber customers to hail rides. Ditto for Uber driver apps, which are also where the mapping comes in (at least for now).

There are probably a few other margins along which Google, and maybe Apple, touches Uber. I wouldn’t be surprised if Uber uses Apple MacBooks to do work powered by Google Apps for Business. It makes for very complex relationships.

Also, and like my note about NVIDIA yesterday, it’s a little hard to figure out when to use “Alphabet” and when to use “Google”. Maybe that will clarify over time.


Originally published at www.davidincalifornia.com on November 12, 2015.

NVIDIA Jetson TX1

NVIDIA recently announced the new Jetson TX1 unit.

They bill it as “a supercomputer on a module that’s the size of a credit card”.

NVIDIA is targeting the unit principally at autonomous vehicles, and also medical imaging, which presumably tackles a lot of similar computer vision issues.

The last few years have seen a deceleration in the mobile phone market, as phone manufacturers and app developers have had a harder time figuring out how to improve the smartphone.

I think we will see the converse in the autonomous vehicle market, and the Jetson TX1 is an example of that. In the robotics market, there is a lot more room for improvement, and a greater number of currently-binding technological constraints that can be relaxed.

As a side note, I always waffle on how to spell NVIDIA, which can appear in the press as “NVIDIA”, “Nvidia”, “nVidia”, or “nVIDIA”. Since NVIDIA’s own website seems to lean toward the “NVIDIA” styling, I’ll go with that.


Originally published at www.davidincalifornia.com on November 11, 2015.

Human-Machine Interaction

In Wired, Alex Davies compares the self-driving approaches of Google and Ford, and finds them philosophically similar.

Davies compares the two companies’ approaches in light of the NHTSA definition of autonomous driving. The NHTSA definition is lengthy, but Wikipedia has a concise summary:

In the United States, the National Highway Traffic Safety Administration (NHTSA) has proposed a formal classification system:

Level 0: The driver completely controls the vehicle at all times.

Level 1: Individual vehicle controls are automated, such as electronic stability control or automatic braking.

Level 2: At least two controls can be automated in unison, such as adaptive cruise control in combination with lane keeping.

Level 3: The driver can fully cede control of all safety-critical functions in certain conditions. The car senses when conditions require the driver to retake control and provides a “sufficiently comfortable transition time” for the driver to do so.

Level 4: The vehicle performs all safety-critical functions for the entire trip, with the driver not expected to control the vehicle at any time. As this vehicle would control all functions from start to stop, including all parking functions, it could include unoccupied cars.

According to Davies, Level 3 presents significant challenges not present at any other level. Those challenges relate to on-the-fly communication between the driver and the car. Plausibly enough, if the car reaches its limits and needs to pass control to the driver in an emergency, that can be pretty dicey.

Audi says its tests show it takes an average of 3 to 7 seconds, and as long as 10, for a driver to snap to attention and take control, even with flashing lights and verbal warnings.

A lot can happen in that time — a car traveling 60 mph covers 88 feet per second — and automakers have different ideas for solving this problem. Audi has an elegant, logical human machine interface. Volvo is creating its own HMI, and says it will accept full liability for its cars while in autonomous mode.
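The arithmetic behind that "a lot can happen" is worth spelling out. A quick back-of-envelope sketch, using only the figures quoted above (60 mph, and Audi's 3-to-10-second takeover range):

```python
# Back-of-envelope check: distance covered during the driver-takeover delay.
# 60 mph = 60 * 5280 ft / 3600 s = 88 ft/s, matching the figure in the text.

FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def distance_covered(speed_mph: float, seconds: float) -> float:
    """Feet traveled at a constant speed while the driver snaps to attention."""
    feet_per_second = speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR
    return feet_per_second * seconds

# Audi's reported takeover times at highway speed:
for t in (3, 7, 10):
    print(f"{t} s at 60 mph -> {distance_covered(60, t):.0f} ft")
```

At the slow end of Audi's range the car travels 264 feet with nobody meaningfully in control; at the 10-second end it is 880 feet, well over two city blocks.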

Google is opting out of this dilemma by skipping Level 3 entirely. So is Ford.

Perhaps the incrementalist approach is not a winner, after all.


Originally published at www.davidincalifornia.com on November 10, 2015.