Ford is testing its autonomous vehicles in a simulated city in Michigan, named "Mcity".
"Every mile driven there can represent 10, 100 or 1,000 miles of on-road driving in terms of our ability to pack in the occurrences of difficult events."
Of course, note that this is similar to the difference between testing in a test harness and testing in the real world, where users and the environment do crazy things that the test designers never imagined.
Nonetheless, it's an advantage that Michigan has over Silicon Valley when it comes to developing products for the non-digital world. Mcity is 32 acres dedicated to autonomous vehicle testing, and that kind of acreage is hard to come by in Silicon Valley.
With more states embracing autonomous cars and the hype surrounding next-stage vehicles increasing exponentially, Nevada wants to protect its lead on autonomous testing.
"The worst thing would be for California, sort of the birth state of this technology, to accidentally sort of shut things down," Sarah Hunter, public policy director at the experimental lab Google spun off to focus on ambitious projects such as self-driving cars and Internet-beaming balloons, said at a public presentation in September.
Texas:
Over the summer, Google expanded its road testing from Silicon Valley to Texas, where state law would not prohibit cars without pedals and a wheel. Some within California's DMV wondered whether Google's move was motivated by frustration with its home state.
Why? Because Tesla is moving in and apartment owners anticipate an increase in demand.
This is ironic, since the long-run effect of self-driving cars will be to greatly expand feasible commuting distances and thus lower demand in most (all?) locations.
One of the dreams of autonomous vehicles is the possibility of inter-vehicular communication, well beyond what is currently possible.
For example, when a stoplight turns green, all of the cars waiting in line could accelerate at the same time, having communicated that it is safe to do so. Contrast this with human drivers, each of whom must watch for the acceleration of the next driver, before accelerating their own vehicles.
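The gain from coordinated launches can be made concrete with a toy back-of-the-envelope model (my own illustration, not from the article): human drivers form a chain of reaction delays, while connected cars can all start on the same signal.

```python
# Toy model (an illustrative assumption, not from the article): in a queue
# of human-driven cars, each driver starts moving only after reacting to
# the car ahead, so start delays accumulate down the line. Connected cars
# that share a "safe to go" signal can all start at once.

def last_car_start_delay(n_cars, reaction_s=1.0, coordinated=False):
    """Seconds until the n-th car in the queue begins to accelerate."""
    if coordinated:
        return 0.0  # every car receives the green-light signal simultaneously
    # each of the (n-1) trailing drivers waits one reaction time in turn
    return (n_cars - 1) * reaction_s

print(last_car_start_delay(10))                   # 9.0 s for the tenth human driver
print(last_car_start_delay(10, coordinated=True)) # 0.0 s with V2V coordination
```

The one-second reaction time is just a placeholder parameter; the point is that human start-up delay grows linearly with queue length while the coordinated case is flat.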
However, there is some level of human-to-human driver interaction, and autonomous vehicles may have trouble coping with it.
Think, for example, of arriving at a four-way stop at the same time as another car.
Theoretically, when cars arrive simultaneously, the car on the right has the right-of-way.
Practically, however, cars never arrive exactly simultaneously, and nobody pays attention to the yield-to-the-right rule anyway. Usually, one driver takes the initiative, or perhaps one driver waves another driver forward, ceding the right-of-way.
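The textbook protocol itself is simple enough to encode; what follows is my own toy sketch of the formal rule (first to arrive goes first; on a tie, yield to the vehicle on your right), which is exactly the part that rarely matters in practice.

```python
# Toy encoding (my own illustration) of the textbook four-way-stop rule:
# the first vehicle to arrive proceeds first; when arrivals tie, a driver
# yields to the vehicle on their right.

def has_right_of_way(my_arrival_s, other_arrival_s, other_is_on_my_right):
    """Return True if I may proceed before the other vehicle."""
    if my_arrival_s != other_arrival_s:
        return my_arrival_s < other_arrival_s  # first come, first served
    return not other_is_on_my_right            # tie: yield to the right

print(has_right_of_way(1.0, 2.0, True))  # True: I arrived first
print(has_right_of_way(1.0, 1.0, True))  # False: tie, the car on my right goes
```

The hard part for an autonomous vehicle is that real intersections run on the informal layer (waves, eye contact, creeping forward), not on this clean decision rule.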
Intersections present a particular challenge, said Melissa Cefkin, who is based at Nissan's Silicon Valley research centre.
"Sometimes drivers communicate between themselves and with pedestrians or cyclists directly, by swapping looks, with a hand gesture, or even verbally," she said.
"Sometimes it's interpretative: we look for signals while judging the vehicle's speed and movements."
The tiny pointers that motorists pick up from one another are not yet within the reach of the technology.
"Currently, the machine isn't capable of grasping all the subtlety of these clues," Cefkin said.
Increasingly, it looks like one of the sticking points for driverless cars will be the situations in which they have to interact with human drivers.
This isn't that surprising. Studies show that drivers are safer when they violate the speed limit to keep up with traffic than when they adhere to the limit and travel at a different speed from everyone else.
The interactions between human and computer drivers seem like a variation on that.
A number of stories have recently surfaced, positing that Tesla will have to burn a lot of cash to stay in the auto manufacturing business:
Tesla Motors Inc. will continue to burn through large amounts of cash in its quest to become a bigger car maker, and Wall Street may be underestimating how much spending is still to come, analysts at Barclays said in a note Friday.
Tesla doesn't have a good track record in spending efficiently, and its business strategy will keep it a capital-intensive company, the analysts said. They estimated Tesla, which has consistently lost money, will go through $11 billion in capital spending over the next five years.
This, of course, contrasts with Google's business model, which is to focus on software and leave the manufacturing to others.
I've always wondered why the price of (standard, human-driven) cars hasn't fallen further. What are the costs of making a car? This Quora answer is short, so I'll post it in its entirety:
OEMs (e.g. Ford, GM, VW etc) do not make car parts. What they do is the final assembly at their JIT [DS: I assume this stands for Just-In-Time] plants.
So the basic costs associated with making a vehicle are:
-payments to auto parts suppliers (overhead console, flooring, door panels, electric wires - pretty much everything :) )
-payments toward auto part makers' investments (moulds, stamping machines, etc.)
-logistic costs
-SG&A of an OEM
Diving in a little further, Ford's most recent Form 10-K shows that ~88% of their costs fall under "Automotive cost of sales", which is accounting-speak for the costs of producing cars. Actually, that probably understates the case a bit, because Ford also has a small financial arm, and some of the remaining costs are attributable to that.
Of course, "Automotive cost of sales" encompasses the first three bullet points above, and the 10-K doesn't have enough information (at least upon a quick scan) to break down the costs further.
It would be interesting to know more about where Tesla has the opportunity to wring costs out of the system.
A Google self-driving car was pulled over for driving too slowly.
Mostly this is just funny. But it does raise questions about how driving incentives (as opposed to just skill) will differ when humans cede control to machines, particularly machines programmed by other people.
Maybe I want to speed in order to get to a meeting, but Google doesn't particularly want me to do that. Who gets final say?
Think of this as Uber diversifying its risks on the margin of mapping. Now Uber partners with Alphabet/Google, Apple, and TomTom.
It also highlights the complicated relationships at the automobile-technology intersection, particularly when it comes to giant companies like Alphabet and Apple.
A few weeks ago, I published a Friends and Enemies matrix, laying out the landscape for autonomous vehicles.
In that matrix, I marked Uber as friends with Apple and Alphabet/Google.
Maybe that isn't quite right.
I was thinking largely of Apple and Google as potential autonomous technology suppliers to Uber's car network.
However, both Apple and Google touch Uber at several different points, which complicates the relationships between the companies. It's certainly conceivable that Uber could have a positive relationship with one division of Apple or Google, and an acrimonious, competitive relationship with another division.
Autonomous Driving: All three companies are developing self-driving technology, but for different reasons. Uber is motivated to lower the costs and increase the scalability of its transportation network. Apple is looking to sell vehicles. Google would like to become the operating system of all vehicles.
Mapping: Uber utilizes mapping technology provided by Apple and Google, and now by TomTom, as well. There are also reports of Uber starting its own mapping effort.
Mobile OS: Uber relies exclusively on Apple's iOS and Alphabet's Android for Uber customers to hail rides. Ditto for Uber driver apps, which are also where the mapping comes in (at least for now).
There are probably a few other margins along which Google, and maybe Apple, touch Uber. I wouldn't be surprised if Uber uses Apple MacBooks to do work powered by Google Apps for Business. It makes for very complex relationships.
Also, as with my note about NVIDIA yesterday, it's a little hard to figure out when to use "Alphabet" and when to use "Google". Maybe that will clarify over time.
NVIDIA bills it as "a supercomputer on a module that's the size of a credit card".
NVIDIA is targeting the unit principally at autonomous vehicles, and also at medical imaging, which presumably tackles a lot of similar computer vision issues.
The last few years have seen a deceleration in the mobile phone market, as phone manufacturers and app developers have had a harder time figuring out how to improve the smartphone.
I think we will see the converse in the autonomous vehicle market, and the Jetson TX1 is an example of that. In the robotics market, there is a lot more room for improvement, and a greater number of currently-binding technological constraints that can be relaxed.
As a side note, I always waffle on how to spell NVIDIA, which can appear in the press as "NVIDIA", "Nvidia", "nVidia", or "nVIDIA". Since NVIDIA's own website seems to be leaning toward the "NVIDIA" styling, I'll go with that.
In Wired, Alex Davies compares the self-driving approaches of Google and Ford, and finds them philosophically similar.
Davies compares the two companies' approaches in light of the NHTSA definition of autonomous driving. The NHTSA definition is lengthy, but Wikipedia has a concise summary:
Level 0: The driver completely controls the vehicle at all times.
Level 1: Individual vehicle controls are automated, such as electronic stability control or automatic braking.
Level 2: At least two controls can be automated in unison, such as adaptive cruise control in combination with lane keeping.
Level 3: The driver can fully cede control of all safety-critical functions in certain conditions. The car senses when conditions require the driver to retake control and provides a "sufficiently comfortable transition time" for the driver to do so.
Level 4: The vehicle performs all safety-critical functions for the entire trip, with the driver not expected to control the vehicle at any time. As this vehicle would control all functions from start to stop, including all parking functions, it could include unoccupied cars.
According to Davies, Level 3 presents significant challenges not present at any other level. Those challenges relate to on-the-fly communication between the driver and the car. Plausibly enough, if the car reaches its limits and needs to pass control to the driver in an emergency, that can be pretty dicey.
Audi says its tests show it takes an average of 3 to 7 seconds, and as long as 10, for a driver to snap to attention and take control, even with flashing lights and verbal warnings.
A lot can happen in that time (a car traveling 60 mph covers 88 feet per second), and automakers have different ideas for solving this problem. Audi has an elegant, logical human-machine interface (HMI). Volvo is creating its own HMI, and says it will accept full liability for its cars while in autonomous mode.
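The arithmetic behind that "a lot can happen" claim is easy to check: converting 60 mph to feet per second and multiplying by Audi's reported takeover times gives the distance the car travels while the driver is still snapping to attention.

```python
# Sanity-check the handover arithmetic: distance covered at 60 mph during
# Audi's reported 3-to-7-second (up to 10-second) takeover window.

FT_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def feet_traveled(mph, seconds):
    """Distance in feet covered at a constant speed over a time window."""
    return mph * FT_PER_MILE / SECONDS_PER_HOUR * seconds

print(feet_traveled(60, 1))   # 88.0 ft/s, matching the figure in the text
for t in (3, 7, 10):
    print(t, feet_traveled(60, t))  # 264, 616, and 880 feet of blind travel
```

So even the best-case 3-second handover means nearly a football field of travel before the driver is back in control; the 10-second worst case is almost three.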
Google's opting out of this dilemma. So is Ford.
Perhaps the incrementalist approach is not a winner, after all.