A British automotive insurer has offered to reduce insurance premiums by 5% for drivers who turn on Autopilot. The insurer, Direct Line, says it doesn't yet actually know with certainty whether Autopilot makes cars safer.
Direct Line said it was too early to say whether the use of the autopilot system produced a safety record that justified lower premiums. It said it was charging less to encourage use of the system and aid research.
But I have to imagine Direct Line believes Autopilot will make cars safer, even if it doesn't know that for sure. After all, they're not offering 5% off to customers who drive blindfolded, on the theory that they need more research on that topic.
Although Direct Line is a UK company, the financial angle on autonomous systems ties in closely with tactics that the US government has used in the past. Famously, the federal government did not directly mandate a drinking age of 21, but rather tied federal highway funds to whether states raised their drinking age to 21.
I can imagine a future scenario in which the government doesn't mandate the use of autonomous vehicles, but rather a combination of governmental and insurance incentives pushes drivers gently, or not so gently, toward taking their hands off the wheel.
Next month I'll be checking off a common bucket list item by visiting Michigan in January. Most people go for the weather, but I in fact am going for the North American International Auto Show.
I tease, of course, but I truly am excited to be heading back to Motor City, and especially for America's largest auto show.
On Wednesday, January 17, I'll be speaking on a panel at Automobili-D, the tech section of the show, and I'll be in town with some Udacity colleagues through the weekend.
Drop me a note at david.silver@udacity.com and I'd love to say hello. It's always amazing to head to the center of the automotive world. In many ways it reminds me of how cool it was to visit Silicon Valley when I was a software engineer in Virginia, living outside the center of the software world.
We'll be holding at least one and maybe a few events for Udacity students, potential students, and partners, and I'll be announcing those here as we nail them down.
That's gotta be a rough headline for Patriots fans.
For years, autonomous vehicle development in the US has happened primarily in three locations: Detroit, Silicon Valley, and Pittsburgh.
Detroit because it's the center of the US automotive industry, Silicon Valley because it's the center of the US technology industry, and Pittsburgh because... why?
But further inspection shows Boston potentially has a more robust autonomous vehicle industry than Pittsburgh. Indeed.com shows essentially all Pittsburgh's autonomous vehicle jobs coming from three companies: Aptiv, Argo, and Uber.
On the other hand, Boston's autonomous vehicle jobs come from: Square Robot, Liberty Mutual, nuTonomy, Draper, MathWorks, Aurora, Optimus Ride, Lux Research, and the list goes on. That's a diversified and presumably robust jobs base. Plus, Aptiv just announced a new Boston-based autonomous technology center.
Computer vision is how we use cameras to see the road. Humans demonstrate the power of vision by handling a car with basically just two eyes and a brain. For a self-driving car, we can use camera images to find lane lines, or track other vehicles on the road.
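To make that concrete, here's a minimal sketch of one classic approach: run a camera frame through Canny edge detection and a Hough transform with OpenCV to pull out candidate lane-line segments. The thresholds, region-of-interest corners, and file name below are illustrative assumptions, not values from any production system.

```python
import cv2
import numpy as np

def find_lane_lines(image_path):
    """Rough lane-line sketch: grayscale -> edges -> region of interest -> Hough lines."""
    img = cv2.imread(image_path)                      # dashcam frame (BGR)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)               # thresholds are guesses

    # Keep only a trapezoid roughly covering the lane ahead of the car.
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w // 2 - 50, h // 2 + 50),
                     (w // 2 + 50, h // 2 + 50), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    masked = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform returns line segments as (x1, y1, x2, y2).
    return cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=20,
                           minLineLength=40, maxLineGap=25)

# lines = find_lane_lines("dashcam_frame.jpg")        # hypothetical input file
```

In a real pipeline you would go further: average the segments into left and right lane lines, fit curves for curved roads, and handle shadows and changing light.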
Sensor fusion is how we integrate data from other sensors, like radar and lasers, together with camera data, to build a comprehensive understanding of the vehicle's environment. As good as cameras are, there are certain measurements, like distance or velocity, at which other sensors excel, and other sensors can work better in adverse weather, too. By combining all of our sensor data, we get a richer understanding of the world.
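As a toy illustration, here's a one-dimensional Kalman filter that fuses distance readings from two sensors with different noise levels, say a precise radar range and a noisier camera-based estimate. All the numbers are made up; a real fusion stack tracks full state vectors (position, velocity, heading) in several dimensions.

```python
class Simple1DFusion:
    """Toy 1D Kalman filter: fuse noisy distance readings from two sensors."""

    def __init__(self, initial_distance, initial_variance=10.0):
        self.x = initial_distance   # estimated distance to the lead car (m)
        self.p = initial_variance   # uncertainty of that estimate

    def predict(self, relative_velocity, dt, process_variance=1.0):
        # The gap changes with relative velocity; uncertainty grows over time.
        self.x += relative_velocity * dt
        self.p += process_variance

    def update(self, measurement, sensor_variance):
        # Standard Kalman update: weight the measurement by its reliability.
        k = self.p / (self.p + sensor_variance)
        self.x += k * (measurement - self.x)
        self.p *= (1.0 - k)
        return self.x

# Hypothetical readings: radar is precise on distance, the camera less so.
fusion = Simple1DFusion(initial_distance=30.0)
fusion.predict(relative_velocity=-1.0, dt=0.1)
fusion.update(measurement=29.8, sensor_variance=0.5)   # radar
fusion.update(measurement=31.0, sensor_variance=4.0)   # camera
print(round(fusion.x, 2))
```

The key property is that each sensor is weighted by how much we trust it, which is exactly what lets radar compensate for the camera in fog or at night.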
Localization is how we figure out where we are in the world, which is the next step after we understand what the world looks like. We all have cellphones with GPS, so it might seem like we know where we are all the time already. But in fact, GPS is only accurate to within about 1-2 meters. Think about how big 1-2 meters is! If a car were wrong by 1-2 meters, it could be off on the sidewalk hitting things. So we have much more sophisticated mathematical algorithms that help the vehicle localize itself to within 1-2 centimeters.
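A toy version of that idea is a one-dimensional histogram (Markov) filter: start with a uniform belief over map cells, multiply by how well each cell matches what the sensors observe, shift the belief as the car moves, and repeat. The tiny "map" and sensor probabilities below are invented purely for illustration.

```python
# Toy 1D Markov localization: the "map" is a row of cells, some with a landmark.
# Repeated sense/move cycles sharpen the belief far beyond raw GPS accuracy.
world = ['lane_marker', 'road', 'road', 'lane_marker', 'road']
belief = [0.2] * len(world)             # start fully uncertain
p_hit, p_miss = 0.8, 0.2                # assumed sensor model

def sense(belief, measurement):
    posterior = [b * (p_hit if world[i] == measurement else p_miss)
                 for i, b in enumerate(belief)]
    total = sum(posterior)
    return [p / total for p in posterior]

def move(belief, step):
    # Cyclic world for simplicity: shift the belief by `step` cells.
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

belief = sense(belief, 'lane_marker')   # camera spots a lane marker
belief = move(belief, 1)                # car advances one cell
belief = sense(belief, 'road')          # now it sees plain road
print([round(b, 3) for b in belief])
```

Production systems use the same sense/move logic, but with particle filters or scan matching against high-definition maps instead of a five-cell world.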
Path planning is the next step, once we know what the world looks like, and where in it we are. In the path planning phase, we chart a trajectory through the world to get where we want to go. First, we predict what the other vehicles around us will do. Then we decide which maneuver we want to take in response to those vehicles. Finally, we build a trajectory, or path, to execute that maneuver safely and comfortably.
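One standard ingredient in that last step is a jerk-minimizing polynomial trajectory: given the car's current position, velocity, and acceleration, plus a desired end state, solve for a quintic polynomial that connects them smoothly. Here's a short sketch; the start and end states at the bottom are made-up numbers.

```python
import numpy as np

def jerk_minimizing_trajectory(start, end, T):
    """
    Solve for s(t) = a0 + a1*t + ... + a5*t^5 connecting
    start = [position, velocity, acceleration] to end = [position, velocity,
    acceleration] over T seconds. Minimizing jerk keeps the ride comfortable.
    """
    a0, a1, a2 = start[0], start[1], start[2] / 2.0
    # The remaining three coefficients come from the end-state constraints.
    A = np.array([[T**3,     T**4,      T**5],
                  [3 * T**2, 4 * T**3,  5 * T**4],
                  [6 * T,    12 * T**2, 20 * T**3]])
    b = np.array([end[0] - (a0 + a1 * T + a2 * T**2),
                  end[1] - (a1 + 2 * a2 * T),
                  end[2] - 2 * a2])
    a3, a4, a5 = np.linalg.solve(A, b)
    return [a0, a1, a2, a3, a4, a5]

# Hypothetical maneuver: reach a point 45 m ahead in 5 s, starting and ending at 10 m/s.
coeffs = jerk_minimizing_trajectory(start=[0, 10, 0], end=[45, 10, 0], T=5)
```

A planner would generate many candidate trajectories like this, score them for safety and comfort, and hand the best one to the controller.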
Control is the final step in the pipeline. Once we have the trajectory from our path planning block, the vehicle needs to turn the steering wheel and hit the throttle or the brake, in order to follow that trajectory. If you've ever tried to execute a hard turn at a high speed, you know this can get tricky! Sometimes you have an idea of the path you want the car to follow, but actually getting the car to follow that path requires effort. Race car drivers are phenomenal at this, and computers are getting pretty good at it, too!
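A common baseline for that last step is a PID controller: steer in proportion to the cross-track error (how far the car is from the planned path), plus its integral and derivative. Here's a minimal sketch; the gains are placeholders that would need tuning on a simulator or a real track.

```python
class PID:
    """Minimal PID controller: steer to reduce cross-track error (CTE)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.prev_error = 0.0
        self.integral = 0.0

    def steer(self, cte, dt):
        self.integral += cte * dt
        derivative = (cte - self.prev_error) / dt
        self.prev_error = cte
        # Negative sign: steer against the error.
        return -(self.kp * cte + self.ki * self.integral + self.kd * derivative)

# Gains are illustrative; in practice they get tuned (e.g., with "twiddle").
controller = PID(kp=0.2, ki=0.004, kd=3.0)
steering_angle = controller.steer(cte=0.5, dt=0.02)
```

More sophisticated controllers, like model predictive control, look ahead along the trajectory instead of reacting only to the current error.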
The video at the beginning of this post covers similar territory, and I hope that between it and what I've written here, you have a better sense of how self-driving cars work.
It used to be there was only one place in the world where any civilian off the street could catch a self-driving car: Pittsburgh, with Uber's autonomous vehicles.
Tesla hinted at this before, but apparently its long-term plan is to build its own autonomous vehicle chips. They are taking "vertical integration" to a whole new level.
(Interestingly, when I looked up vertical integration on Wikipedia just now, the opening paragraph of the article lists Ford as an example. The more things change, the more they stay the same.)
Elon Musk apparently announced this at an event for AI researchers in Long Beach last week, concurrent with NIPS 2017.
The event was live-tweeted by Stephen Merity, who is worth a read in his own right:
Reuters reporter Paul Lienert scored one of the first post-spinoff interviews with Kevin Clark, the CEO of Aptiv. Aptiv is a spinoff from Delphi, one of the world's foremost Tier 1 automotive suppliers. The existing Delphi Technologies will retain the core business of automotive supply, whereas Aptiv will focus on autonomous technology.
In this vein, Delphi's recent acquisition of nuTonomy will live within the Aptiv spinoff.
The split will hopefully resolve some potential tension for Delphi, as its new autonomous business seemed to be increasingly moving toward competition with the customers of its core automotive supply business. By splitting the companies, the legacy Delphi Technologies business may retain its credibility as a supplier, without carrying a side division engaged in competition with key customers.
One of the key insights to come out of the Reuters interview is Kevin Clark's statement that the cost of autonomous technology will drop by more than an order of magnitude over the next seven or so years.
While current estimates for the cost of a self-driving hardware and software package range from $70,000 to $150,000, "the cost of that autonomous driving stack by 2025 will come down to about $5,000 because of technology developments and (higher) volume," Clark said in an interview.
Delphi is one of the leaders in the development of automotive technology, all the more so with their acquisition of nuTonomy. And their history as a Tier 1 supplier gives them greater insight than most other companies into how costs and production will scale.
So this seems like a prediction to take seriously. And if it comes to pass, it will be a game-changer. At $5,000 marginal cost, consumers really could own their own self-driving vehicles, without relying on ride-sharing companies.
Of course, there are a host of reasons why consumers still might not want to own cars in the future: the costs of mapping, geofences, the cratering cost of shared transportation. But $5,000 autonomy would make plausible a lot of scenarios that thus far have seemed unlikely.
nuTonomy announced a while ago that they would be testing self-driving cars in Boston, but then I kind of lost track of that, especially in the wake of the Delphi acquisition.
Over a two-week trial in November, a select group of volunteers tested out nuTonomy's self-driving cars in Boston. The participants hailed a ride using the company's booking app. The trips they took looped around the Seaport District, starting at the company's Drydock Ave. office, moving onto Summer Street into downtown Boston, and returning along Congress Street.
Sounds like everything went well and in fact WBUR reports that another Boston company, Optimus Ride, is also testing in Boston.
I used to joke that there's a reason every self-driving car company is testing in California, Nevada, or Arizona: lots of sun and warmth.
But with Uber in Pittsburgh and these companies in Boston, weāre making small steps to all-weather support for self-driving cars.
The Trolley Problem is a favorite conundrum of armchair self-driving car ethicists.
In the original version of the problem, a runaway trolley is barreling down the tracks toward five people tied to the rails. What if you could throw a switch that would send the trolley down a different track? But what if that track had one person tied down? Would you actually throw the switch to kill one person, even if it meant saving the other five? Or would you let five people die through inaction?
The self-driving car version of this problem is simpler: what if a self-driving car has to choose between running over a pedestrian, or driving off a cliff and killing the passenger in the vehicle? Whose life is more valuable?
USA Today's article, "Self-driving cars will decide who dies in a crash," does a reasonable job tackling this issue in depth, from multiple angles. But the editors didn't do the article any favors with the headline. It's not actually self-driving cars that will decide who dies; it's the humans who design them.
Here's Sebastian Thrun, my boss and the former head of the Google Self-Driving Car Project, explaining why this isn't a useful question:
I've heard another automotive executive call it "An impossible problem. You can't make that decision, so how can you expect a car to solve it?"
To be honest, I think of it as an unhelpful problem because we don't have enough data to know, at any given moment, how likely the car is to kill anybody at all. Fatal accidents involving self-driving cars haven't yet happened in any meaningful numbers, so the data needed to even work on the problem doesn't exist.
But I think I've come to a conclusion, at least about the hypothetical ethical dilemma:
The car should minimize the number of people who die, by following utilitarian ethics.
This raises some questions about how to value the lives of children versus adults, but I assume some government statistician in the bowels of the Department of Labor has worked that out.
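For what it's worth, the utilitarian rule itself is almost trivial to write down in code. The genuinely hard part, which this toy sketch ignores entirely, is estimating the fatality numbers in the first place; every option and probability below is invented.

```python
def choose_maneuver(options):
    """Toy utilitarian rule: pick the maneuver with the fewest expected fatalities."""
    return min(options, key=lambda option: option["expected_fatalities"])

# Hypothetical options with made-up numbers; real estimates don't exist yet.
options = [
    {"name": "stay_in_lane",     "expected_fatalities": 0.9},
    {"name": "swerve_off_cliff", "expected_fatalities": 1.0},
    {"name": "emergency_brake",  "expected_fatalities": 0.1},
]
print(choose_maneuver(options)["name"])   # -> emergency_brake
```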
So why should self-driving cars be utilitarian? Because people want them to be.
From USA Today:
Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, co-authored a study last year that found that while respondents generally agreed that a car should, in the case of an inevitable crash, kill the fewest number of people possible regardless of whether they were passengers or people outside of the car, they were less likely to buy any car "in which they and their family member would be sacrificed for the greater good."
I've seen this in a few places now. The general public thinks cars should be designed to minimize fatalities, even if that means sacrificing the passengers. But they don't want to ride in a car that would sacrifice passengers.
If you believe, as I do, and as Sebastian does, that these scenarios are vanishingly rare, then who cares? Give the public what they want. In the exceedingly unlikely scenario that a car has to make this choice, choose the lowest number of fatalities.
And if people don't want to ride in those cars themselves, they can choose not to. They can drive themselves, but of course that is pretty dangerous, too.
The network and the paper in question were clearly designed for autonomous driving, which Apple has been working on, more or less in secret, for years.
The network in question, VoxelNet, has been trained to perform object detection on lidar point clouds. This isn't a huge leap from object detection on images, which has been a topic of deep learning research for several years, but it is a new frontier in deep learning for autonomous vehicles. Kudos to Apple for publishing their results.
VoxelNet (by Apple) draws heavily on two previous efforts at applying deep learning to lidar point clouds, both by Baidu-affiliated researchers. Since the three papers kind of work as a trio, I did a quick scan of them together.
A team of Tsinghua and Baidu researchers developed Multi-View 3D (MV3D) networks, which combine lidar and camera images in a complex neural network pipeline.
In contrast to Li's solo work, which constructs voxels out of the lidar point cloud, MV3D simply takes two separate 2D views of the point cloud: one from the front and one from the top (bird's eye). MV3D also uses the 2D camera image associated with each lidar scan.
That provides three separate 2D images (lidar front view, lidar top view, camera front view).
MV3D uses each view to create a bounding box in two dimensions. The bird's-eye lidar view creates a bounding box parallel to the ground, whereas the front lidar view and the camera view each create a 2D bounding box perpendicular to the ground. Combining these 2D bounding boxes creates a 3D bounding box to draw around the vehicle.
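If it helps to picture what a "top view" of a point cloud looks like, here's a rough sketch of my own (not the paper's code) that bins lidar points into a bird's-eye-view height map. The grid ranges and resolution are arbitrary assumptions; MV3D actually builds several such maps, including intensity and density channels.

```python
import numpy as np

def birds_eye_view(points, x_range=(0, 40), y_range=(-20, 20), cell=0.1):
    """
    Project lidar points (N x 3 array of x, y, z in meters) into a top-down
    grid. Cells start at zero, and each point raises its cell to max(current, z).
    """
    cols = int((x_range[1] - x_range[0]) / cell)
    rows = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((rows, cols), dtype=np.float32)

    # Keep only points that fall inside the grid.
    keep = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[keep]

    col = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    row = ((pts[:, 1] - y_range[0]) / cell).astype(int)
    # np.maximum.at handles multiple points landing in the same cell.
    np.maximum.at(bev, (row, col), pts[:, 2])
    return bev

# bev_image = birds_eye_view(lidar_scan)   # lidar_scan: hypothetical N x 3 array
```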
At the end of the network, MV3D employs something called "deep fusion" to combine output from each of the three neural network pipelines (one associated with each view). I'll be honest: I don't really understand how "deep fusion" works, so leave me a note in the comments if you can follow what they're doing.
The results are a classification of the object and a bounding box around it.
That brings us to VoxelNet, from Apple, which got so much press recently.
VoxelNet has three components, in order:
Feature Learning Network
Convolutional Middle Layers
Region Proposal Network
The Feature Learning Network seems to be the main "contribution to knowledge," as the scholars say.
It seems that what this network does is start with a semi-random sample of points from within "interesting" (my word, not theirs) voxels. This sample of points gets run through a fully-connected (not fully-convolutional) network. This network learns point-wise features which are relevant to the voxel from which the points came.
The network, in fact, uses these point-wise features to develop voxel-wise features that describe each of the "interesting" voxels. I'm oversimplifying wildly, but think of this as learning features that describe each voxel and are relevant to classifying the part of the vehicle that is in that voxel. So a voxel might have features like "black," "rubber," and "treads," and so you could guess that the voxel captures part of a tire. Of course, the real features won't necessarily be intelligible by humans, but that's the idea.
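Here's a rough PyTorch sketch of my reading of that idea: a single voxel feature encoding layer that computes point-wise features with a small fully-connected network and then max-pools them into one feature vector per voxel. The layer sizes are arbitrary, and the real network stacks several of these layers and concatenates the pooled feature back onto each point.

```python
import torch
import torch.nn as nn

class TinyVFE(nn.Module):
    """
    Simplified VoxelNet-style voxel feature encoding layer (my interpretation,
    not Apple's code). Each voxel holds up to T sampled points with C input
    features; point-wise features get max-pooled into one voxel-wise feature.
    """

    def __init__(self, in_features=7, out_features=32):
        super().__init__()
        self.pointwise = nn.Sequential(
            nn.Linear(in_features, out_features),
            nn.ReLU(),
        )

    def forward(self, voxels):
        # voxels: (num_voxels, T, in_features) tensor of sampled points
        point_feats = self.pointwise(voxels)      # per-point features
        voxel_feats, _ = point_feats.max(dim=1)   # aggregate over the points
        return voxel_feats                        # (num_voxels, out_features)

# Hypothetical batch: 100 "interesting" voxels, 35 points each, 7 features per point.
vfe = TinyVFE()
print(vfe(torch.randn(100, 35, 7)).shape)   # torch.Size([100, 32])
```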
These voxel-wise features can then get pumped through the Convolutional Middle Layers and finally through the Region Proposal Network and, voila, out come bounding boxes and classifications.
One of the most impressive parts of this line of research is just how new it is. The two Baidu papers were both first published online a year ago, and only made it into conferences in the last six months. The Apple paper only just appeared online in the last couple of weeks.
It's an exciting time to be building deep neural networks for autonomous vehicles.