Wired alerts the world that self-driving cars have a long way to go:
OK, so you won't get a fully autonomous car in your driveway anytime soon. Here's what you can expect, in the next decade or so: Self-driving cars probably won't operate where you live, unless you're the denizen of a very particular neighborhood in a big city like San Francisco, New York, or Phoenix. These cars will stick to specific, meticulously mapped areas. If, by luck, you stumble on an autonomous taxi, it will probably force you to meet it somewhere it can safely and legally pull over, instead of working to track you down and assuming hazard lights grant it immunity wherever it stops. You might share that ride with another person or three, à la UberPool.
Color me unconvinced. The Hype Cycle is conceptually interesting, but has been subject to numerous criticisms: most comically, that it is not actually a cycle, and most importantly, that it's not really accurate.
Maybe self-driving cars will go through a trough of disillusionment, but that hardly seems guaranteed.
My guess: in the next decade, a lot of cities will start to get self-driving cars. They will probably stick to specific, meticulously mapped areas, but those geofences will expand over time as fleets of vehicles share sensor data.
Both were on the cusp of huge breakthroughs, but not quite there yet. And a lot of the players are the same.
There are certainly lots of important companies in both ecosystems that are not on this list, but these are just the ones that popped into my head at the moment.
Some of the pure automotive manufacturers and suppliers were mostly absent from the mobile phone wars. Conversely, some of the pure telecommunications companies don't have much of a presence in the self-driving car race. But in between, a striking number of players remain the same.
According to Matthew DeBord from Business Insider, it's Cadillac Super Cruise, which he rates as better than Tesla Autopilot or Mercedes-Benz Drive Pilot.
Super Cruise was superb, in my limited time with the tech, and when it was willing to operate. It's a hyper-conservative approach to Level 2 autonomy, the level at which the driver must monitor the system, but can consider taking his or her hands off the wheel while being prepared to resume control when prompted.
The 8th lesson of the Udacity Self-Driving Car Engineer Nanodegree program is "Convolutional Neural Networks." This is where students learn to apply deep learning to camera images!
Convolutional neural networks (CNNs) are a special category of deep neural networks that are specifically designed to work with images. CNNs have multiple layers, with each layer connected to the next by "convolutions."
In practice, what this means is that we slide a patch-like "filter" over the input layer, and the filter applies weights to each artificial neuron it covers. The filter connects to a single artificial neuron in the output layer, thereby connecting each neuron in the output layer to a small set of neurons from the input layer.
To make this more concrete, consider this photograph of a dog:
When we run this photograph through a CNN, we'll slide a filter over the image:
This filter will, broadly speaking, identify basic "features." It might identify one frame as a curve, and another as a hole:
The next layer in the CNN would pass a different filter over a stack of these basic features, and identify more sophisticated features, like a nose:
The final layer of the CNN is responsible for classifying these increasingly sophisticated features as a dog.
This is of course simplified for the sake of explanation, but hopefully it helps to make the process clear.
One of the more vexing aspects of deep learning is that the actual "features" that a network identifies are not necessarily anything humans would think of as a "curve" or a "nose." The network learns whatever it needs to learn in order to identify the dog most effectively, but that may not be anything humans can really describe well. Nonetheless, this description gets at the broad scope of how a CNN works.
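The filter-sliding operation described above can be sketched in a few lines of NumPy. This is a toy example of my own, not code from the lesson: a tiny 2x2 filter slides over a tiny image, and each output neuron is the weighted sum of the input patch beneath it.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a filter (kernel) over an image; each output pixel is the
    weighted sum of the input patch under the filter (no padding)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    output = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]
            output[i, j] = np.sum(patch * kernel)
    return output

# A vertical-edge filter applied to a tiny image with a bright right half
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_filter = np.array([[-1, 1],
                        [-1, 1]], dtype=float)
response = convolve2d(image, edge_filter)
print(response)  # strongest response where dark meets bright
```

The response is large exactly where the dark and bright halves meet, which is the sense in which a filter "identifies" a feature like an edge or a curve.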
Once students learn about CNNs generally, it's time to practice building and training them with TensorFlow. As Udacity founder Sebastian Thrun says, "You don't lose weight by watching other people exercise." You have to write the code yourself!
The back half of the lesson covers some deep learning topics applicable to CNNs, like dropout and regularization.
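Dropout, for instance, randomly zeroes out a fraction of activations during training, which discourages the network from over-relying on any single neuron. Here is a minimal sketch of the idea ("inverted" dropout, as commonly implemented; my own illustration, not code from the lesson):

```python
import numpy as np

def dropout(activations, keep_prob, training=True, rng=None):
    """Inverted dropout: during training, randomly zero activations and
    scale the survivors by 1/keep_prob so the expected value is unchanged;
    at inference time, pass activations through untouched."""
    if not training:
        return activations
    rng = rng or np.random.default_rng(0)
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

acts = np.ones((4, 4))
dropped = dropout(acts, keep_prob=0.5)
print(dropped)  # each entry is either 0.0 (dropped) or 2.0 (kept and rescaled)
```

At inference time the function is a no-op, which is why the rescaling happens during training rather than at test time.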
The lesson ends with a lab in which students build and train LeNet, the famous network by Yann LeCun, to identify characters. This is a classic exercise for learning convolutional neural networks, and a great way to learn the fundamentals.
Ready to start learning how to build self-driving cars yourself? Great! If you have some experience already, you can apply to our Self-Driving Car Engineer Nanodegree program here, and if you're just getting started, then we encourage you to enroll in our Intro to Self-Driving Cars Nanodegree program here!
~
Thanks to my former colleague, Dhruv Parthasarathy, who built out this intuitive explanation in even greater detail as part of this lesson!
A British automotive insurer has offered to reduce insurance premiums 5% for drivers who turn on Autopilot. The insurer, Direct Line, says it doesn't yet actually know with certainty whether Autopilot makes cars safer.
Direct Line said it was too early to say whether the use of the autopilot system produced a safety record that justified lower premiums. It said it was charging less to encourage use of the system and aid research.
But I have to imagine Direct Line believes Autopilot will make cars safer, even if it doesn't know that for sure. After all, they're not offering 5% off to customers who drive blindfolded, on the theory that they need more research on that topic.
Although Direct Line is a UK company, the financial angle of autonomous systems ties in closely with tactics that the US government has used in the past. Famously, the federal government did not directly mandate a drinking age of 21, but rather tied federal highway funds to whether states raised their drinking age to 21.
I can imagine a future scenario in which the government doesn't mandate the use of autonomous vehicles, but rather a combination of governmental and insurance incentives push drivers gently or not-so-gently toward taking their hands off the wheel.
Next month I'll be checking off a common bucket list item by visiting Michigan in January. Most people go for the weather, but I in fact am going for the North American International Auto Show.
I tease, of course, but I truly am excited to be heading back to Motor City, and especially for America's largest auto show.
On Wednesday, January 17, I'll be speaking on a panel at Automobili-D, the tech section of the show, and I'll be in town with some Udacity colleagues through the weekend.
Drop me a note at david.silver@udacity.com; I'd love to say hello. It's always amazing to head to the center of the automotive world. In many ways it reminds me of how cool it was to visit Silicon Valley when I was a software engineer in Virginia, living outside the center of the software world.
We'll be holding at least one and maybe a few events for Udacity students, potential students, and partners, and I'll be announcing those here as we nail them down.
That's gotta be a rough headline for Patriots fans!
For years, autonomous vehicle development in the US has happened primarily in three locations: Detroit, Silicon Valley, and Pittsburgh.
Detroit because it's the center of the US automotive industry, Silicon Valley because it's the center of the US technology industry, and Pittsburgh because…why?
But further inspection shows Boston potentially has a more robust autonomous vehicle industry than Pittsburgh. Indeed.com shows essentially all of Pittsburgh's autonomous vehicle jobs coming from three companies: Aptiv, Argo, and Uber.
On the other hand, Boston's autonomous vehicle jobs come from: Square Robot, Liberty Mutual, nuTonomy, Draper, MathWorks, Aurora, Optimus Ride, Lux Research, and the list goes on. That's a diversified and presumably robust jobs base. Plus, Aptiv just announced a new Boston-based autonomous technology center.
Computer vision is how we use cameras to see the road. Humans demonstrate the power of vision by handling a car with basically just two eyes and a brain. For a self-driving car, we can use camera images to find lane lines, or track other vehicles on the road.
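As a toy illustration of the camera side (purely my own sketch, not course code), even a simple horizontal intensity gradient can reveal where a bright lane line sits against dark pavement:

```python
import numpy as np

# A tiny grayscale "road" image: dark pavement (0.1) with a bright
# vertical lane line (0.9) painted down columns 4-5
road = np.full((6, 10), 0.1)
road[:, 4:6] = 0.9

# Horizontal gradient: large absolute values mark the left and right
# edges of the painted line
gradient = np.abs(np.diff(road, axis=1))
edge_columns = np.where(gradient[0] > 0.5)[0]
print(edge_columns)  # edges at columns 3 and 5
```

Real lane-finding pipelines add smoothing, color thresholds, and line fitting on top of this basic gradient idea, but the core trick of looking for sharp intensity changes is the same.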
Sensor fusion is how we integrate data from other sensors, like radar and lasers, together with camera data, to build a comprehensive understanding of the vehicle's environment. As good as cameras are, there are certain measurements (like distance or velocity) at which other sensors excel, and other sensors can work better in adverse weather, too. By combining all of our sensor data, we get a richer understanding of the world.
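A classic toy version of fusion (my own sketch, assuming Gaussian noise and made-up numbers) combines a radar range estimate with a camera range estimate, weighting each by how much we trust it:

```python
def fuse(z1, var1, z2, var2):
    """Inverse-variance weighting of two noisy measurements of the same
    quantity; the fused estimate is more certain than either input."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Radar says the car ahead is 20.0 m away (variance 0.25);
# the camera says 21.0 m (variance 1.0)
distance, variance = fuse(20.0, 0.25, 21.0, 1.0)
print(distance, variance)  # 20.2 m, variance 0.2
```

The fused variance (0.2) is smaller than either sensor's alone, which is the whole point: two mediocre sensors can together behave like one good one.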
Localization is how we figure out where we are in the world, which is the next step after we understand what the world looks like. We all have cellphones with GPS, so it might seem like we know where we are all the time already. But in fact, GPS is only accurate to within about 1-2 meters. Think about how big 1-2 meters is! If a car were wrong by 1-2 meters, it could be off on the sidewalk hitting things. So we have much more sophisticated mathematical algorithms that help the vehicle localize itself to within 1-2 centimeters.
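The flavor of those algorithms can be sketched with simple Gaussian math. This is a 1D toy of my own (not the course's actual filter code): start from a coarse GPS belief, then repeatedly sharpen it with precise range measurements to a mapped landmark:

```python
def update(mean, var, z, z_var):
    """Bayesian update of a 1D Gaussian position belief with one measurement."""
    new_mean = (z_var * mean + var * z) / (var + z_var)
    new_var = (var * z_var) / (var + z_var)
    return new_mean, new_var

# Start from a GPS fix: 100.0 m along the road, std dev ~1.5 m
mean, var = 100.0, 1.5 ** 2

# Each lidar ranging to a mapped landmark is accurate to ~0.1 m
for z in [100.4, 100.5, 100.45]:
    mean, var = update(mean, var, z, 0.1 ** 2)

print(mean, var ** 0.5)  # belief converges near 100.45, std dev of a few cm
```

After just three landmark measurements, the meter-scale GPS uncertainty has collapsed to centimeter scale; real localizers do this continuously in 2D or 3D.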
Path planning is the next step, once we know what the world looks like, and where in it we are. In the path planning phase, we chart a trajectory through the world to get where we want to go. First, we predict what the other vehicles around us will do. Then we decide which maneuver we want to take in response to those vehicles. Finally, we build a trajectory, or path, to execute that maneuver safely and comfortably.
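One common way to build such a trajectory (a standard technique, sketched here with made-up numbers; not necessarily the exact method any one vehicle uses) is a jerk-minimizing quintic polynomial that matches position, velocity, and acceleration at both ends of the maneuver:

```python
import numpy as np

def quintic_trajectory(start, end, T):
    """Solve for s(t) = a0 + a1*t + ... + a5*t^5 matching position,
    velocity, and acceleration at t=0 and at t=T."""
    s0, v0, acc0 = start
    sT, vT, accT = end
    # a0, a1, a2 follow directly from the conditions at t=0
    low = [s0, v0, acc0 / 2.0]
    # Solve a 3x3 linear system for a3, a4, a5 from the conditions at t=T
    A = np.array([[T**3,    T**4,     T**5],
                  [3*T**2,  4*T**3,   5*T**4],
                  [6*T,     12*T**2,  20*T**3]])
    b = np.array([sT - (s0 + v0*T + acc0/2.0*T**2),
                  vT - (v0 + acc0*T),
                  accT - acc0])
    a3, a4, a5 = np.linalg.solve(A, b)
    coeffs = low + [a3, a4, a5]
    return lambda t: sum(c * t**i for i, c in enumerate(coeffs))

# Travel from s=0 (10 m/s) to s=60 m (12 m/s), zero accel at both ends, in 5 s
s = quintic_trajectory((0.0, 10.0, 0.0), (60.0, 12.0, 0.0), T=5.0)
print(s(0.0), s(5.0))  # 0.0 and 60.0
```

Matching all six boundary conditions is what makes the resulting path comfortable: there are no sudden jumps in velocity or acceleration at either end of the maneuver.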
Control is the final step in the pipeline. Once we have the trajectory from our path planning block, the vehicle needs to turn the steering wheel and hit the throttle or the brake, in order to follow that trajectory. If you've ever tried to execute a hard turn at a high speed, you know this can get tricky! Sometimes you have an idea of the path you want the car to follow, but actually getting the car to follow that path requires effort. Race car drivers are phenomenal at this, and computers are getting pretty good at it, too!
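A simple way to get the car to track the planned path is a PID controller on the cross-track error, the car's lateral distance from the trajectory. This sketch is my own, with made-up gains and deliberately crude kinematics, just to show the shape of the idea:

```python
class PID:
    """Steer in proportion to the cross-track error (P), its accumulation
    over time (I), and its rate of change (D)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def control(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return -(self.kp * error + self.ki * self.integral + self.kd * derivative)

# Toy simulation: a car offset 1.0 m from the lane center steers back
pid = PID(kp=0.4, ki=0.01, kd=1.5)
position, heading = 1.0, 0.0
for _ in range(200):
    steering = pid.control(position, dt=0.1)
    heading += steering * 0.1          # crude kinematics, for illustration only
    position += heading * 0.1
print(round(position, 3))  # back near the lane center
```

The D term is what keeps the car from oscillating back and forth across the center line, and the small I term cleans up any persistent bias, such as a miscalibrated steering wheel.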
The video at the beginning of this post covers similar territory, and I hope between that, and what I've written here, you have a better sense of how self-driving cars work.
It used to be there was only one place in the world where any civilian off the street could catch a self-driving car: Pittsburgh, with Uber's autonomous vehicles.
Tesla hinted at this before, but apparently its long-term plan is to build its own autonomous vehicle chips. They are taking "vertical integration" to a whole new level.
(Interestingly, when I looked up vertical integration on Wikipedia just now, the opening paragraph of the article lists Ford as an example. The more things change, the more they stay the same.)
Elon Musk apparently announced this at an event for AI researchers in Long Beach last week, concurrent with NIPS 2017.
The event was live-tweeted by Stephen Merity, who is worth a read in his own right: