
Tyler Cowen would file this under, “There is No Great Stagnation”.

I think one of the biggest, and least-appreciated, areas of society that self-driving cars will change will be policing.
Most of the interactions I have had with police are related to driving, and once the car is driving for me, those interactions will go away. Maybe they’ll be replaced by other police interactions, or maybe not. But it will be different.
The Marshall Project has a thinkpiece up about this topic today:
So what’s the big deal if police can no longer make traffic stops? It’s about half of what police do, says [criminology Professor Joseph] Schafer. He estimates such stops, along with traffic accidents, account for nearly 50 percent of all police-public encounters.
The end of traffic stops would have surprisingly large implications.
For example, traffic stops were a flashpoint in the 2014 unrest in Ferguson, Missouri:
Another aspect of this situation might stem from a system that burdens the poor and black in Ferguson. Minor traffic offenses are the starting point, and the costs spiral up rapidly if the offenders do not pay the fines on time or do not appear in court. The income from court fines represented the second largest source of revenue for Ferguson in 2013. On October 1, 2014, the city of St. Louis cancelled 220,000 arrest warrants and gave a three-month delay to the offenders to get a new court date before the warrants would be reissued.
Ferguson might be an outlier, but traffic fines provide a huge part of the budget of many police departments or even cities.
Police may remain sanguine about self-driving cars only until it becomes obvious how deeply those cars can cut into their funding structure.

Udacity just increased the first cohort of the Self-Driving Car Nanodegree program to 500 students!
We are so excited to have over 10,000 students apply to join the program, and we hope to teach all of them.
We’re limiting the initial cohort to 500 students to make sure we have everything ready to go to scale up the program over time, but the goal is to be able to teach everyone who wants to learn.
With that in mind, you should apply today!
Here is a tentative (subject to change) overview of the first term:
Introduction: You’ll learn about the program, the student support available, and, most importantly, the ways we’ll help you land a job in autonomous vehicles. Within hours of starting, you’ll be writing code to find lane lines on the road.
Deep Learning: You’ll learn about deep neural networks and deep learning frameworks. In the final project you’ll build a deep neural network for end-to-end driving of a vehicle in a simulator.
Computer Vision: You’ll learn about how computers and cameras work together to see the world. In the final project you’ll use OpenCV and deep learning to identify vehicles on the highway.
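As a taste of the lane-finding idea, here's a toy sketch using only numpy (the actual project uses OpenCV's edge detectors and transforms, so treat this as intuition, not the course code). Bright lane paint stands out against dark pavement, so a simple intensity gradient reveals the marking's edges. The image here is synthetic and invented for illustration.

```python
import numpy as np

# Synthetic 100x100 "road" image: dark asphalt with a bright
# painted lane line occupying columns 40-42.
img = np.full((100, 100), 20, dtype=float)  # pavement intensity
img[:, 40:43] = 200                         # lane marking intensity

# Horizontal gradient: large absolute differences between adjacent
# columns mark the left and right edges of the painted line.
grad = np.abs(np.diff(img, axis=1))

# Columns where any row shows a strong edge.
edge_cols = np.where(grad.max(axis=0) > 100)[0]

print(edge_cols)  # the two edges of the lane marking
```

The real pipeline does the same thing with more robustness: smoothing, Canny edge detection, a region-of-interest mask, and a Hough transform to fit lines through the edge pixels.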
I am super-excited about this program and I hope you are, too. Please join us!

A van in Mountain View, California, ran a red light yesterday and t-boned a Google self-driving car, resulting in what news outlets are calling, “one of the worst accidents in the history of the Google self-driving car program”.
To be clear, Google’s self-driving car was the victim — the human driver of the other vehicle was 100% at-fault.
A website called 9to5google.com has the best reporting I’ve seen, including a statement from Google:
A Google vehicle was traveling northbound on Phyllis Ave. in Mountain View when a car heading westbound on El Camino Real ran a red light and collided with the right side of our vehicle. Our light was green for at least six seconds before our car entered the intersection. Thousands of crashes happen every day on U.S. roads, and red-light running is the leading cause of urban crashes in the U.S. Human error plays a role in 94% of these crashes, which is why we’re developing fully self-driving technology to make our roads safer.
This is just another reminder that however sexy and interesting the Trolley problem might be, it’s irrelevant, at least in the short-term. The immediate, real problem is that we humans are terrible drivers.

A Business Insider article just reminded me of something I had forgotten: the first automobiles were simply horseless carriages.
The context is an interview with Ken Lawson, an urban planner at MIT. Lawson argues that self-driving taxis will be great, but they won’t last for that long.
Most trips in the city, he said, involve individuals moving around their own neighborhoods far below the maximum speeds of cars.
“Why have a 4,000-pound automobile that seats five to move one person a short distance at low speed?” he said.
And this:
“It’s just like, you had the horse-and-buggy,” he said. “You got rid of the horse — it still looked like a buggy.”
Lawson doesn’t reveal what he thinks the self-driving equivalent of the Model T will be, though.

For months now there have been periodic reports of the race to put self-driving taxis on the road in Singapore.
First nuTonomy, then Delphi, said they were imminent.
Today ReadWrite reports that nuTonomy is partnering with a ride-sharing company called Grab, which I take it is an Uber competitor in Asia.
I’ve never heard of Grab before, but perhaps they are big in Singapore?
I presume it was a big win for them to get the nuTonomy partnership, but I wish I knew more. Does this make them an Uber-level competitor now? Or is it all press and no substance?
I’ll keep an eye out, but help me out if you live in that part of the world and know more.

One of the first modules in our Self-Driving Car Nanodegree program will be Deep Learning. This is such a fun topic!
We’ll be covering behavioral cloning, which is a technique whereby you drive the car (or the simulated car, in this case) yourself and then pass the data to a neural network. The neural network trains on your driving data and auto-magically learns how to drive the car, without any other information. You don’t have to tell it about the color of the road or which way to turn or where the horizon is. You just pass in data of your own driving and it learns.
By the end, students will be building their own neural networks to drive cars, just like in this video.
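To make the idea concrete, here's behavioral cloning boiled down to one input and one neuron. Everything in this sketch is invented for illustration (the real project uses camera images and a deep network in a simulator): the "observation" is just a lane offset, and the "human demonstrator" steers proportionally back toward center. The network never sees that rule; it only sees recorded (observation, steering) pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Record "human driving" data: observation -> steering angle.
#    The demonstrator's hidden policy is steering = -0.5 * offset.
offsets = rng.uniform(-1.0, 1.0, size=(200, 1))            # lane offset
steering = -0.5 * offsets + rng.normal(0, 0.01, (200, 1))  # demo + noise

# 2. Train a one-neuron "network" (steering = w * offset + b)
#    by gradient descent on mean squared error.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    pred = offsets * w + b
    err = pred - steering
    w -= lr * 2 * np.mean(err * offsets)  # dMSE/dw
    b -= lr * 2 * np.mean(err)            # dMSE/db

# 3. The clone now steers like the human, without ever
#    being told the rule it was imitating.
print(round(w, 2))  # learned gain, close to the demonstrator's -0.5
```

Swap the scalar offset for camera frames and the single neuron for a convolutional network, and you have the shape of the actual project.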

The United States Department of Transportation has issued guidelines about self-driving vehicles.
The tone seems to be largely positive:
For DOT, the excitement around highly automated vehicles (HAVs) starts with safety. Two numbers exemplify the need. First, 35,092 people died on U.S. roadways in 2015 alone. Second, 94 percent of crashes can be tied to a human choice or error. An important promise of HAVs is to address and mitigate that overwhelming majority of crashes.
A goal that DOT highlights is to prevent a patchwork of regulations across state lines. Of course, one person’s “patchwork of regulations” is another person’s “laboratories of democracy”.
The full report is over 100 pages and I confess I’ve only read the Executive Summary. But so far, so good.

Lyft co-founder John Zimmer has written a long thinkpiece on Medium, outlining the future of Lyft, transportation, America, and the world.
The headlines coming out of the piece are that most of Lyft’s rides will be driverless by 2025, and that private car ownership will be dead in urban America by that time.
But it’s really a magnum opus on transportation and technology.
Helpfully, Zimmer divides the piece into sections.
1. Autonomous vehicle fleets will quickly become widespread and will account for the majority of Lyft rides within 5 years.
2. By 2025, private car ownership will all-but end in major U.S. cities.
3. As a result, cities’ physical environment will change more than we’ve ever experienced in our lifetimes.

The Detroit Free Press has a long article on why self-driving cars will be mostly electric, instead of gas powered.
“There are a lot fewer moving pieces in an electric vehicle. There are three main components — the battery, the inverter and the electric motor,” said Levi Tillemann-Dick, managing partner at Valence Strategic in Washington, D.C., and author of “The Great Race: The Global Quest for the Car of the Future.” “An internal combustion engine contains 2,000 tiny pieces that have to be kept lubricated and they break every once in a while.”
That’s probably true (although I’m not a powertrain expert), and it might account for why big OEMs are going electric vs. gas for their self-driving car fleets.
But there’s a more pressing and practical reason for self-driving car engineers working on today’s vehicles: it’s not possible to make most purely gas-powered vehicles self-driving.
It all comes down to braking. Hitting the brakes is still a purely mechanical function in almost all cars, for safety reasons. The logic is that computers are less reliable than mechanical parts, and you don’t want anything to jeopardize the functioning of the brakes.
However, that means it’s not possible to retrofit the car to drive itself, because there’s no computerized way to tell the car to brake.
The solution is to use electric and hybrid vehicles, which have electronic brake controls built into the hybrid powertrain. Combine that with some sort of brake control from parking assistance, and that’s enough control to make a self-driving car stop.