Self-Driving Cars and Organ Donation

Slate says: Self-Driving Cars Will Make Organ Shortages Even Worse. This will happen in two ways.

One, about 20% of US organ donations come from car accident victims. Presumably self-driving cars will reduce the number of organs available.

Two, a common place to opt-in to organ donation is at the DMV, while obtaining or renewing a driver’s license. Presumably self-driving cars will reduce the number of people who get driver’s licenses, and thus reduce the number of people who opt-in to organ donation.

The rest of the article is mostly about ways to improve the organ donation system in the US, irrespective of self-driving cars.

But it is an interesting case study of a second-order effect autonomous vehicles will have on our world.

New Vehicle-to-Vehicle Communication Rules

Urban planner and historian Sarah Jo Peterson emails me that the US Department of Transportation just proposed a rule requiring automakers to include vehicle-to-vehicle communication hardware in new cars, and to use a common standard.

Of course, this is just a proposal. Before this could ever take effect, a new presidential administration will be in place and they might have their own views.

Peterson notes some concerns:

Are we moving to a world where bicycles need V2V and pedestrians need V2V? What does it mean for an act of mobility to require continuous government permission? (If you are not broadcasting, are you illegal? Will you be shut down in real time?)

I agree and would prefer if V2V arose as a de facto standard, instead of a de jure standard mandated by the government. This might be tougher for vehicle-to-infrastructure communication, which necessarily involves communication with government property, like traffic lights.

But if SMTP could rise as a de facto standard, the cause does not seem lost.
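For a sense of what a V2V standard looks like at the message level, here is a toy sketch of a basic safety message (position, speed, heading) being serialized for broadcast. The field names and JSON encoding are my own illustrative assumptions — the real SAE J2735 Basic Safety Message uses a binary ASN.1 encoding with many more fields.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class BasicSafetyMessage:
    """Toy analog of a V2V basic safety message (illustrative fields only)."""
    vehicle_id: str
    latitude: float    # degrees
    longitude: float   # degrees
    speed_mps: float   # meters per second
    heading_deg: float # 0-360, clockwise from north

def encode(msg: BasicSafetyMessage) -> bytes:
    """Serialize for broadcast (real systems use binary ASN.1, not JSON)."""
    return json.dumps(asdict(msg)).encode("utf-8")

def decode(payload: bytes) -> BasicSafetyMessage:
    """Reconstruct a message received from another vehicle."""
    return BasicSafetyMessage(**json.loads(payload.decode("utf-8")))

msg = BasicSafetyMessage("veh-001", 37.7749, -122.4194, 13.4, 90.0)
assert decode(encode(msg)) == msg  # round-trips cleanly
```

The interoperability question is entirely in this layer: any two vendors whose cars agree on the encoding can hear each other, which is exactly what a de facto standard would have to deliver.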

Meanwhile, Peterson points me to a Transportist blog post by David Levinson, arguing that in some scenarios, vehicle-to-vehicle communication may even be harmful.

The full blog post is hard to excerpt, but Levinson emphasizes that if we come to rely on vehicle-to-vehicle communication to navigate intersections (for example), a bug in the system or an unexpected event (he suggests a deer crossing the road) could bring traffic to a halt and possibly cause massive collisions.

I’m a little less pessimistic on that front, but Levinson is a professor of transportation and has been working on this problem for a decade, so I might defer to his logic.

How Ford Builds Autonomous Vehicles

Chris Brewer, the chief engineer for Ford’s Autonomous Vehicle Program, has a great post on Medium outlining the major components of Ford’s self-driving car.

Pay attention to the part where he talks about compute platforms and power consumption. That was my team!

Well, to make fully autonomous SAE-defined level 4-capable vehicles, which do not need a driver to take control, the car must be able to perform what a human can perform behind the wheel. Our virtual driver system is designed to do just that. It is made up of:

Sensors — LiDAR, cameras and radar

Algorithms for localization and path planning

Computer vision and machine learning

Highly detailed 3D maps

Computational and electronics horsepower to make it all work

It comes with a nifty video!

https://www.youtube.com/watch?v=6QJeaK7U87o

How to Become a Self-Driving Car Engineer Talk

In November I gave a talk at the Bay Area AI Meetup entitled, “How to Become a Self-Driving Car Engineer”. A fair bit of the talk was an overview of the Udacity Self-Driving Car Engineer Nanodegree Program. But we also touched on a variety of other topics related to autonomous vehicles, particularly during the question and answer session.

The slides for the talk:

The Lane-Finding demo:
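
The demo follows the classic pipeline from the Nanodegree's first project: detect edge pixels, separate the resulting line segments into left and right lane candidates by slope, and fit a line to each side. A stripped-down sketch of just the splitting and fitting steps — the edge-detection stages (which the demo does with OpenCV) are omitted, and all names here are illustrative:

```python
def fit_line(points):
    """Least-squares fit of y = m*x + b to a list of (x, y) edge pixels."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

def split_lanes(segments):
    """Partition line segments (x1, y1, x2, y2) into left/right lanes by slope.

    In image coordinates y grows downward, so the left lane line has
    negative slope and the right lane line positive slope.
    """
    left, right = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue  # skip vertical segments to avoid division by zero
        slope = (y2 - y1) / (x2 - x1)
        target = left if slope < 0 else right
        target.extend([(x1, y1), (x2, y2)])
    return left, right
```

Averaging the segments into one fitted line per side is what turns the noisy edge output into the two clean lane overlays you see in the demo video.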

The talk itself:

An interview I recorded after the talk with Alexy Khrabrov, the founder of Bay Area AI:

Thanks to the Bay Area AI team for having me!

Eight Days of Autonomous Vehicles

December 14, 2016:

Uber has expanded its self-driving taxi trial to the home of technology and autonomous vehicles: San Francisco. Starting from 14 December, Uber customers with a credit card attached to a San Francisco billing address are eligible to ride in a fleet of five self-driving cars.

December 22, 2016:

“Our cars departed for Arizona this morning by truck,” said an Uber spokesperson in an email to The Verge. “We’ll be expanding our self-driving pilot there in the next few weeks, and we’re excited to have the support of Governor Ducey.”

The move comes after California’s Department of Motor Vehicles revoked the registration of Uber’s 16 self-driving cars because the company refused to apply for the appropriate permits for testing autonomous cars.

This does not feel like progress.

Startup Watch: Blackmore

A startup called Blackmore just raised a few million dollars to miniaturize sensors for autonomous vehicles.

A few interesting points about Blackmore:

  1. They want to embed lidar in the grille of a car. This seems like a difficult vantage point, since the sensor won’t have a 360-degree view of the environment.
  2. They plan to deliver prototypes next summer.
  3. Based on their website, they seem to target two markets: autonomous vehicles and the military.
  4. They’re based in Bozeman, Montana, which is a great town, but hardly a tech hub. Given the cost of housing in Silicon Valley, though, I’m tempted to apply for a job there right now.

Autonomous World

Business Insider recently launched a special series called “Autonomous World” that covers self-driving cars. It’s thorough!

Articles (I have not read all of them yet) include:

Is Deep Learning Overhyped?

One of the questions I get every now and again is whether self-driving cars are a solved problem. Is there any work left to be done in this field?

The answer is that there is so much work left to be done! It only seems like a solved problem from the outside 🙂

So I was interested to read Francois Chollet’s answer to “Is Deep Learning Overhyped?” on Quora.

Chollet is the author of Keras, which is a deep learning library we use in the Udacity Self-Driving Car Program. He explains at length why artificial intelligence generally, much like autonomous driving specifically, is not a solved problem.

Overall: deep learning has made us really good at turning large datasets of perceptual inputs (images, sounds, videos) and simple human-annotated targets (e.g. the list of objects present in a picture) into models that can automatically map the inputs to the targets. That’s great, and it has a ton of transformative practical applications. But it’s still the only thing we can do really well. Let’s not mistake this fairly narrow success in supervised learning for having “solved” machine perception, or machine intelligence in general. The things about intelligence that we don’t understand still massively outnumber the things that we do understand, and while we are standing one step closer to general AI than we did ten years ago, it’s only by a small increment.
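Chollet's "mapping inputs to targets" framing is the whole supervised-learning paradigm in one phrase. Here is that paradigm in miniature — a single perceptron learning a linearly separable labeling. Everything below is my own toy illustration, not from the Quora answer:

```python
def train_perceptron(samples, labels, epochs=100, lr=0.1):
    """Learn weights mapping 2-D inputs to 0/1 targets: the supervised
    setup Chollet describes, shrunk to its smallest possible form."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred           # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1    # nudge the weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Tiny "dataset": inputs with human-annotated targets (label is 1
# roughly when the coordinates sum to well over 1).
data = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.9, 0.9), (0.1, 0.2)]
labels = [0, 0, 0, 1, 1, 0]
w, b = train_perceptron(data, labels)
```

The gap Chollet is pointing at is everything this setup can't express: the toy learner only interpolates within the labeled examples it was given, and nothing about it reasons, plans, or generalizes beyond that mapping.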

There’s still a lot of work left to do!

Auro’s Santa Clara Shuttle

Udacity’s partner, Auro Robotics, has been testing its self-driving shuttle on the campus of Santa Clara University for a year. In November they turned it loose on the public for the first time.

IEEE Spectrum says it’s getting a good reception!

During my rides, it was clear that the students are used to the Auro — so used to it that they don’t even think about getting out of its way. That can lead to a somewhat frustrating ride as the vehicle patiently trails a slow-walking student; it has a horn, but is too polite to beep. Visitors to campus, however, are at first puzzled, then thrilled, to learn that they are being chauffeured in a car that is driving itself. (See video, above.) And if you’re in the area, and have never had a ride in an autonomous vehicle, just stand in front of the parking garage for a while — the shuttle won’t ask to see your ID.