Mercedes Drive Pilot

The Verge took Mercedes Drive Pilot on the road and came away impressed, calling it “better than Elon’s”.

Drive Pilot is to the steering wheel what adaptive cruise control is to the accelerator and brake pedals. Like Tesla’s Autopilot, the Mercedes system allows the driver to hand over direct control of steering and speed, while still supervising the overall operation of the car. Think of the driver as a manager in charge of employees: they’re controlling overall direction, but not micromanaging each individual operation.

And:

The system adapts to how much steering force is used, which allows the driver to decide exactly how much input to give. Use a light touch and the steering assist does most of the work. Apply a firmer hand and the system seamlessly gives up control. With Tesla’s Autopilot, applying steering force results in a slightly alarming jerk of the wheel when the system disengages. Mercedes engineers told me they wanted anyone to be able to take control of the car without any difficulty, noting more than once that the driver was always in charge, no matter how much work the car was doing on their behalf.

This is particularly exciting for me because Mercedes-Benz has been a huge supporter of the Udacity Self-Driving Car Engineer Nanodegree Program. In fact, Mercedes-Benz engineers are personally designing and teaching large parts of our upcoming Sensor Fusion and Localization modules.

Sign up to join us if you want to learn from the best!

Global Artificial Intelligence Conference

On Friday, January 20th, I will be speaking at the Global Artificial Intelligence Conference in Santa Clara on “How to Become a Self-Driving Car Engineer”.

The talk will be similar to one I gave for the Bay Area AI Meetup, but with a new live-coding exercise!

Come out to say hello and learn how to become a self-driving car engineer with Udacity’s Self-Driving Car Engineer Nanodegree Program.

First News Out of CES

Our partner HARMAN has a new concept out for how to handle human-vehicle interaction in a Level 3 autonomous vehicle!

Full Windshield Heads-Up Display: Oasis utilizes the full windshield to project navigation prompts and other information to the driver, while also simultaneously projecting entertainment or information to the passenger.

Autonomous Drive Readiness Check — Handover to Manual: One of the most critical concerns of autonomous vehicles is how to ensure the transition between autonomous mode and manual mode is handled seamlessly. HARMAN’s solution combines haptic feedback, eye gaze tracking, and the driver’s cognitive load readiness through pupil monitoring, to ensure that the driver is truly engaged and able to safely take control of the steering wheel.

Augmented Reality Concierge: This solution addresses the need to support increased productivity in the car while minimizing distraction. A voice-controlled virtual assistant functions as a concierge, automatically suggesting and displaying personalized points of interest while enabling advanced in-vehicle productivity to join conference calls, update calendars and more. Through Skype connectivity, the system can even translate telephone conversations — in real time — with colleagues speaking different languages.

Predictive Collision Prevention: V2X (vehicle-to-vehicle and vehicle-to-infrastructure) technologies detect objects on a collision course and offer corrective action.

Intelligent E-Mirrors: Mirrors that are automatically activated/dimmed based on user gaze.

Image Augmentation

Data is the key to deep learning, and machine learning generally.

In fact, Stanford professor and machine learning guru (and Coursera founder, and Baidu Chief Scientist, and…) Andrew Ng says that it’s not the engineer with the best machine learning model who wins; rather, it’s whoever has the most data.

One way to get a lot of data is to painstakingly collect it. All else equal, this is the best way to compile a huge machine learning dataset.

But all else is rarely equal, and compiling a big dataset is often prohibitively expensive.

Enter data augmentation.

The idea behind data augmentation (or image augmentation, when the data consists of images) is that an engineer can start with a relatively small dataset, make lots of copies, and then perform interesting transformations on those copies. The end result is a much larger dataset.

One of the Udacity Self-Driving Car Engineer Nanodegree Program students, Vivek Yadav, has a terrific tutorial on how he used image augmentation to train his network for the Behavioral Cloning Project.

1. Augmentation:
 A. Brightness Augmentation
 B. Perspective Augmentation
 C. Horizontal and Vertical Augmentation
 D. Shadow Augmentation
 E. Flipping

2. Preprocessing

3. Sub-sampling

Read the whole thing!
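To make the idea concrete, here is a minimal sketch of two of the techniques above, brightness augmentation and flipping, using plain NumPy. This is my own illustration, not Vivek’s actual code (his tutorial uses OpenCV), and the function names are mine. The flip also negates the steering label, which is the key trick for the Behavioral Cloning Project: a mirrored left turn is a valid right-turn training example.

```python
import numpy as np

def augment_brightness(image, factor):
    """Scale pixel intensities; factor < 1 darkens, factor > 1 brightens.
    Clipping keeps values inside the valid 0-255 range."""
    brightened = image.astype(np.float32) * factor
    return np.clip(brightened, 0, 255).astype(np.uint8)

def flip_with_steering(image, steering_angle):
    """Mirror the image left-to-right and negate the steering label,
    so a left turn becomes an equivalent right-turn example."""
    return np.fliplr(image), -steering_angle

# Example: a tiny gray "camera frame" with a steering angle of 0.25
frame = np.full((2, 3, 3), 100, dtype=np.uint8)
bright = augment_brightness(frame, 1.5)           # every pixel becomes 150
flipped, angle = flip_with_steering(frame, 0.25)  # angle becomes -0.25
```

Each transformed copy counts as a new training example, so even two simple transforms can multiply the effective size of a dataset several times over.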

Self-Driving Car Predictions for 2017

On the first day of 2017, here’s what I think the year to come has in store for self-driving cars.

In the style of Scott Alexander, I attach confidence levels to my predictions.

  1. 1 Udacity Self-Driving Car Engineer Nanodegree Program student will start a new, permanent, full-time autonomous vehicle job: 99%
  2. Level 4 self-driving cars will be available for ridesharing on public roads somewhere in the world: 90%
  3. Level 4 self-driving cars will be available for ridesharing on public roads somewhere in the United States: 90%
  4. Ford will not push back its 2021 target launch for Level 4 vehicles: 90%
  5. 100 Udacity Self-Driving Car Engineer Nanodegree Program students will start new, permanent, full-time autonomous vehicle jobs: 90%
  6. No US highway will have a speed limit for autonomous vehicles that is faster than the speed limit for human-driven vehicles: 90%
  7. 1000 Udacity Self-Driving Car Engineer Nanodegree Program students will start new autonomous vehicle jobs (possibly contract roles or internships): 80%
  8. Level 4 self-driving cars will be available for ridesharing on public roads in Singapore: 80%
  9. A company will be acquired primarily for its autonomous vehicle capabilities with a valuation above $100 MM USD: 80%
  10. No company will sell a vehicle with autonomous technology that exceeds what Tesla offers: 80%
  11. At least 1 company that has not done so before will complete a continuous US coast-to-coast demonstration trip in a fully autonomous vehicle: 80%
  12. Level 4 self-driving cars will be available for ridesharing on public roads in California: 70%
  13. Conditional on Level 3 self-driving cars being legal somewhere, Tesla will enable Level 3 autonomy in that location: 70%
  14. Google will offer rides to the public in its self-driving cars: 70%
  15. No member of the general public will die in a Level 4 vehicle: 70%
  16. 2500 Udacity Self-Driving Car Engineer Nanodegree Program students will start new autonomous vehicle jobs (possibly contract roles or internships): 60%
  17. Level 4 self-driving cars will be available for ridesharing on public roads somewhere in Europe: 60%
  18. Level 4 self-driving cars will not be available for ridesharing on public roads somewhere in China: 60%
  19. No company will be acquired primarily for its autonomous vehicle capabilities with a valuation above $500 MM USD: 60%
  20. A new autonomous vehicle startup will form and raise money at a valuation above $75 MM USD: 60%
  21. Ford will commit to launching Level 4 vehicles before 2021: 50%
  22. Level 4 self-driving cars will be available for ridesharing on public roads somewhere in the world, without a safety driver: 50%
  23. Level 3 self-driving cars will be available for private ownership somewhere in the United States: 50%
  24. Somebody will die in a Tesla Autopilot crash: 50%
  25. Level 4 autonomous vehicles will be available for public ridesharing when snow is on the ground: 50%

Note: Udacity has partnerships with several companies I mention here, and I used to work at Ford, but none of these predictions stem from non-public information.

Update: The SAE definitions are here: https://en.wikipedia.org/wiki/Autonomous_car#Classification

Basically, a Level 3 vehicle is fully self-driving, but the driver must be able to take control of the vehicle at a moment’s notice.

A Level 4 vehicle is fully self-driving in most situations, and the driver does not need to be ready to regain control of the vehicle.

Top Posts in 2016

This blog is a quantity-over-quality endeavor. I try to post every day, and hopefully a few things resonate over the course of a year.

Here are the posts that resonated most in 2016:

Term 1: In-Depth on Udacity’s Self-Driving Car Curriculum

We need to get the curriculum off my personal blog and onto our program homepage. But for now, this is where prospective students come to learn what’s in the program.

Tesla’s Autopilot Crash

A meditation on why the very first self-driving car fatality might have happened, and what it meant for the industry.

Self-Driving Car Employers

If you want to work on self-driving cars, here’s who’s hiring, and why.

Udacity Self-Driving Car Nanodegree

I’m building a self-driving car curriculum with Sebastian Thrun! And many other great people and companies.

How to Land an Autonomous Vehicle Job: Coursework

Here are the online courses I took that helped me land a job with Ford’s Autonomous Vehicle team. You, of course, should just enroll in the Udacity Self-Driving Car Engineer Nanodegree Program 🙂

2016 in Review

Looking through my Medium posts from 2016, I wish I had sat down on January 1, 2016, and written predictions of what I thought the year would bring for self-driving cars, so that I could check those predictions now.

I’ll write that kind of post for 2017 this afternoon.

For now, here’s my take on 2016.

January: GM Invests in Lyft

February: Google Causes the First Self-Driving Car Crash

March: GM Buys Cruise

April: Self-Driving Car Companies Lobby for Consistent Federal Regulation

May: Uber Launches Self-Driving Cars in Pittsburgh

June: BMW, Intel, and Mobileye Partner on Self-Driving Cars

July: Delphi Tests Self-Driving Cars in Singapore

August: nuTonomy Launches Self-Driving Cars in Singapore

September: Apple Pulls Back on Self-Driving Cars

October: Tesla Announces Self-Driving Car Plan

November: nuTonomy to Launch Self-Driving Cars in Boston

December: Uber Launches, Then Cancels Self-Driving Cars in San Francisco

I pulled these headlines from Google News. Next year, I hope to see lots more launches. I’d also love to see Udacity make this list in 2017!

Self-Driving Cars and Organ Donation

Slate says: Self-Driving Cars Will Make Organ Shortages Even Worse. The argument runs two ways.

One, about 20% of US organ donations come from car accident victims. Presumably self-driving cars will reduce the number of organs available.

Two, a common place to opt-in to organ donation is at the DMV, while obtaining or renewing a driver’s license. Presumably self-driving cars will reduce the number of people who get driver’s licenses, and thus reduce the number of people who opt-in to organ donation.

The rest of the article is mostly about ways to improve the organ donation system in the US, irrespective of self-driving cars.

But it is an interesting case study of a second-order effect autonomous vehicles will have on our world.

New Vehicle-to-Vehicle Communication Rules

Urban planner and historian Sarah Jo Peterson emails me that the US Department of Transportation just proposed a rule requiring automakers to include vehicle-to-vehicle communication hardware in new cars, and to use a common standard.

Of course, this is just a proposal. Before this could ever take effect, a new presidential administration will be in place and they might have their own views.

Peterson notes some concerns:

Are we moving to a world where bicycles need V2V and pedestrians need V2V? What does it mean for an act of mobility to require continuous government permission? (If you are not broadcasting, are you illegal? Will you be shut down in real time?)

I agree and would prefer if V2V arose as a de facto standard, instead of a de jure standard mandated by the government. This might be tougher for vehicle-to-infrastructure communication, which necessarily involves communication with government property, like traffic lights.

But if SMTP could rise as a de facto standard, the cause does not seem lost.

Meanwhile, Peterson points me to a Transportist blog post by David Levinson, arguing that vehicle-to-vehicle communication may even be harmful in some scenarios.

The full blog post is hard to excerpt, but Levinson emphasizes that if we come to rely on vehicle-to-vehicle communication to navigate intersections (for example), a bug in the system or an unexpected event (he suggests a deer crossing the road) could bring traffic to a halt and possibly cause massive collisions.

I’m a little less pessimistic on that front, but Levinson is a professor of transportation and has been working on this problem for a decade, so I might defer to his logic.

How Ford Builds Autonomous Vehicles

Chris Brewer, the chief engineer for Ford’s Autonomous Vehicle Program, has a great post on Medium outlining the major components of Ford’s self-driving car.

Pay attention to the part where he talks about compute platforms and power consumption. That was my team!

Well, to make fully autonomous SAE-defined level 4-capable vehicles, which do not need a driver to take control, the car must be able to perform what a human can perform behind the wheel. Our virtual driver system is designed to do just that. It is made up of:

Sensors — LiDAR, cameras and radar

Algorithms for localization and path planning

Computer vision and machine learning

Highly detailed 3D maps

Computational and electronics horsepower to make it all work

It comes with a nifty video!

https://www.youtube.com/watch?v=6QJeaK7U87o