Intel Studying Human–Self-Driving Car Interaction

Intel dove into self-driving cars in a big way with its Mobileye acquisition earlier this year. But these big acquisitions take a while to close and even longer to integrate, so in the meantime it’s great to see that Intel is moving forward with autonomous vehicle research at its Chandler, Arizona, test facility.

In particular, Intel reports on a qualitative human-machine interaction study it did on seven “tension points”:

  • Human vs. machine judgment
  • Personalized space vs. lack of assistance
  • Awareness vs. too much information
  • Giving up control of the vehicle vs. gaining new control of the vehicle
  • How it works vs. proof it works
  • Tell me vs. listen to me

Here’s the video:

Headlines from Apple, Uber, and Waymo

Two big exclusive scoops and a smaller headline in the autonomous vehicle world today.

Apple Scales Back Its Ambitions for a Self-Driving Car

The New York Times got five sources at the notoriously secretive Apple self-driving car effort (Project Titan) to open up about the successes and failures of the project. It sounds like Apple has gone through similar debates as most other self-driving car efforts (build Level 3 features or jump straight to Level 4? have a steering wheel or not? focus on retrofitting existing vehicles or build a new vehicle from the ground up?).

Things seemed to go sideways for a while, but apparently the project is back on a growth trajectory. It will be exciting to see what Apple eventually launches.

“The car project ran into trouble, said the five people familiar with it, dogged by its size and by the lack of a clearly defined vision of what Apple wanted in a vehicle. Team members complained of shifting priorities and arbitrary or unrealistic deadlines.”

Uber’s self-driving cars hit Toronto streets — in manual mode

Uber has self-driving cars on the streets of Toronto now, although they’re being driven by humans in “mapping mode” for the moment. If Uber does pull the trigger on self-driving mode — which it expects to do later this year — that will give it test vehicles in Pittsburgh, Phoenix, San Francisco, and Toronto, which might be a wider geographic spread than even Waymo has.

“The cars aren’t available for rides: they will be conducting mapping tasks. Uber says it hopes to test the cars in autonomous mode by the end of 2017.”

Inside Waymo’s Secret World for Training Self-Driving Cars

The Atlantic scored a big scoop that might justly be titled, “Inside Waymo’s Secret Worlds” [plural].

The first world is Waymo’s physical testing facility at the old Castle Air Force Base, in California’s Central Valley. The article talks about a city with streets but no buildings, designed specifically for testing self-driving cars. When Waymo runs into a particularly sticky driving situation, they just pave a version of the streets on their test facility and run their cars through that scenario over and over and over again.

“We pull up to a large, two-lane roundabout. In the center, there is a circle of white fencing. “This roundabout was specifically installed after we experienced a multilane roundabout in Austin, Texas,” Villegas says. “We initially had a single-lane roundabout and were like, ‘Oh, we’ve got it. We’ve got it covered.’ And then we encountered a multi-lane and were like, ‘Horse of a different color! Thanks, Texas.’ So, we installed this bad boy.””

The second world is Waymo’s internal simulation engine, named Carcraft. What started as a playback tool for sensor data has morphed into a simulation engine that allows Waymo to “drive” billions of miles per year.

“Once they have the basic structure of a scenario, they can test all the important variations it contains. So, imagine, for a four-way stop, you might want to test the arrival times of the various cars and pedestrians and bicyclists, how long they stop for, how fast they are moving, and whatever else. They simply put in reasonable ranges for those values and then the software creates and runs all the combinations of those scenarios.”
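The fuzzing process described in that quote can be sketched in a few lines. This is only an illustration of the idea; the parameter names and ranges below are hypothetical, not Carcraft’s actual schema.

```python
from itertools import product

# Hypothetical parameter ranges for a four-way-stop scenario.
parameter_ranges = {
    "car_arrival_offset_s": [0.0, 0.5, 1.0],   # when the other car arrives
    "pedestrian_speed_mps": [1.0, 1.5],        # how fast the pedestrian walks
    "stop_duration_s": [1.0, 2.0, 4.0],        # how long the other car stops
}

def generate_scenarios(ranges):
    """Yield one scenario dict per combination of parameter values."""
    keys = list(ranges)
    for values in product(*(ranges[k] for k in keys)):
        yield dict(zip(keys, values))

scenarios = list(generate_scenarios(parameter_ranges))
print(len(scenarios))  # 3 * 2 * 3 = 18 variations of one base scenario
```

Each generated dict would then be handed to the simulator as one concrete variation of the base scenario, which is how a single tricky intersection turns into thousands of test runs.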

Microsoft and Flight Automation

Microsoft is doing a lot of advance work on flight automation, which might be the next big thing.

Co.Design reports on the work Microsoft researchers are doing both in the field and in simulation:

“Software simulators, with realistic physics just like a video game, offer one appealing alternative to real-world data when it comes to training AI. So before Microsoft put its glider in the real-life sky, it trained it to fly by watching hawks inside a simulator. The team built an open-source software called AirSim for its flight experiments, and over countless trials, various algorithms Microsoft developed learned how to fly like a hawk.”

This seems like a smart move by Microsoft, which largely missed the self-driving car gold rush. Instead of being a late entrant into that field, it’s getting a head start in an even more advanced field.

Microsoft’s Seattle location also works better with flight than it does with the automotive industry. Boeing’s Everett, Washington, aircraft factory is the largest in the world, and presumably a large network of suppliers and talent has grown up around that.

Microsoft also has roots in the flight world, with its series of Flight Simulator commercial products, and now its open-source AirSim research tool.

The Udacity Self-Driving Car Team

Over the entire nine-month course of the Udacity Self-Driving Car Engineer Nanodegree, only a fraction of the people behind the program ever appear on camera.

There’s me, of course, and my colleague Ryan Keenan, who taught a number of lessons. A few of my colleagues, like Sebastian, Andrew Paster, Andy Brown, and Aaron Brown (not related), appear for short cameos.

But there is a small army of colleagues behind the scenes who make everything work. The photo collage above doesn’t even capture everybody.

Here are a few photos I captured recently of the people who make the program happen.

Ryan Keenan (content developer), Justine Lai (producer), and Sebastian Thrun (president) at our final shoot.
Stephen Welch (services lead, then content developer), Brok Bucholtz (content developer), Aaron Brown (content developer), Justine, and me on a foggy day on our retreat at Point Reyes.
Geoff Norman, Justine Lai, Ernesto Molero, Larry Madrigal, and Silver, all working together to produce our final shoot.
Trophies for Justine, me, Caleb Kirksey (self-driving car engineer), and Megan Powell (support representative).
Stephen, Caleb, Aaron Brown, Anthony Navarro (product lead), and Brok at a team dinner.
Jessica Lulovics (program manager), me, Lisbeth Ortega (community manager), Megan, and Justine at a team dinner.
Stephen, Jessica, Caleb, me, Anthony, and Aaron celebrating the launch of our final module, with a cake that Jessica baked.

GM and Lyft and Partnerships

GM and Lyft seem to be heading toward a reckoning, similar to what Google and Uber are experiencing. Minus the allegations of intellectual property theft, at least so far.

Reuters has an article (written by Paul Lienert, a reader of this blog) highlighting the tension between GM’s growing presence in the ridesharing space and GM’s partial ownership of, and partnerships with, Lyft.

On the one hand, GM has invested heavily in Lyft, and holds a 9% ownership stake. GM also benefits from Lyft Express Drive, a Lyft program that leases GM vehicles to Lyft drivers.

On the other hand, GM is launching and expanding a number of programs that are competitive to Lyft.

“Maven can provide GM vehicles directly to ride-sharing drivers who previously leased them through Lyft Express Drive and Uber Vehicle Solutions.”

Similarly, GM’s Cruise subsidiary is beta testing a service called Cruise Anywhere that seems poised to use self-driving cars to compete directly with Lyft’s core on-demand transportation service.

Partnerships are tricky, especially because companies’ interests and plans can diverge over time.

Ronald Coase won a Nobel Prize in part for theorizing about how ownership affects outcomes. Right now we’re seeing lots of self-driving car companies form partnerships, but I suspect in the future we’ll see many more outright acquisitions. Owning a company, instead of partnering with it, can help align everyone’s interests.

Clemson University International Center for Automotive Research

I am, of course, very proud of the Self-Driving Car Engineer Nanodegree Program we have built at Udacity, which teaches software engineers to become autonomous vehicle engineers. You should enroll!

But there are other educational institutions out there, as well, and one I keep bumping into is the Clemson University International Center for Automotive Research.

CU-ICAR, as they style themselves, is a graduate school about 40 minutes up the road from the main Clemson campus, and it offers master’s and doctoral degrees in automotive engineering across a number of different specialties.

The 250-acre campus in Greenville, South Carolina, is located near BMW’s US manufacturing plant in Spartanburg, and is a great example of the type of industry-education partnerships we engage in at Udacity.

I know very little about the Clemson program directly, and I’ve never been to Greenville, but I keep running into their graduates on autonomous vehicle teams at some of our largest hiring partners, so I thought I’d mention them.

I’ve also run into a few Clemson students who are taking the Self-Driving Car Nanodegree Program, so of course that makes me happy 🙂

Which Udacity Nanodegree Program Is Right For You?

Are you trying to decide which Udacity Nanodegree Program you should enroll in? Here’s an all-in-one guide to help you determine which program is best for you.


Android Basics

Partner: Google
Lead Instructors: Katherine Kuan, Chris Lei
Difficulty: Beginner
Time: 6 months
Syllabus: User Interface + User Input + Multiscreen Apps + Networking + Data Storage
Prerequisites: None!
Cost: $199 / month
Best For: Aspiring Android Developers with no programming experience.


Android Developer

Partner: Google
Lead Instructors: James Williams, Reto Meier
Difficulty: Intermediate
Time: 8 months
Syllabus: Developing Android Apps + Advanced Android App Development + Gradle for Android and Java + Material Design for Android Developers + Capstone Project
Prerequisites: Java, git, GitHub
Cost: $999 upfront OR $199/month
Best For: Intermediate programmers who want to become Android specialists.


Artificial Intelligence

Partners: IBM Watson, Amazon Alexa, DiDi Chuxing, Affectiva
Lead Instructors: Sebastian Thrun, Peter Norvig
Difficulty: Advanced
Time: 6 months
Syllabus: Foundations of AI + Deep Learning and Applications + Computer Vision + Natural Language Processing + Voice User Interfaces
Prerequisites: Python, basic linear algebra, calculus, and probability
Cost: $1600
Best For: Engineers who want to apply AI tools across an array of domains, from computer vision to natural language processing to voice interfaces.


Become an iOS Developer

Partners: AT&T, Lyft, Google
Difficulty: Intermediate
Time: 6 months
Syllabus: UIKit Fundamentals + iOS Networking with Swift + iOS Persistence and Core Data + How to Make an iOS App
Prerequisites: macOS 10.12 or OS X 10.11.5
Cost: $199 / month
Best For: Beginners who want to launch their iOS developer careers.


Business Analyst

Partners: Alteryx, Tableau
Lead Instructor: Patrick Nussbaumer
Difficulty: Intermediate
Time: 160 hours
Syllabus: Problem Solving with Advanced Analytics + Creating an Analytical Dataset + Segmentation and Clustering + Data Visualization in Tableau + Classification Models + A/B Testing for Business Analysts + Time Series Forecasting
Prerequisites: Basic statistics and spreadsheet skills, a Windows computer
Cost: $199 / month
Best For: Aspiring data analysts who want to launch a career in data-driven decision-making and visualization, as opposed to programming.


Data Analyst

Partners: Facebook, Tableau
Lead Instructor: Caroline Buckey
Difficulty: Intermediate
Time: 260 hours
Syllabus: Descriptive Statistics + Intro to Data Analysis + Git and GitHub + Data Wrangling + MongoDB + Exploratory Data Analysis + Inferential Statistics + Intro to Machine Learning + Data Visualization in Tableau + Introduction to Python Programming
Prerequisites: None!
Cost: $199 / month
Best For: Aspiring data scientists who want to launch a career in developing software to extract meaning from data.


Deep Learning Foundations

https://vimeo.com/199252593

Lead Instructors: Ian Goodfellow, Andrew Trask, Mat Leonard
Difficulty: Intermediate
Time: 6 months
Syllabus: Introduction + Neural Networks + Convolutional Neural Networks + Recurrent Neural Networks + Generative Adversarial Networks
Prerequisites: Python, basic linear algebra and calculus
Best For: Students excited by the potential for deep learning to change the world, and who additionally wish to earn guaranteed entry into Udacity’s Artificial Intelligence, Robotics, or Self-Driving Car Engineer Nanodegree Programs (a special “perk” of the program for graduates!).


Digital Marketing

Partners: Facebook, Google, Hootsuite, HubSpot, MailChimp, Moz
Lead Instructor: Anke Audenaert
Time: 3 months
Syllabus: Marketing Fundamentals + Content Strategy + Social Media Marketing + Social Media Advertising through Facebook + Search Engine Optimization (SEO) + Search Engine Marketing with AdWords + Display Advertising + Email Marketing + Measure and Optimize with Google Analytics
Prerequisites: None!
Best For: Hard workers seeking to launch or advance their digital marketing careers through real-world experience and multi-platform fluency.


Front-End Web Developer

Partners: AT&T, Google, GitHub, HackReactor
Lead Instructors: Mike Wales, Cameron Pittman
Difficulty: Intermediate
Time: 6 months
Syllabus: Intro to HTML and CSS + Responsive Web Design Fundamentals + Responsive Images + JavaScript Basics + Intro to jQuery + Object-Oriented JavaScript + HTML5 Canvas + Browser Rendering Optimization + Website Performance Optimization + Intro to AJAX + JavaScript Design Patterns + JavaScript Testing
Prerequisites: Basic computer programming
Cost: $199 / month
Best For: New web developers who want to build a portfolio and get a job!


Full Stack Web Developer

Partners: Amazon Web Services, GitHub, AT&T, Google
Lead Instructors: Mike Wales, Karl Krueger
Difficulty: Intermediate
Time: 6 months
Syllabus: Programming Foundations with Python + Responsive Web Design Fundamentals + Intro to HTML and CSS + Responsive Images + Intro to Relational Databases + Authentication & Authorization: OAuth + Full Stack Foundations + Intro to AJAX + JavaScript Design Patterns + Configuring Linux Web Servers + Linux Command Line Basics
Prerequisites: Python and git
Cost: $199 / month
Best For: Developers who want to learn to build web applications from end-to-end.


Intro to Programming

Lead Instructor: Andy Brown
Difficulty: Beginner
Time: 5 months
Syllabus: Learn to Code + Make a Stylish Webpage + Python Programming Foundations + Object-Oriented Programming with Python + Explore Programming Career Options + Experience a Career Path
Prerequisites: None!
Cost: $399
Best For: Beginners looking for an accessible approach to coding.


Machine Learning Engineer

Partner: Kaggle
Lead Instructors: Arpan Chakraborty, David Joyner, Luis Serrano
Difficulty: Advanced
Time: 6 months
Syllabus: Machine Learning Foundations + Supervised Learning + Unsupervised Learning + Reinforcement Learning + Deep Learning + Capstone
Prerequisites: Intermediate Python, statistics, calculus, and linear algebra
Cost: $199 / month
Best For: Engineers who want to build applications that learn from data.


React

Lead Instructors: Michael Jackson, Ryan Florence, Tyler McGinnis
Difficulty: Intermediate
Time: 4 months
Syllabus: React Fundamentals + React & Redux + React Native
Prerequisites: HTML, JavaScript, Git
Cost: $499
Best For: Front-end engineers who want to master the web’s hottest framework. React is the highest-paid sub-field of web development!


Robotics

Partners: Bosch, Electric Movement, iRobot, Kuka, Lockheed Martin, MegaBots, Uber ATG, X
Lead Instructor: Ryan Keenan
Difficulty: Advanced
Time: 6 months
Syllabus: ROS Essentials + Kinematics + Perception + Controls + Deep Learning for Robotics
Prerequisites: Intermediate Python, calculus, linear algebra, and statistics
Cost: $2400
Best For: Makers who dream of building machines that impact everything from agriculture to manufacturing to security to healthcare.


Self-Driving Car Engineer

Partners: Mercedes-Benz, NVIDIA, Uber ATG
Lead Instructor: David Silver (that’s me!)
Difficulty: Advanced
Time: 9 months
Syllabus: Deep Learning + Computer Vision + Sensor Fusion + Localization + Path Planning + Control + System Integration
Prerequisites: Intermediate Python, calculus, linear algebra, and statistics
Cost: $2400
Best For: Engineers who want to join technology’s hottest field and revolutionize how we live.


VR Developer

Partners: Google VR, Vive, Upload, Unity, Samsung
Lead Instructor: Christian Plagemann
Difficulty: Advanced
Time: 6 months
Syllabus: Unity + C# + Google Cardboard + Ergonomics + User Testing + Interface Design + Mobile Performance + High-Immersion Unity + High-Immersion Unreal
Prerequisites: None!
Cost: $1200
Best For: People who want to build new worlds. VR is the most in-demand skill for freelance developers!

Adversarial Traffic Signs

A couple of days ago I wrote about embedding barcodes into traffic signs to help self-driving cars. Several commenters pointed out a recent academic paper in which researchers (Evtimov, et al.) confused a computer vision system into thinking that a stop sign was a 45 mph sign, with just a few pieces of tape.

This appears to be an extension of a property of neural networks that was already known, which is that they can be fooled in surprising ways. This is called an “adversarial” attack.
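The core of a gradient-based attack is easy to demonstrate on a toy model. The sketch below uses a made-up linear classifier, not the researchers’ Robust Physical Perturbations algorithm: it nudges every input element by a small, bounded amount in the direction that most decreases the classifier’s score, which is the idea behind “fast gradient sign” attacks.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy linear classifier: class 1 if the score w . x is positive.
w = rng.normal(size=100)
x = 0.01 * w  # an input the model confidently scores as class 1

def predict(features):
    return int(w @ features > 0)

# Gradient-sign attack: for a linear model, the gradient of the score
# with respect to x is just w, so step each element against sign(w).
eps = 0.03  # maximum change allowed per element
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the bounded perturbation flips the label
```

The striking part, which carries over to real CNNs, is that no single element moved more than eps, yet the prediction changed completely.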

Here is an example Justin Johnson gave in the fantastic Stanford CS231n class on convolutional neural networks: add a small, carefully constructed layer of noise to a digital photo of a goldfish, and the network confidently misclassifies an image that looks unchanged to a human.

Oops.

So it’s no shocker that the computer vision systems for cars, which rely largely on CNNs, can be fooled.

But notice that it’s not obvious how to apply Justin Johnson’s examples above to an actual printed photo of a goldfish in the real world. The examples above only really work if you have a digital photo of a goldfish.

The breakthrough of the Evtimov et al. paper is that they developed an attack algorithm, which they call Robust Physical Perturbations, that allows them to apply this attack to signs in the real world.

So now we are heading down the road of fooling cars into blowing through stop signs. Is the end nigh?

I’m skeptical.

Hackers hardly need to wait until self-driving cars are on the road before they mess with stop signs. It’s easy enough to cause real carnage today just by removing a stop sign. Indeed, this happens already and the people who do it get convicted of manslaughter. (Although note that particular case was overturned on appeal because it wasn’t clear whether the convicts removed the precise stop sign in question, or a different stop sign.)

I don’t see too many hackers messing with street signs, though, presumably because the result is both fleeting and unpredictable, and the cost (jail time) is high.

In fact, self-driving cars seem even less likely than human drivers to be fooled by tampered stop signs. Self-driving cars are likely to have maps and sensors that could override whatever the car’s camera sees.
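That kind of cross-check is straightforward to express. Here is a minimal, hypothetical sketch of the idea: three independent estimates of what a sign says (from the camera, an HD map, and a shape detector) are fused by majority vote, so a tampered sign that fools only the camera gets outvoted.

```python
from collections import Counter

def fuse_sign_estimates(camera, hd_map, shape_detector):
    """Return the majority label, or "uncertain" if all three disagree."""
    votes = Counter([camera, hd_map, shape_detector])
    label, count = votes.most_common(1)[0]
    return label if count >= 2 else "uncertain"

# A tampered sign fools the camera into reading a speed limit, but the
# prior map and the lidar-based shape detector both still say "stop".
print(fuse_sign_estimates("speed_45", "stop", "stop"))  # stop
```

A real stack would weight these sources by confidence rather than vote, but even this crude version shows why an attack on the camera alone may not be enough.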

It’s possible this paper leads to further breakthroughs in adversarial attacks that could cause more problems, but I don’t think this advance by itself is too worrisome.

The Story of Velodyne

Of all the funny stories in the self-driving car world, surely one of the most improbable is the transformation of Velodyne from a subwoofer manufacturer into the world’s premier lidar supplier.

Lidar, an array of lasers, is the key to tracking and understanding the environment around a vehicle, at least until computers get good enough to do this with a camera.

The San Francisco Chronicle has a short writeup of how Dave Hall transformed his audio company into an autonomous sensor company, and I’d love to read the book-length version. It involves the DARPA Grand Challenge and a tinkerer on “the lunatic fringe”. The story is an old-school inventor’s dream.

For now, though, I’m just grateful for Udacity’s two VLP-16 units and our precious HDL-32E.

Also? Velodyne is a Udacity hiring partner.

Self-Driving Road Signs

3M is developing road signs that have specially printed bar codes for self-driving cars, according to Business Insider. This is a clever entry in the vehicle-to-infrastructure communication field.

Often that’s thought of as infrastructure and vehicles communicating back and forth electronically. But this approach, in which the road signs simply have specially encoded information, is much simpler and presumably cheaper.

The article is light on details of how exactly the barcode is written onto the sign, although supposedly the barcode is invisible to humans. Even without that requirement, though, you could imagine tagging each road sign with a small visible barcode, the same way canned goods have barcodes.

Information on the barcode can include the type of sign, of course, but also the GPS coordinates, which would be super-helpful for localization. Other information, about upcoming waypoints or intersections, could also be valuable.
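To get a feel for how little data such a sign would need to carry, here is a hypothetical payload layout; the format is my own illustration, not 3M’s. One byte encodes the sign type and two 32-bit floats encode the sign’s GPS coordinates.

```python
import struct

SIGN_STOP = 1  # hypothetical type code

def encode_sign(sign_type, lat, lon):
    # Big-endian: 1 unsigned byte + two 4-byte floats = 9 bytes total.
    return struct.pack(">Bff", sign_type, lat, lon)

def decode_sign(payload):
    return struct.unpack(">Bff", payload)

payload = encode_sign(SIGN_STOP, 37.7749, -122.4194)
print(len(payload))  # 9 bytes, easily within a small 2-D barcode's capacity
```

A 32-bit float holds roughly seven significant digits, which puts the decoded coordinates within about a meter of the encoded ones; a production format would likely use fixed-point integers for tighter precision.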

Pretty simple, but effective, and cheap and easy to roll out.