Autonomous Vehicles in the Mines

Self-driving trucks have been a fixture of mining operations for many years, thanks to well-structured private roads and dependable routes. Dump trucks drive essentially the same route over and over, which makes them an ideal target for autonomous technology.

Diginomica has a good rundown of Caterpillar’s latest work on self-driving mining vehicles in Australia.

“Fortescue Mining Group’s Solomon Hub comprises the Firetail and Kings Valley iron ore mines in the Pilbara region of Australia’s North West which together have a production capacity of over 70 mega tonnes each year. When the project was scoped in 2010, the initial feasibility study called for 75 manned trucks but in July 2011 FMG ordered 12 autonomous 793F vehicles as a pilot. Now with the mines up and running, FMG operates 54 driverless dumpsters which alone results in a $100 million capital saving on twenty trucks.”

There’s also this:

“By replacing the drivers, Westrac and Caterpillar also found they can make further cost savings by eliminating some comfort and safety features on the trucks with weight savings of up to four tonnes per vehicle.”

I’ve had a few people come to me recently asking about how to get up and running in this industry. I’m not that knowledgeable about mining, but the fact that people are asking makes me think this isn’t yet a solved problem.

Stealth versus Transparency

The California DMV reported this week that it had granted Apple a license to test three autonomous vehicles, and the Internet went wild.

Apple’s autonomous vehicle work has been an open secret for a few years, so I’m skeptical that this announcement will change much or lead the way to a more meaningful understanding of what Apple is working on.

Beyond just Apple, though, this has me thinking again about the tradeoff between stealth and transparency.

Transparency in product development seems like an aggressive approach. By opening up about what they’re doing, companies hope to attract the best talent, the best partners, and the earliest and best customers.

Conversely, a stealth approach seems cautious. Companies developing products in secret seem nervous about competitors and the press. Competitors might steal key elements of a developing product, while the press might pressure a company to alter its schedule, pricing, or go-to-market strategy.

All else equal, it seems more fun to be aggressive than cautious, but of course all else is never equal. A company that is in a good position and has a lot to lose has much more reason to be cautious and stealthy. A company in a poor position, with nothing to lose, is more likely to act aggressively and transparently.

What gets interesting is when a company like Apple, which seems to have nothing to lose in the automotive industry, approaches product development secretively anyway, perhaps because of its culture.

GM Expands in California

When GM announced its $1 billion acquisition of Cruise Automation, I was skeptical. Cruise was a San Francisco software startup; GM is a venerable American automotive company with a bureaucracy that I presume is optimized for rolling vehicles off the assembly line. It was not obvious that this was a match made in heaven.

But the acquisition is now over a year old and it seems to be working out really well.

The most recent news is that GM will be adding 1100 jobs to its San Francisco office over the next five years.

Self-driving Chevy Bolts are maybe not quite ubiquitous in San Francisco, but they’re normal enough to make me guess that GM is probably the number two tester of Level 4 autonomous vehicles in the Bay Area (and the world?), after Google.

Startup Watch: Luminar

Last fall, I went with some Udacity colleagues to a Silicon Valley Artificial Intelligence event that hosted a panel of speakers from startups in the world of self-driving cars.

One of the speakers was Austin Russell from a then-stealth company producing lidar. We asked his colleague about the name of the company and were told it was a secret.

Later in the event, the crowd started goading another attendee, George Hotz, into grilling the speakers. George rose to the occasion and asked Austin, “So, this all sounds great, but when is Luminar going to ship?”

So much for keeping the name secret.

This week, Luminar emerged from stealth with all sorts of details about the company, its product, and its first-ever production run of sensors, which starts this year.

Austin is a colorful and likeable character, and most of what I’ve read about Luminar quotes him stressing the superior performance of Luminar’s lidar sensors.

That’s awesome, but the main issue with lidar right now seems to be cost and volume, not performance.

Since Luminar has already talked about building 10,000 units, my question is: what’s the cost?

Autonomous Security

Wired has a good article out about hacking autonomous vehicles, and about “automotive attack surfaces” in particular.

The article centers on Charlie Miller, who several years ago hacked a Jeep and took it over remotely while it was driving on the highway (don’t worry, it was a demonstration, not a malicious attack).

Miller talks about the interesting problem of securing vehicles from ride-sharing passengers. In a world where anybody can hail and hop into a self-driving Uber or Lyft, securing those vehicles from hackers who are physically in the car can be a huge challenge.

One example is a hacker who gets into a self-driving car, uses the OBD-II port to install software on the system, and then gets out. Later on, the hacker might use the latent software to take over the car when other riders are inside.

Gives a whole new meaning to “carjacking”.

Miller talks about the “attack surface” of vehicles, which encompasses any opening an attacker can use to hack a vehicle. A quick search for “automotive attack surface” led me to the graphic above, which comes from an academic research paper by Checkoway, et al.

“We discover that remote exploitation is feasible via a broad range of attack vectors (including mechanics tools, CD players, Bluetooth and cellular radio), and further, that wireless communications channels allow long distance vehicle control, location tracking, in-cabin audio exfiltration and theft.”

The further complication is that ridesharing companies often layer their self-driving software and hardware on top of production vehicles built by somebody else. This creates a situation where the manufacturer may not have designed the car to be secure in the ways that the after-market modifier (in this case, the ridesharing company) needs.

Udacity Students on Computer Vision, Sensor Fusion, Deep Learning, and More

All sorts of interesting topics in this set of student posts, including some inside stories from the creator of ALVINN!

Emphatic Camera Calibration With OpenCV

Chris X Edwards

While trying to undistort his camera images, Chris walked into a store and asked to take a photo of their floor. Then things got really weird.

“I wrote a program that iterated through all possible grid sizes and looked at all images. Now I was finding grids. Ah ha! Turning to the documentation to figure out what exactly was going on, I noticed the function had a parameter, flags, which could be set to enable certain grid finding techniques. I set one of the flags and the grids I could detect changed quite a bit. Now I added to my program another inner loop to iterate through all the detection modes.”
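For anyone who wants to try the same brute-force trick, here is a minimal sketch of that search using OpenCV’s findChessboardCorners. The image loading, size bounds, and flag combinations are my assumptions, not Chris’s actual code.

```python
# A sketch of the brute-force grid search Chris describes, not his code.
# Assumes `images` is a list of grayscale numpy arrays.
import cv2

FLAG_MODES = [
    0,
    cv2.CALIB_CB_ADAPTIVE_THRESH,
    cv2.CALIB_CB_NORMALIZE_IMAGE,
    cv2.CALIB_CB_ADAPTIVE_THRESH | cv2.CALIB_CB_NORMALIZE_IMAGE,
]

def search_for_grids(images, max_size=12):
    """Try every (cols, rows) pattern and detection mode on every image."""
    hits = []
    for i, image in enumerate(images):
        for cols in range(3, max_size):
            for rows in range(3, max_size):
                for flags in FLAG_MODES:
                    found, corners = cv2.findChessboardCorners(
                        image, (cols, rows), flags=flags)
                    if found:
                        hits.append((i, cols, rows, flags, corners))
    return hits
```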

Output Appearance Reliability Estimation

Dean Pomerleau

Dean Pomerleau, the creator of ALVINN, responded to Param Aggarwal with some cool stories about how ALVINN took advantage of confusion in the network to estimate how confident it was about its own steering ability:

“Using the OARE technique and a related one called Input Reconstruction Reliability Estimation (IRRE), ALVINN was able to localize itself (e.g. ‘I’ve reached the fork in the road!’), tell the human safety driver (me) when it needed help, arbitrate between networks trained on different road types, and even tell when there was crap on the windshield in front of the camera obstructing its view of the road.”
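The specifics of OARE and IRRE predate modern frameworks, but the core idea is simple: run the input through a reconstruction path and treat high reconstruction error as a sign the network is outside its training distribution. A toy sketch, with a hypothetical model interface:

```python
# Illustrates the IRRE idea only; this is not ALVINN's implementation.
# `model` is a hypothetical network with a steering head (predict) and an
# autoencoder-style reconstruction head (reconstruct).
import numpy as np

def steer_with_reliability(model, image, error_threshold=0.05):
    """Return (steering_angle, reliable) for one camera frame."""
    steering = model.predict(image)
    reconstruction = model.reconstruct(image)        # hypothetical method
    error = np.mean((image - reconstruction) ** 2)   # per-pixel MSE
    return steering, error < error_threshold         # low error => trust it
```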

Cutting-edge (high-tech) career path.

Uki Dominique Lucas

Uki riffs here on all of the various projects he could be working on, how he chooses to spend his limited time, and where that intersects with career development.

“The next part of the career development is keeping up with the computer science basics. Honestly, it does not matter how much programming you do on daily basis, you will not pass the “whiteboard hazing” without any preparation. I lost countless of interviews with fine companies like Amazon, to what I thought was a “power trip” of some engineer without any social skills in a cookie factory — for years I was saying, “Why do I need that? I can make good money on my own”. Only later, I have read books and articles on interviewing and realized that the “whiteboard” is simply a thing they do and that people prepare for it for months.”

Vehicle detection using LIDAR: EDA, augmentation and feature extraction (Udacity/Didi challenge)

Vivek Yadav

Vivek goes into detail on his voxel-based approach for identifying cars based on the KITTI dataset for the Udacity-Didi Challenge. If you don’t know what a voxel is, read on:

“A voxel is a volume unit in space, similar to pixel in 2D images. I first constrained our space so x-dimension (front), y-dimension (L-R) varied between -30 and 30, and vertical dimension varied between -1.5 and 1 m. I next constructed voxels of width and length .1 m and height 0.3125 m. I then computed maximum height in each voxel and used this value as the height of the point cloud in that voxel. This gave us a height map of 600X600X5 features. We specifically chose 5 height maps because Udacity’s data uses vlp-16 lidar and having more fine discretization can result in height slices without any points.”
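Here is roughly what that voxelization looks like in numpy, using the bounds and resolutions from the quote. This is my sketch, not Vivek’s code; the number of vertical slices simply falls out of the chosen z resolution.

```python
# A sketch of a max-height voxel map; `points` is an (N, 3) numpy array of
# lidar returns in meters (x front, y left-right, z up).
import numpy as np

def lidar_height_map(points, x_range=(-30.0, 30.0), y_range=(-30.0, 30.0),
                     z_range=(-1.5, 1.0), xy_res=0.1, z_res=0.3125):
    keep = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]) &
            (points[:, 2] >= z_range[0]) & (points[:, 2] < z_range[1]))
    pts = points[keep]
    xi = ((pts[:, 0] - x_range[0]) / xy_res).astype(int)
    yi = ((pts[:, 1] - y_range[0]) / xy_res).astype(int)
    zi = ((pts[:, 2] - z_range[0]) / z_res).astype(int)
    shape = (int((x_range[1] - x_range[0]) / xy_res),  # 600 cells front-back
             int((y_range[1] - y_range[0]) / xy_res),  # 600 cells left-right
             int((z_range[1] - z_range[0]) / z_res))   # vertical slices
    # Start every voxel at the floor, then keep the max height seen in each.
    height_map = np.full(shape, z_range[0], dtype=np.float32)
    np.maximum.at(height_map, (xi, yi, zi), pts[:, 2])
    return height_map
```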

Make sense of Kalman Filter

An Nguyen

What is a Kalman filter? Why do we use it? An gives a more intuitive explanation here than you will find on Wikipedia:

“Assume the car makes the lane change successfully to get in front of me, I still continuously observe the car and adjust my speed so my car can always stay in the safe zone. If the car goes slow, I predict the car will still be slow in the next seconds and I’ll stay at a slow speed behind it. However, if it suddenly goes fast, I can speed up a little bit (as long as under speed limit) and update my belief. What I did there is a continuous process of prediction and update.”
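An’s car-following analogy maps directly onto the filter’s two steps. Here is a minimal one-dimensional sketch with a constant-velocity model and position-only measurements; the noise values are illustrative, not from An’s post.

```python
# A minimal 1-D Kalman filter: state x = [position, velocity].
import numpy as np

def kalman_step(x, P, z, dt=0.1, q=0.1, r=1.0):
    """One predict/update cycle for a noisy position measurement z."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
    H = np.array([[1.0, 0.0]])             # we only measure position
    Q = q * np.eye(2)                      # process noise
    R = np.array([[r]])                    # measurement noise
    # Predict: where do we believe the lead car will be next?
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend that prediction with the new measurement.
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 0.0]), np.eye(2)
for z in [1.0, 2.1, 3.0]:                  # noisy position readings
    x, P = kalman_step(x, P, z)
```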

Udacity Students at the Track, in the Didi Challenge, and Building Deep Learning Servers

Udacity Self-Driving Car students have been writing about the Self Racing Cars track day, the Didi Challenge, and building their own deep learning machines!

Self Racing Cars 2017 Photo Gallery — The Day Before

Kunfeng Chen

Udacity students were sponsored by PolySync to compete in the Self-Racing Cars track day at Thunderhill last weekend, and these photos show what it was like!

Self Racing Cars 2017 Photo Gallery — Day 1

Kunfeng Chen

Self Racing Cars 2017 Video Gallery — Shot on iPhone 6

Kunfeng Chen

Deep Learning PC Build

Tim Camber

Here’s how Tim built his own GPU-enabled deep learning machine. He provides helpful instructions, a bill of materials, and links to graphs comparing the value of different NVIDIA GPUs.

“The GPU is the main component of our system, and hopefully comprises a significant fraction of the cost of the system. ServeTheHome has a nice article in which they show the following graph of GPU compute per unit price.”

part.1: Didi Udacity Challenge 2017 — Car and pedestrian Detection using Lidar and RGB

This is one student’s journal of tackling the Udacity-Didi Challenge. Pay attention to the different neural network architectures he uses!

“Just from these 2 simple steps, I observed the following possible issues:

Small object detection. This is a well-known weakness in the original plain faster rcnn net.

Creation of 2d top view image could be slow. There are quite a number of 3d points needs to be processed

Now that I am sure that the implementation is correct, the next step will be to start training with the actual dataset, which contains many images.”
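On the speed concern in particular, binning every point at once in numpy, rather than looping over points in Python, usually makes top-view creation fast enough. A sketch under that assumption (the ranges and resolution are mine, not the author’s):

```python
# A vectorized bird's-eye-view density image from an (N, 3) point cloud.
import numpy as np

def top_view(points, side=30.0, res=0.1):
    edges = np.arange(-side, side + res, res)            # 0.1 m cells, +/- 30 m
    img, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                               bins=(edges, edges))
    return np.minimum(img, 255).astype(np.uint8)         # clip counts to 8 bits
```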

Voyage

Yesterday Udacity announced that my colleague, Oliver Cameron, is spinning out his own autonomous vehicle company, Voyage.

Friends have texted to ask if that means I’m now part of Voyage, and the answer is no.

I’m staying at Udacity to build the Self-Driving Car Engineer Nanodegree Program, which has thousands of students and is a lot of fun. We’ve launched modules on Deep Learning, Computer Vision, Sensor Fusion, and Localization, with development underway on Control, Path Planning, and System Integration, plus several elective modules.

If you’re reading this, you really should sign up for the program 😉

Oliver recruited me to Udacity, gave me lots of room to run, and has been a driving force in building the company for the last three years. While I wish him the best, it’s sad to see him go.

But Voyage is its own independent company, so this won’t affect Udacity’s mission to place our students in jobs with our many amazing hiring partners, like Didi, Mercedes-Benz, NVIDIA, Uber ATG, and many more.

The Udacity Open-Source Self-Driving Car

Last week my colleague Yousuf and I spoke at the Open Source Software for Decision Making Conference at Stanford.

It was a lot of fun! Thanks to Mykel Kochenderfer and Tim Wheeler for inviting us.

Yousuf and I spoke about building the Udacity open source self-driving car. If you’re interested in what Udacity and our students have done, check it out.

You can find all the presentations, including some pretty impressive academic work, at the conference website.

Human and Autonomous Machine Interaction

In a few weeks, I’ll be speaking at Car HMI USA, so please say hi if you’re there.

HMI stands for Human-Machine Interaction, and while I’m at the conference, I’m really excited to hear from UX and HMI engineers about what the future holds for riders of autonomous vehicles.

The Motley Fool predicts that self-driving cars will be great for Netflix and terrible for radio companies, which seems likely, but not particularly creative.

If we spend close to an hour per day in a self-driving car, how will we use that time?

Maybe we’ll use it like we use our leisure time: 55% watching TV, 14% socializing, and 8% gaming.

I like to think we can do better. We could use self-driving cars to spend more time with our families; maybe we’ll drag our kids to work with us and have the self-driving car take them home. Or maybe we’ll use that time for chores like paying the bills or online grocery shopping.

Anything but more TV.