With the 2016–17 winter nearly behind us, the tally is in: The DPW's fleet of 200 self-driving snowplows destroyed 3,019 parked cars, killed or injured 29 stray cats, created 17,898 potholes/sinkholes and sent one elderly South Side man to the hospital after burying him in a snow drift.
"Yeah, I guess Milwaukee isn't quite ready for this technology," admits Kowolski. "We are considering 'hiring' monkeys to drive the plows next season."
But consider the vendor:
In its pilot program, the City had considered using well-tested self-driving plows built by Google and Tesla, but instead opted to install hardware from RadioShack in its existing trucks.
They tested the equipment in Mountain View, of course.
We work directly with recruiters at these companies to identify open positions that Udacity students might be interested in. Then we announce those positions in the Udacity Career Resource Center, and encourage students to apply.
Once students apply, we connect them to employers and guide them through the interview process.
Udacity has been focused on careers for several years, but this level of support is new to the Self-Driving Car Program, and we're really excited about it. As Sebastian said recently, you can't talk about education today without talking about jobs.
There is an angle here that ties into the Michigan vs. Silicon Valley competition for autonomous vehicle development, but that's not what interests me.
What interests me is what this move says about Big Data in automotive applications. Thus far, most autonomous vehicle development work has proceeded with relatively small amounts of data, certainly compared to the amount of data that companies like Google deal with.
Ford's investment in this new Flat Rock data center portends a future in which autonomous vehicle teams need to know about Hadoop and Spark, in addition to deep learning and robotics.
I'll be speaking at two conferences this week! So much talking.
Today I'll be talking at 4pm at the Global Data Science Conference about "How to Become a Self-Driving Car Engineer".
This is kind of last-minute (sorry!), but I have some free passes to give away for that conference, so send me an email (david.silver@udacity.com) if you want one.
One of the big challenges with working on cutting-edge technology is the lack of established tools to rely on. Sometimes you have to build your own.
Here are tools that different Udacity Self-Driving Car students built to help them solve problems related to deep learning, computer vision, and the Didi Challenge!
Alex provides great step-by-step analysis of his lane detection and vehicle tracking software. I really like his detailed explanation of the feature-tracking pipeline:
"After experimenting with various features I settled on a combination of HOG (Histogram of Oriented Gradients), spatial information and color channel histograms, all using YCbCr color space. Feature extraction is implemented as a context-preserving class (FeatureExtractor) to allow some pre-calculations for each frame. As some features take a lot of time to compute (looking at you, HOG), we only do that once for the entire image and then return regions of it."
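Alex's FeatureExtractor class itself isn't reproduced here, but a minimal sketch of that style of feature vector, built on skimage's HOG plus spatial binning and per-channel color histograms, might look like the following. The function name and parameter values are illustrative assumptions, not Alex's code.

```python
import cv2
import numpy as np
from skimage.feature import hog

def extract_features(rgb_image):
    """Illustrative HOG + spatial + color-histogram feature vector."""
    # Convert to YCbCr; note OpenCV's constant uses the Cr/Cb channel ordering.
    ycrcb = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2YCrCb)

    # 1. HOG features on each channel (the slow part of the pipeline).
    hog_features = [
        hog(ycrcb[:, :, ch], orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2), feature_vector=True)
        for ch in range(3)
    ]

    # 2. Spatial features: a small downsampled copy of the image, flattened.
    spatial = cv2.resize(ycrcb, (32, 32)).ravel()

    # 3. Color histograms, one per channel.
    hists = [np.histogram(ycrcb[:, :, ch], bins=32, range=(0, 256))[0]
             for ch in range(3)]

    return np.concatenate(hog_features + [spatial] + hists)
```

The pre-calculation Alex describes would sit on top of this: compute HOG once for the whole frame, then slice out per-window regions instead of recomputing.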
Jonathan built a really cool independent project to estimate vehicle speed from camera images. I really enjoyed his explanation of using optical flow for velocity:
"The Farneback method computes the Dense optical flow. That means it computes the optical flow from each pixel point in the current image to each pixel point in the next image."
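For readers who haven't used it, OpenCV exposes the Farneback method directly. Here is a minimal sketch of dense flow between two frames; reducing the flow field to an average magnitude is my own illustrative shortcut, not Jonathan's method.

```python
import cv2
import numpy as np

def dense_flow(prev_frame, next_frame):
    """Per-pixel (dx, dy) motion between two consecutive BGR frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)

    # Farneback dense optical flow: returns an HxWx2 array of displacements.
    # args after None: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Average flow magnitude is one crude proxy for apparent motion/speed.
    magnitude = np.linalg.norm(flow, axis=2)
    return flow, magnitude.mean()
```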
Galen is particularly interested in how to deploy neural networks in industry. To that end, he ran an experiment to see how well and how quickly various neural networks converged on classifying a training set:
"These networks (especially ResNet50 in this case) required extremely little training time and were relatively easy to implement. Once there is a proof of concept, it is a lot easier to write an optimized network that suits your needs (and maybe mimics the network you transfer learned from) than it is to both write and train from scratch."
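Galen's experiment itself isn't shown here, but the transfer-learning setup he describes is roughly this in Keras. The input size, head layers, and 10-class output below are placeholders, not his configuration.

```python
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# Start from ImageNet weights and keep the convolutional base frozen.
base = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False

# Add a small classification head for the (hypothetical) target classes.
x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)  # 10 classes as a placeholder

model = Model(inputs=base.input, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(...) now trains only the new head, which is why convergence is fast.
```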
One of the knocks on neural networks is that they're black boxes. Figuring out what drives their decisions is hard. Param built a tool to help visualize the internals of his network:
"On the right we have our Udacity Simulator running. On the left is my little React app that is visualizing all the outputs of the convolutional layers in my neural network."
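Param's visualizer runs in React, but on the model side, pulling out every convolutional layer's activations is a short Keras exercise. This sketch assumes a saved Keras model at a placeholder path; it is not Param's code.

```python
import numpy as np
from tensorflow.keras.models import Model, load_model

# Load a trained driving model (path is a placeholder).
model = load_model('model.h5')

# Build a second model that outputs every convolutional layer's activations.
conv_outputs = [layer.output for layer in model.layers if 'conv' in layer.name]
activation_model = Model(inputs=model.input, outputs=conv_outputs)

def conv_activations(image):
    """Return a list of feature maps, one per conv layer, for a single image."""
    batch = np.expand_dims(image, axis=0)  # the model expects a batch dimension
    return activation_model.predict(batch)
```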
Cherkeng is keeping a diary of his work on the Didi Challenge!
"During development, visualization is very important. It helps to ensure that the code implementation and mathematical formulation are correct. I first convert a rectangular region of the lidar 3D point cloud into a multi-channel top view image. I use the KITTI dataset for my initial development."
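Cherkeng's implementation isn't reproduced here, but the projection he describes, a rectangular lidar region flattened into a multi-channel top-view image, can be sketched in plain NumPy. The region bounds, resolution, and channel choices below are arbitrary placeholders.

```python
import numpy as np

def lidar_to_top_view(points, x_range=(0, 40), y_range=(-20, 20), resolution=0.1):
    """Project lidar points (N x 4: x, y, z, intensity) into a 2-channel
    top-view image: max height and max intensity per grid cell."""
    x, y, z, intensity = points.T

    # Keep only points inside the rectangular region of interest.
    mask = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]))
    x, y, z, intensity = x[mask], y[mask], z[mask], intensity[mask]

    # Map metric coordinates to integer pixel indices.
    rows = ((x - x_range[0]) / resolution).astype(int)
    cols = ((y - y_range[0]) / resolution).astype(int)

    height = int((x_range[1] - x_range[0]) / resolution)
    width = int((y_range[1] - y_range[0]) / resolution)

    # Cells with no points stay at zero (a simplification for the sketch).
    height_map = np.zeros((height, width), dtype=np.float32)
    intensity_map = np.zeros((height, width), dtype=np.float32)
    np.maximum.at(height_map, (rows, cols), z)
    np.maximum.at(intensity_map, (rows, cols), intensity)
    return np.dstack([height_map, intensity_map])
```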
When Udacity launched the Self-Driving Car Program last summer, Otto, which has since become Uber Advanced Technologies Group, was there from the beginning.
They only had 100 people in a semi-deserted warehouse, and not a lot of engineering talent to spare. But they have always been so enthusiastic to spend time teaching students about self-driving cars.
Drew Gray, in particular, is a wonder of an engineer. Whether we need help with deep learning, or control, or tying computer vision to trajectory planning, Drew knows everything and is always excited to help Udacity students.
There's a wrinkle at the end of one of our Computer Vision projects where we ask students to use lane lines to calculate the radius of curvature of the road. This seems like a weird thing to do, but it ultimately ties into trajectory planning. We only made the leap because Drew pointed it out to us.
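The calculation itself is short: fit a second-order polynomial x = ay² + by + c to a lane line in metric space (with y measured down the image), and the radius of curvature at a point follows from the standard formula R = (1 + (2ay + b)²)^(3/2) / |2a|. Here is a sketch with placeholder pixel-to-meter scales rather than calibrated values.

```python
import numpy as np

def radius_of_curvature(lane_xs, lane_ys,
                        m_per_pix_x=3.7 / 700, m_per_pix_y=30 / 720):
    """Radius of curvature (meters) of a lane line near the vehicle.

    lane_xs, lane_ys are detected lane-pixel coordinates; the meter-per-pixel
    scales are typical placeholder values, not calibrated numbers.
    """
    # Fit x = a*y^2 + b*y + c in metric units (y increases down the image).
    a, b, _ = np.polyfit(lane_ys * m_per_pix_y, lane_xs * m_per_pix_x, 2)

    # Evaluate curvature at the point closest to the vehicle (image bottom).
    y_eval = np.max(lane_ys) * m_per_pix_y
    return (1 + (2 * a * y_eval + b) ** 2) ** 1.5 / abs(2 * a)
```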
Uber has received a lot of criticism recently, and maybe deservedly so. I don't have much insight into that, and you can read better analyses elsewhere.
But I know the folks at Uber ATG, and they have been great partners and great advocates for students who want to get jobs working on autonomous vehicles. And I am grateful for that.
Now would be a good time to reiterate something I haven't posted in a while, which is that this is a personal blog. I have an agenda, which is pretty transparent in my Medium bio. But I don't speak for Udacity, or CarND students, and certainly not for Udacity partners.
These topics are taught by expert engineers from Mercedes-Benz and Uber ATG, who teach the skills they actually use on the job. These are also the skills students need to know to get jobs working on autonomous vehicles.
Although there's a lot of math and programming involved, there are also some good stories. Watch this video of Dominik and Andrei, from the Mercedes-Benz Sensor Fusion Team, talking with Sebastian about how the unscented Kalman filter got its name.
And if you'd like to understand what the unscented Kalman filter is, here's Dominik (this one involves more math):
If you want to explore different areas of computer vision, you should check out these awesome posts by Udacity students on different ways to use OpenCV to find lane lines.
Upon completing all of his Term 1 projects, Arnaldo wrote a high-level overview of all of the projects, and reflected on what he learned:
"It had a very practical focus: theory enough to understand the core concepts, and then, the practical application. It is a reason why it requires a lot of background. It is not a course on basic Python, or basic neural networks, but how to apply it in real cases."
Sujay has an incredibly thorough analysis of his computer vision pipeline for lane-finding, including a great debugging tool:
"This project involves fine tuning of a lot of parameters, like color thresholding and gradient thresholding values, to obtain the best lane detection. This can be trickier if the pipeline fails for a few video frames. To efficiently debug this I had to build a frame that captures multiple stages of the pipeline, like the color transformation, gradient thresholding, and line fitting on present and averaged past frames."
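Sujay's debugging frame itself isn't shown here, but the idea of tiling intermediate pipeline stages into one composite image is easy to sketch. The labels, tile size, and binary-mask assumption below are my own, not his code.

```python
import cv2
import numpy as np

def build_debug_frame(stages, tile_size=(320, 180)):
    """Tile intermediate pipeline images side by side for visual debugging.

    `stages` maps a label (e.g. 'color threshold') to an image; grayscale or
    binary stages are converted to 3 channels so everything stacks cleanly.
    """
    tiles = []
    for label, img in stages.items():
        if img.ndim == 2:
            # Assume a binary mask (0/1); scale to 0-255 and make it 3-channel.
            img = cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_GRAY2BGR)
        tile = cv2.resize(img, tile_size)
        cv2.putText(tile, label, (10, 25), cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (0, 255, 0), 2)
        tiles.append(tile)
    return np.hstack(tiles)
```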
Andrew posts fun images from his lane-finding pipeline, but what really caught my eye was his analysis of Udacity's Career Services:
"The SDC Engineer course emphasises job readiness and the Udacity team provides an excellent Careers Service built right in. We were asked to search for an advertised job that interested us, provide a resume tailored to either Entry Level, Prior Experience or Career Change and the associated Cover Letter. For a 'career changer' like me, I was surprised by the amount of self-reflection this caused."
Morgane is completing CarND while on a nine-month world tour, starting in Ecuador!
"Our next stop was Cuenca, Ecuador, where my fiance's immediate family lives. I had an amazing time there, visiting the city and Cajas National Park. The only issue is that people did not understand why I was spending so much time on my laptop! They expected me to be free all the time since I was on holidays. I had set up a routine where I'd work for a few hours in the morning while waiting for people to get up and at night. After explaining to them what my goal was and showing them what I was doing, they were definitely more understanding. They got particularly interested when I showed them how I had trained a neural network to drive a car in a simulator, and how I used computer vision and machine learning to recognize lanes and other vehicles on the road."
Liam took the introductory Udacity Lane-Finding Project and optimized it:
"I created a buffer to store the slope and y-intercept values for each line detected in the last N frames. The actual line drawn on the current frame is simply the average slope/intercept of all these lines. By continuously pushing the latest detected line onto this buffer and simultaneously dropping the oldest line, I can calculate a rolling mean of the lines over time, or what I call 'temporal smoothing'."
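Liam's buffer maps naturally onto a fixed-length deque. Here is a minimal sketch of the temporal smoothing he describes; the class and method names are mine, not his.

```python
from collections import deque
import numpy as np

class TemporalSmoother:
    """Rolling average of lane-line (slope, intercept) pairs over N frames."""

    def __init__(self, n_frames=10):
        # A deque with maxlen drops the oldest entry automatically.
        self.lines = deque(maxlen=n_frames)

    def update(self, slope, intercept):
        """Push the latest detection and return the smoothed line."""
        self.lines.append((slope, intercept))
        return np.mean(self.lines, axis=0)  # (avg_slope, avg_intercept)
```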
The very first partner to sign up was NVIDIA. The NVIDIA team is super-excited about the Udacity Nanodegree Program and is actively interviewing students in the program, even before they graduate.
If you'd like to learn more about how NVIDIA drives autonomous vehicle technology, watch the video we made with them: