Udacity Students on Deep Learning, Hacking, and Autonomous History

Great posts by Udacity Self-Driving Car students on diverse topics! End-to-end deep neural networks, hacking a car, and the history of autonomy.

End-to-end learning for self-driving cars

Alex Staravoitau

This is a concise, practical post detailing how Alex built his end-to-end network for driving a simulated vehicle. His discussion of balancing the dataset is particularly interesting.

Just as one would expect, the resulting dataset was extremely unbalanced and had a lot of examples with steering angles close to 0 (e.g. when the wheel is “at rest” and not steering while driving in a straight line). So I applied a designated random sampling which ensured that the data is as balanced across steering angles as possible. This process included splitting steering angles into n bins and using at most 200 frames for each bin.
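
Since the balancing step is the heart of the idea, here is a minimal sketch of that per-bin capping, assuming the steering angles and frame references live in NumPy arrays (the function and its parameters are illustrative, not Alex’s actual code):

```python
import numpy as np

def balance_by_steering(angles, frames, n_bins=25, max_per_bin=200, seed=0):
    """Keep at most max_per_bin frames in each steering-angle bin (sketch)."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(angles.min(), angles.max(), n_bins + 1)
    bin_idx = np.digitize(angles, edges[1:-1])  # bin index 0..n_bins-1 per frame
    keep = []
    for i in range(n_bins):
        in_bin = np.where(bin_idx == i)[0]
        if len(in_bin) > max_per_bin:
            # randomly subsample over-represented bins (e.g. near-zero angles)
            in_bin = rng.choice(in_bin, size=max_per_bin, replace=False)
        keep.extend(in_bin.tolist())
    keep = np.array(sorted(keep))
    return angles[keep], frames[keep]
```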

Jetson TX1 and ZED stereo camera warm up.

Dylan Brown

This is the latest in Dylan’s series on hacking his Subaru and turning it into a self-driving car. (This is not part of the Udacity program and we do not recommend this!) In this post, he unpacks his Jetson TX1 and gets the cameras to do some neat tricks.

The lighting conditions seem to make a difference with regard to depth accuracy. I’m excited to see how it performs outdoors. I plan to mount it just in front of my rear view mirror, where it will be mostly hidden from the driver’s field of view. I’m not sure about USB cable routing yet. It’s long enough to reach directly down to the dashboard, but I’d rather conceal it behind some interior panels.

2017: The year for autonomous vehicles

Bill Zito

This is a great historical summary of autonomy, starting with the wheel (really, starting with ALVINN) and going through current efforts at autonomous personal aircraft.

If you had come across this article 10 years ago, hardly anyone would have heard of autonomous cars, or thought them possible for that matter. Now, there are ~100 companies working on autonomous vehicles, dozens of which have already been operating semi-autonomous vehicles.

Udacity Students Experiment with Neural Networks and Computer Vision

The Udacity Self-Driving Car Engineer Nanodegree Program requires students to complete a number of projects, and each project requires some experimentation from students to figure out a solution that works.

Here are five posts by Udacity students, outlining how they used experimentation to complete their projects.

Self-Driving Car Engineer Diary — 4

Andrew Wilkie

Andrew has lots of images in this blog post, including a spreadsheet of all the different functions he used in building his Traffic Sign Classifier with TensorFlow!

I got to explore TensorFlow and various libraries (see table below), different convolutional neural network models, pre-processing images, manipulating n-dimensional arrays and learning how to display results.

Intricacies of Traffic Sign Classification with TensorFlow

Param Aggarwal

In this post, Param goes step-by-step through his iterative process of finding the right combination of pre-processing, augmentation, and network architecture for classifying traffic signs. 54 neural network architectures in all!

I went crazy by this point; nothing I would do would push me into the 90% range. I wanted to cry. A basic linearly connected model was giving me 85%, and here I am using the latest hotness of convolution layers and not able to match it.

I took a nap.

Backpropagation Explained

Jonathan Mitchell

Backpropagation is the most difficult and mind-bending concept to understand about deep neural networks. After backpropagation, everything else is a piece of cake. In this concise post, Jonathan takes a crack at summarizing backpropagation in a few paragraphs.

When we are training a neural network, we need to figure out how to alter a parameter to minimize the cost/loss. The first step is to find out what effect that parameter has on the loss. Then find the total loss up to that parameter’s point and apply the gradient descent update equation to that parameter.
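
To make that chain of steps concrete, here is one hand-worked gradient descent update for a single weight in a toy one-parameter model (all numbers made up for illustration):

```python
# Toy model: y_hat = w * x, squared-error loss L = (y_hat - y)**2 / 2.
x, y = 2.0, 10.0        # one training example
w = 1.0                 # the parameter we want to alter
learning_rate = 0.1

y_hat = w * x                   # forward pass
dL_dyhat = y_hat - y            # dL/dy_hat for squared error
dL_dw = dL_dyhat * x            # chain rule: dL/dw = dL/dy_hat * dy_hat/dw
w -= learning_rate * dL_dw      # gradient descent update
print(w)                        # 2.6: the weight moved toward lower loss
```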

Teaching a car to drive itself

Arnaldo Gunzi

Arnaldo presents a number of lessons he learned while designing an end-to-end network for driving in the Behavioral Cloning Project. In particular, he came to appreciate the power of GPUs.

Using a GPU is magic. It’s like giving a Coke to someone in the desert. Or buying a new car — the feeling of ‘how was I using that crappy old one?’ Or finding a shortcut on the route to the office: you’ll never use the long route again. Or finding a secret code in a game that gives superpowers…

Robust Extrapolation of Lines in Video Using Probabilistic Hough Transform

Esmat Nabil

Esmat presents a well-organized outline of his Finding Lane Lines Project and the computer vision pipeline that he used. In particular, he has a nice explanation of the Hough transform, which is a tricky concept!

The probabilistic Hough line transform is a more efficient implementation of the Hough transform. It gives as output the extremes of the detected lines (x0, y0, x1, y1). It is difficult to detect straight lines which are part of a curve because they are very, very small. For detecting such lines it is important to properly set all the parameters of the Hough transform. Two of the most important parameters are the Hough votes and the maximum distance between points which are to be joined to make a line. Both parameters are set at their minimum value.
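
Here is a small runnable sketch of those two parameters in OpenCV’s probabilistic Hough transform; the image and the parameter values are illustrative, not Esmat’s:

```python
import cv2
import numpy as np

# Synthetic test image: one white diagonal line on a black background.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.line(img, (20, 180), (180, 20), 255, 3)
edges = cv2.Canny(img, 50, 150)

lines = cv2.HoughLinesP(
    edges,
    rho=1,              # distance resolution in pixels
    theta=np.pi / 180,  # angular resolution in radians
    threshold=15,       # minimum Hough votes for a candidate line
    minLineLength=10,   # keep short segments: pieces of a curve are small
    maxLineGap=20,      # max distance between points joined into one line
)

if lines is not None:
    for x0, y0, x1, y1 in lines.reshape(-1, 4):
        print("segment:", (x0, y0), "->", (x1, y1))
```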

Udacity Students on Cutting-Edge Autonomous Vehicle Tools

Students in Udacity’s Self-Driving Car Engineer Nanodegree Program go above and beyond, building terrific implementations of vehicle detectors, lane line detectors, and neural networks for end-to-end learning, and sharing career advice.

Small U-Net for vehicle detection

Vivek Yadav

In the Vehicle Detection Project, students use standard computer vision methods to detect and localize vehicles in images taken from highway driving. Vivek went well beyond standard computer vision methods, and used U-Net, an encoder-decoder architecture that has proven effective for medical imaging. The results are astounding.

Another advantage of using a U-net is that it does not have any fully connected layers, therefore has no restriction on the size of the input image. This feature allows us to extract features from images of different sizes, which is an attractive attribute for applying deep learning to high fidelity biomedical imaging data. The ability of U-net to work with very little data and no specific requirement on input image size make it a strong candidate for image segmentation tasks.
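
To make the no-fully-connected-layers point concrete, here is a one-level Keras-style sketch of the encoder-decoder-with-skip-connection shape; it is not Vivek’s actual architecture, just a minimal illustration of the idea:

```python
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, concatenate
from keras.models import Model

# Fully convolutional, so height/width can stay unspecified (they must be
# even here so the upsampled path matches the skip connection).
inputs = Input(shape=(None, None, 3))
c1 = Conv2D(16, 3, activation='relu', padding='same')(inputs)
p1 = MaxPooling2D(2)(c1)                            # encoder: downsample
c2 = Conv2D(32, 3, activation='relu', padding='same')(p1)
u1 = UpSampling2D(2)(c2)                            # decoder: upsample
m1 = concatenate([u1, c1])                          # skip connection
outputs = Conv2D(1, 1, activation='sigmoid')(m1)    # per-pixel vehicle mask

model = Model(inputs, outputs)
```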

My Lane Detection Project in the Self Driving Car Nanodegree by Udacity

Param Aggarwal

Param provides a great walkthrough of his first project — Finding Lane Lines. He also includes a video that shows all of the intermediate steps necessary to find lane lines on the road. Then he applies his computer vision pipeline to a new set of videos!

This is the most important step: we use the Hough Transform to convert the pixel dots that were detected as edges into meaningful lines. It takes a bunch of parameters, including how straight a line should be to be considered a line, and what the minimum length of the lines should be. It will also connect consecutive lines for us, if we specify the maximum gap that is allowed. This is a key parameter for us to be able to join a dashed lane into a single detected lane line.

Extrapolate lines with numpy.polyfit

Peteris Nikiforovs

Leading up to the Finding Lane Lines project, we teach students about some important computer vision functions for extracting lines from images. These are tools like Hough transforms and Canny edge detection. However, we leave it to the students to actually identify which lines correspond to the lane lines. Most students find some points and extrapolate y=mx+b. Peteris went beyond this, though, and taught himself how to use the numpy.polyfit() function in order to identify the line equation automatically!

If we return to the original question: how do we extrapolate the lines?

Since we got a straight line, we can simply plug in points that are outside of our data set.
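
Here is a tiny sketch of that trick with numpy.polyfit, using made-up lane points (lane finders usually fit x as a function of y, since lane lines are near-vertical in the image):

```python
import numpy as np

xs = np.array([120, 180, 240, 300])   # detected lane-line pixels (made up)
ys = np.array([540, 480, 420, 360])

m, b = np.polyfit(ys, xs, 1)          # fit x = m*y + b through the points

# Extrapolate outside the data: bottom of the image and near the horizon.
for y in (540, 330):
    print(y, np.polyval([m, b], y))
```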

An augmentation based deep neural network approach to learn human driving behavior

Vivek Yadav

While training his end-to-end driving network for the Behavioral Cloning project, Vivek made use of extensive image augmentation. He flipped his images, resized them, added shadows, changed the brightness, and applied vertical and horizontal shifts. All of this allowed his model to generalize to an entirely new track that it had never seen before.

This was perhaps the weirdest project I did. This project challenged all the previous knowledge I had about deep learning. In general, a large number of epochs and training with more data result in better performance, but in this case any time I got beyond 10 epochs, the car simply drove off the track. Although all the image augmentation and tweaks seem reasonable now, I did not think of them a priori.
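
A sketch of what one such augmentation step can look like in OpenCV; the probabilities and constants below are illustrative, not Vivek’s actual values:

```python
import cv2
import numpy as np

def augment(image, angle):
    """Randomly flip, re-light, and shift one RGB training frame (sketch)."""
    rows, cols = image.shape[:2]

    # Horizontal flip: the steering angle flips sign with the image.
    if np.random.rand() < 0.5:
        image, angle = cv2.flip(image, 1), -angle

    # Brightness: scale the V channel in HSV space.
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV).astype(np.float32)
    hsv[:, :, 2] *= np.random.uniform(0.4, 1.2)
    image = cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8),
                         cv2.COLOR_HSV2RGB)

    # Horizontal/vertical shift; a horizontal shift also nudges the angle.
    tx, ty = np.random.uniform(-40, 40), np.random.uniform(-10, 10)
    M = np.float32([[1, 0, tx], [0, 1, ty]])
    image = cv2.warpAffine(image, M, (cols, rows))
    angle += tx * 0.004   # per-pixel steering correction (made-up constant)

    return image, angle
```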

But, Self-Driving Car Engineers don’t need to know C/C++, right?

Miguel Morales

Miguel’s practical post covers some of the different angles from which a self-driving car engineer might need to know C++, ROS, and other autonomous vehicle development tools. It’s a great read if you’re looking for a job in the industry!

Self-Driving Car Engineers use C/C++ to squeeze as much speed out of the machine as possible. Remember, all processing in autonomous vehicles is done in real-time, sometimes even on parallel architectures, so you will have to learn to code not just for the CPU but also for the GPU. It is vital for you to deliver software that can process large numbers of images every second (think of the common frame rates — 15, 30 or even 60 fps).

Udacity Students on Neural Networks, AWS, and Why They Enrolled in CarND

Here are five terrific posts by Udacity Self-Driving Car students covering advanced convolutional neural network architectures, how to set up AWS instances, and aspirations for CarND.

Traffic signs classification with a convolutional network

Alex Staravoitau

Alex took the basic convolutional neural network tools we teach in the program, and built on them to create a killer traffic sign classifier. He used extensive data augmentation, and an advanced network architecture with multi-scale feature extraction.

Basically, with multi-scale features it’s up to the classifier which level of abstraction to use, as it has access to outputs from all convolutional layers (i.e. features at all abstraction levels).
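
A minimal Keras-style sketch of that multi-scale idea, feeding the classifier flattened features from both convolutional stages rather than only the last one (the layer sizes are illustrative, not Alex’s):

```python
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, concatenate
from keras.models import Model

inputs = Input(shape=(32, 32, 3))
c1 = Conv2D(32, 5, activation='relu')(inputs)
p1 = MaxPooling2D(2)(c1)                    # low-level features
c2 = Conv2D(64, 5, activation='relu')(p1)
p2 = MaxPooling2D(2)(c2)                    # higher-level features

merged = concatenate([Flatten()(p1), Flatten()(p2)])  # multi-scale features
outputs = Dense(43, activation='softmax')(merged)     # 43 German sign classes

model = Model(inputs, outputs)
```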

Self Driving Car Nanodegree Experience So Far….

Sridhar Sampath

Sridhar has a fun summary of his experience in the program so far, including great detail about some of the sophisticated data augmentation and network architectures that he used. I also laughed when he mentioned why he enrolled.

So then why did I choose this course over other available courses? “The main reason was that I have experience in ADAS so this course was a perfect fit for my career passion”. Also, it was like a monopoly.

Detecting lanes

Subhash Gopalakrishnan

Subhash has clear and concise descriptions of the computer vision tools he uses for his Finding Lane Lines Project. A bonus section includes him trying to find lanes on roads in India!

The part remaining is to discover lines in the edge pixels. Before attempting this, we need to rethink a point in terms of all the lines that can possibly run through it. Two points will then have their own sets of possible lines, with one common line that runs through both of them. If we could plot the line-possibilities of these two points, both points would “vote” for the line that passes through both of them.

AWS setup for Deep Learning

Himanshu Babal

Himanshu has a great tutorial on how to set up an AWS EC2 instance with a GPU to accelerate deep learning. It includes tips on how to get free AWS credits! (I should note that since Himanshu wrote this we have included our own tutorial within the program, but this is still a great post and more free credits are always welcome!)

I will be helping you out with the following setup:
* AWS Account setup and $150 Student Credits.
* Tensorflow-GPU setup with all other libraries.

Udacity Will Help Me To Achieve My Goals

Mojtaba Valipour

Mojtaba joins us from Iran, which is really inspiring given the backdrop of world events right now. We are excited to have him and he is excited to be in the program!

Maybe Sebastian Thrun has no idea who I am and how much respect I have for him. I built an autonomous vehicle because I saw his course (Artificial Intelligence for Robotics); I learned a lot from him and about the power of ROS (Robot Operating System). I really love this field of study, and I have followed everything related to autonomous vehicles since 2004 (when DARPA started everything). And now I am in the first cohort of the Self-Driving Car Nanodegree (SDCND), thanks to David Silver, Todd Gore, Oliver Cameron, Stuart Frye, and other Udacians.

Udacity Students on Lane Lines, Curvature, and Cutting-Edge Network Architectures

Here is a terrific collection of blog posts from Udacity Self-Driving Car students.

They cover the waterfront — from debugging computer vision algorithms, to measuring the radius of curvature of the road, to using Faster-RCNN, YOLO, and other cutting-edge network architectures.

Bugger! Detecting Lane Lines

Jessica Yung

Jessica has a fun post analyzing some of the bugs she had to fix during her first project — Finding Lane Lines. Click through to see why the lines above are rotated 90 degrees!

Here I want to share what I did to investigate the bug. I printed the coordinates of the points my algorithm used to extrapolate the lines and plotted them separately. This was to check whether the problem was in the points or in the way the points were extrapolated into a line. E.g.: did I just throw away many of the useful points because they didn’t pass my test?

Towards a real-time vehicle detection: SSD multibox approach

Vivek Yadav

Vivek has gone above and beyond the minimum requirements in almost every area of the Self-Driving Car program, including helping students on the forums and in our Slack community, and in terms of his project submissions. He really outdid himself with this post, which compares using several different cutting-edge neural network architectures for vehicle detection.

The final architecture, and the title of this post, is the Single Shot Multibox Detector (SSD). SSD addresses the low-resolution issue in YOLO by making predictions based on feature maps taken at different stages of the convolutional network, as the layers closer to the image have higher resolution; it is as accurate as, and in some cases more accurate than, the state-of-the-art Faster-RCNN. To keep the number of bounding boxes manageable, an atrous convolutional layer was proposed. Atrous convolutional layers are inspired by the “algorithme à trous” in wavelet signal processing, where blank filters are applied to subsample the data for faster calculations.
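
In Keras, for instance, an atrous (dilated) convolution is just a Conv2D with a dilation rate; a one-line sketch:

```python
from keras.layers import Conv2D

# A 3x3 kernel with dilation_rate=2 spreads its taps apart, covering a
# 5x5 receptive field at the computational cost of a 3x3 one.
atrous_layer = Conv2D(64, 3, dilation_rate=2, padding='same', activation='relu')
```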

CNN Model Comparison in Udacity’s Driving Simulator

Chris Gundling

This is a fantastic post by Chris comparing and contrasting the performance of two different CNN architectures for end-to-end driving. Chris looked at an average-sized CNN architecture proposed by NVIDIA, and a huge, VGG-style architecture he built himself.

I experimented with various data pre-processing techniques, 7 different data augmentation methods and varied the dropout of each of the two models that I tested. In the end I found that while the VGG style model drove slightly smoother, it took more hyperparameter tuning to get there. NVIDIA’s architecture did a better job generalizing to the test Track (Track2) with less effort.

Hello Lane Lines

Josh Pierro

Josh Pierro took his lane-finding algorithm for a spin in his 1986 Mercedes-Benz!

From the moment I started project 1 (p1 — finding lane lines on the road) all I wanted to do was hook up a web cam and pump a live stream through my pipeline as I was driving down the road.

So, I gave it a shot and it was actually quite easy. With a PyCharm port of P1, cv2 (OpenCV), and a cheap web cam, I was able to pipe a live stream through my model!
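
The core of such a live loop is only a few lines of OpenCV; a sketch, where process_image() is a hypothetical stand-in for the project’s lane-finding pipeline:

```python
import cv2

cap = cv2.VideoCapture(0)             # first attached webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    annotated = process_image(frame)  # hypothetical P1 pipeline function
    cv2.imshow('lanes', annotated)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```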

Udacity SDCND : Advanced Lane Finding Using OpenCV

Paul Heraty

Paul does a great job laying out his computer vision pipeline for detecting lane lines on a curving road. He even compares his findings for radius of curvature to US Department of Transportation standards!

Overall, my pipeline looks like the following:

Let’s look at each stage in some detail

CarND Students on Preparation, Generalization, and Hacking Cars

Here are five great posts from students in Udacity’s Self-Driving Car Engineer Nanodegree Program, dealing with generalizing machine learning models and hacking cars!

SDC

Daniel Stang

Daniel has devoted a section of his blog to the Self-Driving Car projects, including applying his lane-line finder to video he took himself!

The first project for the Udacity Self-Driving Car Nanodegree was to create a software pipeline capable of detecting the lane lines in a video feed. The project was done using Python, with the bulk of the work performed using the OpenCV library. The video to the side shows the software pipeline I developed in action, using video footage I took myself.

Traffic Sign Classifier: Normalising Data

Jessica Yung

Jessica’s post discusses the need to normalize image data before feeding it into a neural network, including a bonus explainer on the differences between normalization and standardization.

The same range of values for each of the inputs to the neural network can guarantee stable convergence of weights and biases. (Source: Mahmoud Omid on ResearchGate)

Suppose we have one image that’s really dark (almost all black) and one that’s really bright (almost all white). Our model has to address both cases using the same parameters (weights and biases). It’s hard for our model to be accurate and generalise well if it has to tackle both extreme cases.
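
A minimal sketch of the usual remedy: map 8-bit pixels to a small symmetric range so dark and bright images land at comparable scales:

```python
import numpy as np

def normalize(images):
    """Scale pixel values from [0, 255] to roughly [-1, 1]."""
    return (images.astype(np.float32) - 128.0) / 128.0

dark = np.full((32, 32, 3), 20, dtype=np.uint8)     # almost-black image
bright = np.full((32, 32, 3), 235, dtype=np.uint8)  # almost-white image
print(normalize(dark).mean(), normalize(bright).mean())  # ~-0.84 vs ~0.84
```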

Hardware, tools, and cardboard mockups.

Dylan Brown

Dylan is a student in both the Georgia Tech Online Master’s in Computer Science Program (run by Udacity) and also in CarND. He’s also turning his own Subaru into a self-driving car! (Note: We do not recommend this.)

Below I’ve put together a list of purchases needed for this project. There will definitely be more items coming soon, at least a decent power supply or UPS. Thankfully, this list covers all the big-ticket items.

* Jetson TX1 Development Kit (with .edu discount), NVIDIA: $299
* ZED Stereo Camera with 6-axis pose, Stereolabs: $449
* CAN(-FD) to USB interface, PEAK-System: $299
* Touch display, 10.1”, Toguard: $139
* Wireless keyboard K400, Logitech: $30
* Total: $1216

Self-Driving Car Engineer Diary — 1

Andrew Wilkie

Andrew has a running blog of his experiences in CarND, including his preparation.

I REALLY want a deep understanding of the material, so I followed the recommendations of Gilad Gressel (course mentor): Essence Of Linear Algebra (for linear classifiers, which are step 1 towards CNNs), Gradients & Derivatives (for understanding backpropagation) and CS231n: Convolutional Neural Networks for Visual Recognition lectures (for a full understanding of neural networks and convolutional deep neural networks).

Comparing model performance: Including Max Pooling and Dropout Layers

Jessica Yung

Another post by Jessica Yung! This time, she runs experiments on her model by training with and without different layers, to see which version of the model generalizes best.

Mean of (training accuracy minus validation accuracy) over epochs 80-100 (lowest gap first):

Pooling and dropout (0.0009)
Dropout but no pooling (0.0061)
Pooling but no dropout (0.0069)
No pooling or dropout (0.0094)
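
A sketch of how such an experiment can be wired up in Keras, toggling the two layers in question (the architecture is illustrative, not Jessica’s):

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

def build(pooling=True, dropout=True):
    """Build one of the four variants: with/without pooling and dropout."""
    model = Sequential()
    model.add(Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)))
    if pooling:
        model.add(MaxPooling2D(2))
    if dropout:
        model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(43, activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```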

Udacity Self-Driving Car Students on Neural Networks and Docker

I’ve spent the past several days highlighting posts from Udacity Self-Driving Car students in areas related to computer vision, neural networks, careers, and more.

I’ll be doing more of that today, and for the next several days. Then we’ll return to our regularly scheduled programming, although I expect to periodically include more student posts each week.

Teaching a car to mimic your driving behaviour

Subodh Malgonde

Subodh summarizes his approach to building an end-to-end network for driving. He was even able to get his network to drive the car on a track it had never seen before!

The true test of a neural network is how well it performs on unseen data, i.e. data not used in training. To evaluate this, the simulator had a second track which was very different from the one used for training. It was darker, had slopes (while the first track was more or less flat), and had sharper turns and more right turns compared to the first track. The network had never seen data from this track. However, some of these differences were accounted for in the network due to the image augmentation techniques described above.

Udacity Self-Driving Car Nanodegree Project 2 — Traffic Sign Classifier

Jeremy Shannon

Jeremy exploited some pretty cool data augmentation techniques to help his traffic sign classifier generalize better. As a consequence, his model was able to correctly classify many different images that he found on the Internet.

Wait… a hundred percent? Seriously? I don’t know how this model can be that sure of its prediction. Granted, I gave it some real softballs there. But still, I would have expected them all to be in the 75–90% range. Whatever — I’ll take it! Good job, model!

Self-Driving Car Engineer Diary — 3

Andrew Wilkie

Andrew has a running journal documenting his journey through CarND. In this post, he provides great notes about the deep learning content the course provides in the run-up to the Traffic Sign Classifier Project.

Hi. Dived into part 1 of Deep Learning during the last 2 weeks. The course covered a lot of material over 7 lessons, defined with succinct, well commented, executable code … just how I like it!

Neural Network Tuning with TensorFlow

Param Aggarwal

Param built a neural network to classify traffic signs and in the process, spent a lot of time thinking about how to size the network, preprocess the images, and tune hyperparameters.

Or, how I struggled with Project 2: Traffic Sign Classification as part of the Self-driving Car Engineering Nanodegree from Udacity. In the end, the struggle taught me more than the results I got.

Docker Image for the Udacity Self-Driving Car Nanodegree (with UI)

Youcef Rahal

One of the big challenges when we launched the program was helping to get everybody’s software packages working. We finally published the Udacity CarND Starter Kit, but we also rely on students like Youcef to help their fellow students by guiding them on setup.

The image I created is at https://hub.docker.com/r/yrahal/udacity-carnd, and is built from the install steps required for the Lane Lines Finding Project (the first project of the Nanodegree). The Dockerfile I used to create the container is inspired by the Anaconda3 Dockerfile and by instructions on how to run a VNC server inside a container.

Udacity Students on Computer Vision, Neural Networks, and Careers

Finding the right parameters for your Computer Vision algorithm

maunesh

In this post maunesh discusses the challenges of tuning parameters in computer vision algorithms, specifically using the OpenCV library. maunesh built a GUI for parameter tuning, to help him develop intuition for the effect of each parameter. He published the GUI to GitHub so other students can use it, too!

For the Canny edge detection algorithm to work well, we need to tune 3 main parameters — the kernel size of the Gaussian filter, and the upper and lower bounds for hysteresis thresholding. More info on this can be found here. Using a GUI tool, I am trying to determine the best values of these parameters to use for my input.
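
The core OpenCV mechanism behind such a GUI is the trackbar; a minimal sketch (not maunesh’s actual tool, and 'road.jpg' is a placeholder for any test image):

```python
import cv2

def nothing(_):
    pass

img = cv2.imread('road.jpg', cv2.IMREAD_GRAYSCALE)  # placeholder image
cv2.namedWindow('tuner')
cv2.createTrackbar('low', 'tuner', 50, 255, nothing)
cv2.createTrackbar('high', 'tuner', 150, 255, nothing)

while True:
    low = cv2.getTrackbarPos('low', 'tuner')
    high = cv2.getTrackbarPos('high', 'tuner')
    blurred = cv2.GaussianBlur(img, (5, 5), 0)   # kernel size is the 3rd knob
    cv2.imshow('tuner', cv2.Canny(blurred, low, high))
    if cv2.waitKey(50) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()
```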

Behavioral Cloning For Self Driving Cars

Mojtaba Valipour

In this post, Mojtaba walks through the development of his behavioral cloning model in detail. I particularly like the graphs he built to visualize the data set and figure out which approaches would be most promising for data augmentation.

The first step in training a model for a specific dataset is always to visualize the dataset itself. There are many visualization techniques that can be used, but I chose the most straightforward option here.

Building a lane detection system using Python 3 and OpenCV

Galen Ballew

Galen explains his image processing pipeline for the first project of the program — Finding Lane Lines — really clearly. In particular, he has an admirably practical explanation of Hough space.

Pixels are considered points in XY space

hough_lines() transforms these points into lines inside of Hough space

Wherever these lines intersect, there is a point of intersection in Hough space

The point of intersection corresponds to a line in XY space
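
A toy voting demo of that idea, where each point casts a vote for every (theta, rho) line through it and the shared line collects the most votes:

```python
import numpy as np
from collections import Counter

votes = Counter()
for x, y in [(10, 20), (30, 40), (50, 60)]:       # three collinear points
    for theta_deg in range(180):
        theta = np.deg2rad(theta_deg)
        # Every (theta, rho) with rho = x*cos(theta) + y*sin(theta) is a
        # line through (x, y); the point votes for each of them.
        rho = int(round(x * np.cos(theta) + y * np.sin(theta)))
        votes[(theta_deg, rho)] += 1

print(votes.most_common(1))  # [((135, 7), 3)]: the line all three points share
```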

What kind of background do you need to get into Machine Learning?

Chase Schwalbach

This is a great post for anybody interested in learning about self-driving cars, but concerned they might not be up to the challenge.

I’ll put the summary right up top — if I can do it, you can too. I wanted to share this post to show some of the work I’m doing with Udacity’s Self-Driving Car Nanodegree, and I also want to share some of my back story to show you that if I can do it, there’s nothing stopping you. The only thing that got me to this point is consistent, sustained effort.

Self-driving car in a simulator with a tiny neural network

Mengxi Wu

Mengxi wasn’t satisfied with merely training a convolutional neural network that successfully learns end-to-end driving in the Udacity simulator. He systematically removed layers from his network and pre-processed the images until he was able to drive the simulated car with a tiny network of only 63 parameters!

I tried grayscale converted directly from RGB, but the car had some problems at the first turn after the bridge. In that turn, a large portion of the road has no curb, and the car goes straight through that opening into the dirt. This behavior seems to be related to the fact that the road is almost indistinguishable from the dirt in grayscale. I then looked into other color spaces, and found that the road and the dirt can be separated more clearly in the S channel of the HSV color space.
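
Extracting that channel is a one-liner in OpenCV; a sketch, with 'frame.jpg' standing in for any simulator frame:

```python
import cv2

image = cv2.imread('frame.jpg')                    # BGR frame (placeholder)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)     # road ~ dirt here
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
s_channel = hsv[:, :, 1]   # saturation separates road from dirt more cleanly
```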

More Udacity Self-Driving Car Students, In Their Own Words

Yesterday I shared 5 amazing blog posts by students in the Udacity Self-Driving Car Nanodegree Program.

Here are 5 more!

DeepTrafficJS Solution

Anton Pechenko

Anton is a student in the Udacity Self-Driving Car Program, and also in MIT’s class on Deep Learning for Self-Driving Cars. He is currently third in MIT’s Deep Traffic competition, and he reviews his deep neural network in this video. Pay attention to his choice of activation functions!

My first self-driving car

Bill Zito

Bill has a really nice walkthrough of some of the key lessons he learned while completing the final project in the Deep Learning Module. This project, Behavioral Cloning, requires students to drive a car in a simulator, record their driving data, use that data to train a neural network, and then use that network to drive the car.

This challenge is no walk in the park, and that’s part of what makes it really fun. You’re implementing similar code to the code that drives self-driving cars in real life. And you’re required to think through lots of the steps of the process by yourself to do so. If you end up stuck, remember that the experts just figured out this stuff was even possible in the last couple years.

German Traffic Sign Classification Using Deep Learning

Muddassir Ahmed

Muddassir gives a great explanation of what a neural network is, and what a convolutional neural network is — including the history behind them! Then he explains how he implemented the Traffic Sign Classifier Project.

The CNN was inspired by the work of Hubel and Wiesel back in the 1950s and 1960s. In their studies, they discovered that the mammalian brain is structured hierarchically and that objects are recognized through a hierarchical build-up of features, from small ones such as colors, stripes, and lines, into bigger ones such as patterns, and on to even larger concepts like dog, cat, human, etc.

Self-Driving Car Engineer Diary — 2

Andrew Wilkie

We put a lot of effort into making the first week of the program fun and rewarding, so students understand from the very beginning what they might be able to accomplish. Andrew has a great journal of how his first week in the program went.

Andrew Gray popped into our student ‘ama’ Slack channel for a 30-minute Q&A. Students were asking a lot about future job opportunities in this new self-driving vehicle industry. Some were concerned with the rapid rate of improvement, and that by the time we graduate (Sep/2017 for my Dec/2016 cohort) we might have missed the best opportunities. Andrew highlighted the fact that many existing companies outside of self-driving cars and trucks are actively pursuing combined AI / Robotics strategies, and that completing this intensive 9-month training readies us for this new industry.

Udacity Self-Driving Car Nanodegree Project 1 — Finding Lane Lines

Jeremy Shannon

Jeremy says some really nice things about the Udacity program in his post, and then outlines the steps he took to complete the Finding Lane Lines Project. This was my own very first project as an autonomous vehicle engineering student, so it is near and dear to my heart.

This is the best online course (or, should I say, collection of courses) I’ve taken so far. Yes, even better than Fire Safety Refresher Training. Really! The quality is top-notch (both video and written/supplemental material), the feedback is amazing, and the community they’ve built around it is incredibly helpful. (I wish that during my undergrad days I’d had an online forum I could go to and find that dozens of other students were having the same problem I was having.) It’s so easy to become completely immersed in the subject material, and I’m so thankful that this program exists. Udacity has really outdone themselves and I can’t possibly heap enough praise on them.

Self-Driving Car Student Posts

One of the most fun things about the Udacity Self-Driving Car Engineer Nanodegree Program is the student community. Thousands of students are active on Slack and in the forums, helping each other complete projects, understand concepts, and get jobs in the autonomous vehicle industry.

Students also write online about their experiences in the program and what they’ve accomplished. Here are five, in their own words.


Behavioral Cloning for Autonomous Vehicles

David Ventimiglia

It often pays to explore your data with relatively few constraints before diving in to build and train the actual model. You may gain insights that guide you to better models and strategies, and help you avoid pitfalls and dead-ends.


A transforming reality — Udacity’s Self Driving Car Nano-degree

Vishal Rangras

I was so into this program that I started talking about it with senior folks at my workplace. They encouraged me, supported me, and appreciated my love for the technology and research. They got interested in knowing more about the program, were stunned by how state-of-the-art it is, and wanted to help me pursue my dream of research. They extended a helping hand by arranging an online fundraiser to support me.


Studying for the Udacity SDCND, or How I got my law license

Michael Toback

Udacity has a great program, but they refer you to other sources as you go, both inside and outside of Udacity. Hurrying through it won’t help you, because people need time to develop and strengthen the neural pathways that help you really learn something.


Why I enrolled in Udacity Self-Driving Car Classes

Boris Dayma

I’m currently in a very different industry than tech and automotive (I’m actually in the oil & gas industry). However, I’ve always tried to apply the latest innovations to assist me in my daily activities. This has helped me automate the “boring tasks” so I can focus on more fun challenges, leading to new creative solutions.


Experiment Using Deep Learning to find Road Lane Lines

Paul Heraty

I modified a neural network that I had used in the SDCND Behavioral Cloning lab (5 CNN layers followed by 3 FCNN layers), and added 5 new outputs to it. So now the network looks like 5 CNN layers with 6x 3 FCNN layers. The outputs generate lane polynomial coefficients for both the left and right lanes, i.e. a*y² + b*y + c, where I’m predicting a, b & c for each lane.
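
Once the network emits (a, b, c) for a lane, drawing it is just a matter of evaluating the polynomial down the image rows; a small sketch with made-up coefficients:

```python
import numpy as np

a, b, c = 1.2e-4, -0.35, 380.0   # hypothetical network outputs for one lane
ys = np.linspace(0, 720, 10)     # image rows, top to bottom
xs = np.polyval([a, b, c], ys)   # lane x-position at each row: a*y**2 + b*y + c
```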