Next Saturday, September 15, I am excited to speak at Navimotive, organized by Intellias, in Kiev, Ukraine!
Self-driving cars really are becoming a world-wide phenomenon, with contributions across the entire world. Intellias is a Ukrainian automotive supplier that works with companies like Volkswagen, HERE, and Yandex. There will also be speakers from GlobalLogic, CloudMade, MAPS.ME, and more.
See the prototype in action: 10km of autonomous highway driving!
One of the great joys of teaching self-driving cars at Udacity is watching the amazing things our students build. I’m based in Silicon Valley, but our students come from all over the world, and I have the opportunity on a daily basis to experience the truly global nature of this transportation revolution.
One of our Udacity self-driving car graduates, Shilpaj Bhalerao, is the tech lead for a small team building self-driving trucks in India. This is no small feat. India is even more of a challenge for self-driving cars than other parts of the world, due to the complexity of its traffic norms and the quality of its infrastructure. So it’s super-exciting to watch their 10km autonomous trucking run in action.
Check it out.
No matter where you are in the world, if your dream is to contribute to the future of autonomous transportation, then you can acquire the skills you need with Udacity. Check out our School of Autonomous Systems today.
Compared to the Waymo Safety Report released last year, the Ford document is a little more dense, but broadly similar. Both documents discuss how autonomous vehicles work, what sensors are involved, and what processes are in place to ensure safety.
Ford’s document includes more detail about training and procedures for safety operators, as you might expect in the wake of the Uber ATG collision in Arizona.
Ford also shares a bit more information about security. The report mentions efforts related to authentication, network segmentation, and physical partitioning.
All in all, it’s a good read to see how a larger autonomous vehicle organization operates.
“Driving around Naples at 25mph for a year is a triumph, but imagine a self-driving car that can navigate Naples, safely, at 40mph? Or faster? Equipped with an array of sensors and super-computers, self-driving vehicles could react quicker and achieve safer speeds well beyond the ability of human drivers.

Similarly, US highway capacity tops out at about 2,000 vehicles per hour, assuming human drivers. Self-driving (and potentially connected) cars, however, may be able to dramatically decrease the distance between vehicles, and even move in unison with other cars. Think of a pack of bumper-to-bumper race cars traveling at 100mph. The improvement in road capacity would be tremendous.”
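The quoted figures work out with some simple back-of-the-envelope arithmetic. The speeds and spacings below are illustrative assumptions on my part, not measured values:

```python
FEET_PER_MILE = 5280

def capacity_veh_per_hour(speed_mph, spacing_ft):
    """Vehicles passing a fixed point per hour, assuming uniform speed
    and spacing. spacing_ft is front bumper to front bumper: vehicle
    length plus the following gap."""
    return speed_mph * FEET_PER_MILE / spacing_ft

# Human drivers: ~65 mph with roughly 170 ft per vehicle (assumed)
# gives about the 2,000 vehicles/hour the quote cites.
print(round(capacity_veh_per_hour(65, 170)))   # ~2019

# Tightly packed autonomous cars: 100 mph at ~20 ft per vehicle
# (car length plus a small gap) is more than ten times the throughput.
print(round(capacity_veh_per_hour(100, 20)))   # 26400
```

The takeaway is that capacity scales linearly with speed but inversely with spacing, so shrinking the following gap dominates the gains.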
A few weeks ago Lyft and Aptiv announced their 5000th paid self-driving car ride in Las Vegas. The Lyft blog announcement quotes Raj Kapoor: “Lyft is the largest network currently deploying a commercial self-driving program to the public.”
I believe that is correct and may, in fact, slightly understate Lyft’s position. Lyft is one of the only companies offering self-driving rides to the general public, since Uber halted its autonomous vehicle testing program earlier this year.
Waymo and several other companies have self-driving car pilots available to pre-screened participants, but few companies are currently opening their network to any member of the general public.
This Thursday, August 23rd, at 9am Pacific Time, I will be hosting an online open house for Udacity’s School of Autonomous Systems. RSVP now to join me, my Udacity colleagues, our alumni, and other potential students, and learn about our many exciting programs!
At the Open House, I’ll share an overview of each program, compare them, and describe the careers for which each option prepares you.
We’ll finish with a live question-and-answer session, co-hosted by myself and several of my Udacity instructional colleagues, to answer as many of your questions as we can.
RSVP now to join us on Thursday! And don’t worry if you can’t make it! RSVP via the link, and we’ll send you the recording of the open house so you don’t miss out!
Self-Driving Fundamentals: Featuring Apollo is a terrific, free introduction to how self-driving cars work, through the lens of the Apollo open-source self-driving car project.
It’s perfect for beginners who are in the exploratory phase of their autonomous systems journey, and it does a great job demonstrating how existing basic programming and data skills can be applied to this field.
Upon successfully completing the program, you’ll be ready to combine your newly-acquired self-driving car fundamentals with your existing programming skills to enroll in our Intro to Self-Driving Cars Nanodegree program.
The course begins with an overview of self-driving cars, and how they work. From there, we dive into Apollo and its multi-layer architecture: reference vehicle, reference hardware, open-source software, and cloud services.
This is a great opportunity to learn the key parts of self-driving cars, and get to know the Apollo architecture. You’ll utilize Apollo HD maps, localization, perception, prediction, planning and control, and start learning the fundamentals of how to build self-driving cars.
Get the inside scoop on how self-driving cars work, and learn about career opportunities in the autonomous vehicle industry!
A few days ago I hosted a webinar with my colleague Jane Sheppard about how self-driving cars work, and about career opportunities in the autonomous vehicle industry. If you’re interested in self-driving cars, you should watch!
I talk about business models in the self-driving car space, run down who’s focused on hardware vs. software, and describe all the players in the arena, from the smallest startups to the biggest global organizations (you wouldn’t believe how much money some of the big companies are putting into this technology!). I specifically discuss Tesla’s consumer sales model, and the impact and involvement of ride-sharing companies, and that’s all before I even get to the core technologies that make self-driving cars possible!
If you want to get the low-down on perception, localization, and planning, just hit play below!
“If sensor fusion and computer vision tell you what the world looks like, localization tells you where you are in that world.”
We thought that if we could give people the best education available anywhere, those students would form the next generation of autonomous vehicle engineers. And they would go out into the industry, and look back to Udacity as a source of talent from which to hire.
At that point, the industry would be open to anybody, anywhere, with the desire and passion to learn how self-driving cars work.
Our partners at Mazda just sent over a job description that includes the “Self-Driving Car Nanodegree” as a qualification, and it makes me feel like we’re getting closer to that goal.
In these projects, students showcase initiative, creativity, and work ethic, as they build projects focused on topics like perception, deep learning, and computer vision.
Their intense interest in these topics translates directly to the high quality of their work. Today, I’d like to share three especially impressive student projects with you that cover these areas!
I love this project! Mohammad did it on his own, and he went way beyond the requirements of the Nanodegree program to do so. That’s going to serve him well in the long run, because employers love it when talented students get so deep into particular subjects that they start building their own projects to further flesh out their ideas and test their skills.
“In this project I have used a pre-trained ResNet50 network, removed its classifier layers so it becomes a feature extractor, and then added the YOLO classifier layer instead (randomly initialized). I then trained the network on Udacity’s CrowdAI dataset to detect cars in video frames.”
Enrique used VGG-16 to create a semantic segmentation network for the Advanced Deep Learning project in the Nanodegree program. He trained that network using the KITTI dataset, and then applied the network to scenes he recorded driving in Mexico. Check out his YouTube videos!
“The original FCN-8s was trained in stages. The authors later uploaded a version that was trained all at once to their GitHub repo. The version in the GitHub repo has one important difference: The outputs of pooling layers 3 and 4 are scaled before they are fed into the 1×1 convolutions. As a result, some students have found that the model learns much better with the scaling layers included. The model may not converge substantially faster, but may reach a higher IoU and accuracy.”
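The scaling step described above is small enough to show inline. Here is a minimal PyTorch sketch, assuming VGG-style pool3/pool4 channel counts and the scale factors from the FCN authors’ released all-at-once model:

```python
import torch
import torch.nn as nn

NUM_CLASSES = 2  # e.g. road / not-road for the KITTI segmentation task

# 1x1 convolutions that turn pooled VGG features into per-class scores.
score_pool3 = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)
score_pool4 = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)

# Hypothetical pooling-layer outputs for a 224x224 input.
pool3 = torch.randn(1, 256, 28, 28)
pool4 = torch.randn(1, 512, 14, 14)

# The all-at-once FCN-8s scales these activations before the 1x1
# convolutions (the released model uses 0.0001 and 0.01), which keeps
# the skip connections from swamping the coarse prediction stream.
s3 = score_pool3(pool3 * 1e-4)
s4 = score_pool4(pool4 * 1e-2)
```

The scaled skip scores are then upsampled and summed with the coarse stream, as in the original FCN-8s.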
Moataz built a vehicle detection pipeline combining histogram of oriented gradients, support vector machines, and sliding window search. I particularly like the heatmap he employs to reduce false positives in vehicle detection. This is a great example of going beyond the steps outlined in the Nanodegree program, to build a truly standout project.
“Now given the simplicity of the SVM model, we expect some detections to be false positives. In order to filter out these incorrect detections, one approach is to threshold our positive windows such that we only pick areas where more than one window overlap. In essence we are generating a heatmap of the positive windows.”
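The heatmap idea Moataz describes fits in a few lines of NumPy. The window coordinates and threshold below are illustrative, not taken from his project:

```python
import numpy as np

def add_heat(heatmap, windows):
    """Add 1 to every pixel inside each positive detection window."""
    for (x1, y1, x2, y2) in windows:
        heatmap[y1:y2, x1:x2] += 1
    return heatmap

def apply_threshold(heatmap, threshold):
    """Zero out pixels covered by `threshold` or fewer windows."""
    heatmap = heatmap.copy()
    heatmap[heatmap <= threshold] = 0
    return heatmap

# Two overlapping windows on a real car, plus one lone false positive.
windows = [(10, 10, 50, 50), (30, 30, 70, 70), (200, 10, 240, 50)]
heat = add_heat(np.zeros((256, 256)), windows)
heat = apply_threshold(heat, threshold=1)
# Only the 20x20 region where the first two windows overlap survives;
# the isolated window is discarded as a false positive.
```

From the thresholded heatmap, connected regions (e.g. via `scipy.ndimage.label`) give the final bounding boxes.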
Udacity practices project-based learning, which means all of our students in all of our Nanodegree programs build projects like these. This approach enables you to learn practical skills, and to build a dynamic portfolio populated with completed projects that clearly showcase your new skills and experience.