I am super excited that today Udacity launched the C++ Nanodegree Program! My team and I have been building this for the last several months and we can’t wait to share it with students. 💻
There are so many jobs available for C++ engineers. 😄
One of my favorite parts of building this program was the opportunity to talk with C++ creator Bjarne Stroustrup. Bjarne cares a lot about teaching C++ well, and he was incredibly generous with his time and advice on the curriculum. He also graciously sat for many videos that appear in the program, in which he explains how different features of the language work, why those features came about, and the right way to use them.
The Nanodegree Program is composed of five courses, each lasting one month:
Foundations: Learn the basics of “modern” C++ (C++17!) syntax and operators. You’ll finish this course by building a real-world route planner using OpenStreetMap data!
Object-Oriented Programming: Design programs using object-oriented C++ features, including classes and templates. The final project for this course is to implement an htop-like process manager for Linux (we provide a full Linux desktop through your browser!).
Memory Management: Grasp the power of C++ by learning how to manage resources on the stack and the free store. In particular, learn how to leverage Resource Acquisition Is Initialization (RAII) principles to scope your resources and handle them automatically!
Concurrency: Parallel processing has been a key driver of the adoption of C++ into real-time and embedded systems, like self-driving cars. In this course, you’ll exploit parallel processing to accelerate your programs, starting with parallel implementations of standard library algorithms and moving all the way to thread synchronization and communication.
C++ is such an important skill, and I think this program teaches “modern” C++ in a really intuitive and hands-on way, just like all Udacity courses.
Check out the Nanodegree Program and enroll today!
This strikes me as so surprising that I feel like I have to preface it by stating that I’m pretty sure it’s not an April Fool’s joke.
Tencent, the Chinese Internet giant, has a division called the Keen Security Lab, which focuses on “cutting-edge security research.” Their most recent project has been to hack Tesla vehicles, which they demonstrate in this video:
The hacks have made some press for demonstrating the potential for adversarial attacks: basically, tricking a neural network with carefully crafted inputs. Tencent researchers ultimately were able to place a few stickers in an intersection and trick the car into switching lanes into (potentially) oncoming traffic.
I am skeptical that adversarial attacks pose much of a real-world threat, at least to self-driving cars. But fixating on the attack ignores the most interesting part of this research.
In order to get this far, the researchers had to hack Tesla Autopilot, and in so doing, they appear to have discovered and published a surprising amount about how Autopilot works.
Want to know the architecture of Tesla’s computer vision neural network? It’s published on page 29 of the paper.
The paper states that, “for many major tasks, Tesla uses a single large neural network with many outputs, and lane detection is one of those tasks.” It seems like if you spent a little while investigating what was going on in that network, you might be able to figure out a lot about how Autopilot works.
The paper is forty pages long, and the English is good but not perfect, so it takes a little while to read. I confess I’ll need to spend more time with it to really understand the ins and outs.
But there are some more good nuggets:
“Both APE and APE-B are Tegra chips, same as Nvidia’s PX2. LB (lizard brain), is an Infineon Aurix chip. Besides, there is a Parker GPU (GP106) from Nvidia connected to APE. Software image running on APE and APE-B are basically the same, while LB has its own firmware.”
“ (By the way, we noticed a camera called “selfie” here, but this camera does not exist on the Tesla Model S.)” [DS: Driver monitoring system? On what model? Supposedly they are using a Model S 75 for all of this research.]
“Those post processors are responsible for several jobs including tracking cars, objects and lanes, making maps of surrounding environments, and determining rainfall amount. To our surprise, most of those jobs are finished within only one perception neural network.”
“Tesla uses a large class for managing those functions(about “large”: the struct itself is nearly 900MB in v17.26.76, and over 400MB in v2018.6.1, not including chunks it allocates on the heap). Parsing each member out is not an easy job, especially for a stripped binary, filled with large class and Boost types. Therefore in this article, we won’t introduce a detailed member list of each class, and we also do not promise that our reverse engineering result here is representing the original design of Tesla.”
“Finally, we figured out an effective solution: dynamically inject malicious code into cantx service and hook the “DasSteeringControlMessageEmitter::finalize_message()” function of the cantx service to reuse the DSCM’s timestamp and counter to manipulate the DSCM with any value of steering angle.”
“rather than using a simple, single sensor to detect rain or moisture, Tesla decided to use its second-generation Autopilot suite of cameras and artificial intelligence network to determine whether & when the wipers should be turned on.”
“We found that in order to optimize the efficiency of the neural network, Tesla converts the 32-bit floating point operations to the 8-bit integer calculations, and a part of the layers are private implementation [DS: emphasis mine], which were all compiled in the “.cubin” file. Therefore the entire neural network is regarded as a black box to us.”
“The controller itself is kind of complex. It will receive tracking info, locate the car’s position in its own HD-Map, and provide control instructions according to surrounding situations. Most of the code in controller is not related to computer vision and only strategy-based choices.”
If this is all true, then the team reverse-engineered Tesla’s entire software stack on the way to implementing an adversarial neural network attack. The reverse engineering strikes me as the amazing part.
As a Virginian, I’m super-excited to learn that Daimler Trucks has purchased Torc Robotics (technically, Daimler purchased a controlling interest, a distinction that has not been fully explained).
Torc is based out of Blacksburg, Virginia, which is a tiny town that exists only as the site of Virginia Tech. As you might expect, Torc is a Virginia Tech spin-out, dating all the way back to Tech’s overlooked third-place finish in the 2007 DARPA Urban Challenge.
I’m not entirely sure how Torc has survived from 2007 until now. Coming from Virginia, I expect the answer is “federal contracts”, or more specifically, “military contracts”.
But somehow Torc managed to keep the lights on for over a decade until the self-driving car boom of 2017–present. And now they are an important part of the autonomous vehicle strategy for the largest truck manufacturer in the world.
This is a free, short synopsis of what robotics is, what jobs are available, what skills are necessary to get those jobs, how much those jobs pay, and what companies are hiring.
Cruise, already staffed at about 1,000 people, is looking to double in size, primarily by hiring engineers.
Cruise probably has driven more miles autonomously than any company except Waymo — maybe into the mid-single-digit millions of miles. Waymo has somewhere between 15 and 20 million miles.
Waymo, for comparison, also has roughly 1,000 employees.
Cruise has been hoovering up billions of dollars in investment from companies like SoftBank. Part of the SoftBank playbook involves growing so big, so fast, that nobody wants to challenge you.
The autonomous vehicle industry is constrained by a number of factors besides just cash: hardware, safety, engineers. But cash solves a lot of problems, so hold onto your seat.
Meya used the scholarship to build a portfolio that landed her a software engineering role at Workday. Ana applied her computer vision skills to a project she’s developing for a Fulbright Scholarship. And Hirza used her new skills to transition from a test engineer role to a software development engineer role.
Udacity student stories are great, and these are especially moving. Check it out.
I was impressed that Chuck Price from TuSimple mentioned they have 50 trucks on the road and are already hauling real freight between Arizona and Texas. Sounds like California is coming soon.
Tomorrow (Sunday), I will be speaking on the AI-AI-Oh! panel, about training data for machine learning. You should come!
We are going head-to-head for audience size with the CTO of Walmart, who is presenting about shopping in the conference room next door, and I want to win.
This is my first time at SXSW and goodness is it an overwhelming event. There must be thousands of sessions over 10 days, and the organizers are always adding more. Somehow I missed that Malcolm Gladwell is interviewing Chris Urmson right now!
I present tomorrow, and I’ll be here for a few more days after that, so let me know if you’d like to say hello!
The weird thing is that so many teams evaluate this landscape and decide to build their own solutions. Waymo’s Carcraft is the most famous, but we made the same decision at Udacity a few years ago. We evaluated the simulation solutions on the market in the fall of 2016, found that none of them quite met our needs, and decided to build our own simulators in Unity.
That struck me as pretty interesting, too. We’ve seen lots of partnerships in the traditional automotive industry, as car makers position themselves to compete with tech and ridesharing companies. Announcements from the tech and ridesharing space, however, tend to feel more like supplier relationships than partnerships. This blog post has the feel of an Uber-Voyage-Applied Intuition partnership.
I was at work late tonight and missed my train, so I splurged on a ride home. My driver was pretty talkative, and told me he had done 8,900 rides and tracked the resulting data meticulously.
He volunteered a number of observations that struck me.
Demographics. His most common passenger is a solo female rider. He had a number of hypotheses for this, but none of them struck me as obviously correct. One hypothesis that might be incorrect in his particular case, but correct more generally, is the urban gender divide. Ridesharing is primarily an urban phenomenon, and my intuition is that women outnumber men in urban areas (I’m having a surprisingly hard time finding a link that discusses this, though). San Francisco itself, however, has a basically equal gender distribution.
Phones. Clear age divide in riders who talk to the driver. People under 25 look at their phones the whole ride.
Tipping. Older riders are more likely to tip. I think of this largely as a form of self-imposed price discrimination. Shared ride customers are less likely to tip.
Duration. Rides in San Jose (a less dense city) tend to be much longer.
Lost and Found. Charging your phone in the car increases your likelihood of leaving it behind by 5–10x, according to this driver.
I wonder which of these observations would impact self-driving cars. The male-female divide caught my attention, especially because of perceived safety. As an engineer, I think mostly about the safety of the virtual driver system, but female passengers in particular might also consider the safety issues related to entering a stranger’s car.
Tipping seems important for a few reasons. It’s a form of price discrimination that will probably vanish with autonomous driving systems. It’s also a kind of Coasean division, like franchising. This driver seemed really concerned with providing services that would generate tips in his car, in a way that I imagine would be hard to scale to a whole fleet.