CNBC reports that Apple is in discussions with “at least four companies as possible suppliers for next-generation lidar sensors in self-driving cars.”
The report also suggests that, “The iPhone maker is setting a high bar with demands for a ‘revolutionary design.’…In addition to evaluating potential outside suppliers, Apple is believed to have its own internal lidar sensor under development.”
If anything, Apple’s hardware design strengths should make this an even easier task for Apple than it was for Waymo, so it seems entirely plausible that Apple could pull this off.
The question is: to what end?
I know very little about why Waymo started designing its own lidar, but I know they started building self-driving cars with the Velodyne HDL-64 “chicken bucket” model.
My guess is that Google began developing their own lidar several years ago not because they needed a much better sensor, but rather because they couldn’t get enough sensors of any type.
Several years ago, when Google would have begun developing its lidar program, Velodyne was one of the only lidar manufacturers in the world. And even Velodyne was severely constrained in the number of units it could produce. There was a period a few years ago when the waiting list to buy a Velodyne lidar unit was months long.
In that world, it would have made a lot of sense for Google to begin developing its own lidar program. That would’ve reduced one possible bottleneck for building self-driving cars at scale.
Fast-forward to 2019. Velodyne has taken massive investment capital to build lidar factories, and there are upwards of sixty lidar companies (mostly startups) developing sensors. Today, there isn’t the same need or urgency to develop custom lidar units. In fact, all of those lidar startups are basically doing that on their own.
So it’s not totally clear to me what Apple would gain from creating their own lidar program.
Volkswagen announced it is testing (present tense) self-driving cars in Hamburg. The press release details that there are five self-driving e-Golfs testing on a three-kilometer stretch of road in the city.
This would be a minor announcement in the US, where a number of different companies are testing fleets of this size (or bigger) within geofences of this size (or bigger). But surprisingly little testing has happened on public roads in Germany, so it is terrific to see Volkswagen take this step. This might actually be the first major test I can recall in that country.
That said, the press release is a little coy on the exact setup. While the scenario is described as “real driving conditions”, the test is also said to be taking place in a special autonomous vehicle “test bed” that is still under construction.
My sense is that this test is probably not on truly “public” roads that any regular driver might pass through. That said, it seems like a good precursor to that kind of test.
From the press release: “This is the first time Volkswagen has begun to test automated driving to Level 4 at real driving conditions in a major German city. From now, a fleet of five e-Golf, equipped with laser scanners, cameras, ultrasonic sensors and radars, will drive on a three-kilometer section of the digital test bed for automated and connected driving in the Hanseatic city.”
The press release does have some interesting and specific details about the vehicles themselves:
“The e-Golf configured by Volkswagen Group Research have eleven laser scanners, seven radars and 14 cameras. Up to 5 gigabytes of data are communicated per minute during the regular test drives, each of which lasts several hours. Computing power equivalent to some 15 laptops is tucked away in the trunk of the e-Golf.”
This strikes me as so surprising that I feel like I have to preface it by stating that I’m pretty sure it’s not an April Fool’s joke.
Tencent, the Chinese Internet giant, has a division called the Keen Security Lab, which focuses on “cutting-edge security research.” Their most recent project has been to hack Tesla vehicles, which they demonstrate in this video:
The hacks have made some press for demonstrating the potential for adversarial attacks: basically, tricking a neural network. Tencent researchers ultimately were able to place a few stickers in an intersection and trick the car into switching lanes into (potentially) oncoming traffic.
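For readers who haven’t encountered the concept, here is what an adversarial example looks like in its textbook form. This is a minimal sketch of the generic fast gradient sign method against an image classifier, not Tencent’s physical sticker attack; the `model`, `image`, and `true_label` are assumed to be supplied by the caller.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, true_label, epsilon=0.03):
    """Generic fast-gradient-sign-method sketch: nudge each pixel in the
    direction that increases the model's loss, producing an input that looks
    unchanged to a human but can flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Perturb each pixel by +/- epsilon along the sign of the loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

A physical attack like the sticker demo is much harder than this, because the perturbation has to survive camera angles, lighting, and distance, which is part of why I remain skeptical.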
I am skeptical that adversarial attacks pose much of a real-world threat, at least where self-driving cars are concerned. But focusing on the attack ignores the most interesting part of this work.
In order to get this far, the researchers had to hack Tesla Autopilot, and in so doing, they appear to have discovered and published a surprising amount about how Autopilot works.
Want to know the architecture of Tesla’s computer vision neural network? It’s published on page 29 of the paper:
The paper states that, “for many major tasks, Tesla uses a single large neural network with many outputs, and lane detection is one of those tasks.” It seems like if you spent a little while investigating what was going on in that network, you might be able to figure out a lot about how Autopilot works.
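To make the “single network, many outputs” idea concrete, here is a toy sketch of a multi-task network with one shared backbone and several task heads. The layer sizes and head names are invented purely for illustration and have nothing to do with Tesla’s actual architecture.

```python
import torch.nn as nn

class MultiTaskPerceptionNet(nn.Module):
    """Toy multi-output network: one shared backbone, several task heads.
    All names and dimensions are hypothetical."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Each head reads the same shared features.
        self.lane_head = nn.Linear(64, 10)     # e.g. lane-boundary parameters
        self.object_head = nn.Linear(64, 20)   # e.g. object scores
        self.rain_head = nn.Linear(64, 1)      # e.g. rainfall estimate

    def forward(self, x):
        features = self.backbone(x)
        return {
            "lanes": self.lane_head(features),
            "objects": self.object_head(features),
            "rain": self.rain_head(features),
        }
```

The appeal of this design is that every task shares one expensive feature extractor, so the heads are cheap to add; it also means that probing one output (like lane detection) tells you a lot about the network as a whole.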
The paper is forty pages long, and the English is good but not perfect, so it takes a little while to read. I confess I’ll need to spend more time with it to really understand the ins and outs.
But there are some more good nuggets:
“Both APE and APE-B are Tegra chips, same as Nvidia’s PX2. LB (lizard brain), is an Infineon Aurix chip. Besides, there is a Parker GPU (GP106) from Nvidia connected to APE. Software image running on APE and APE-B are basically the same, while LB has its own firmware.”
“(By the way, we noticed a camera called “selfie” here, but this camera does not exist on the Tesla Model S.)” [DS: Driver monitoring system? On what model? Supposedly they are using a Model S 75 for all of this research.]
“Those post processors are responsible for several jobs including tracking cars, objects and lanes, making maps of surrounding environments, and determining rainfall amount. To our surprise, most of those jobs are finished within only one perception neural network.”
“Tesla uses a large class for managing those functions(about “large”: the struct itself is nearly 900MB in v17.26.76, and over 400MB in v2018.6.1, not including chunks it allocates on the heap). Parsing each member out is not an easy job, especially for a stripped binary, filled with large class and Boost types. Therefore in this article, we won’t introduce a detailed member list of each class, and we also do not promise that our reverse engineering result here is representing the original design of Tesla.”
“Finally, we figured out an effective solution: dynamically inject malicious code into cantx service and hook the “DasSteeringControlMessageEmitter::finalize_message()” function of the cantx service to reuse the DSCM’s timestamp and counter to manipulate the DSCM with any value of steering angle.”
“rather than using a simple, single sensor to detect rain or moisture, Tesla decided to use its second-generation Autopilot suite of cameras and artificial intelligence network to determine whether & when the wipers should be turned on.”
“We found that in order to optimize the efficiency of the neural network, Tesla converts the 32-bit floating point operations to the 8-bit integer calculations, and a part of the layers are private implementation [DS: emphasis mine], which were all compiled in the “.cubin” file. Therefore the entire neural network is regarded as a black box to us.”
“The controller itself is kind of complex. It will receive tracking info, locate the car’s position in its own HD-Map, and provide control instructions according to surrounding situations. Most of the code in controller is not related to computer vision and only strategy-based choices.”
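One nugget worth unpacking is the float-to-int8 conversion mentioned above. That is standard post-training quantization; here is a minimal sketch of the generic scale-and-round idea, using a single per-tensor scale factor, which is almost certainly simpler than whatever Tesla’s private implementation does.

```python
import numpy as np

def quantize_int8(weights):
    """Minimal affine quantization sketch: map float32 weights onto the
    int8 range [-127, 127] with one per-tensor scale factor."""
    scale = max(np.abs(weights).max() / 127.0, 1e-12)
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
print(np.abs(w - dequantize(q, scale)).max())  # small quantization error
```

The payoff is that 8-bit integer math is far cheaper than 32-bit floating point on embedded hardware, at the cost of a small, bounded loss of precision.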
If this is all true, then the team reverse-engineered Tesla’s entire software stack on the way to implementing an adversarial neural network attack. The reverse engineering strikes me as the amazing part.
As a Virginian, it’s super-exciting for me to learn that Daimler Trucks has purchased Torc Robotics (technically, Daimler purchased a controlling interest, which is a distinction that has not been fully explained).
Torc is based out of Blacksburg, Virginia, which is a tiny town that exists only as the site of Virginia Tech. As you might expect, Torc is a Virginia Tech spin-out, dating all the way back to Tech’s overlooked third-place finish in the 2007 DARPA Urban Challenge.
I’m not entirely sure how Torc has survived from 2007 until now. Coming from Virginia, I expect the answer is “federal contracts”, or more specifically, “military contracts”.
But somehow Torc managed to keep the lights on for over a decade until the self-driving car boom of 2017–present. And now they are an important part of the autonomous vehicle strategy for the largest truck manufacturer in the world.
This is a free, short synopsis of what robotics is, what jobs are available, what skills are necessary to get those jobs, how much those jobs pay, and what companies are hiring.
Cruise, already staffed at about 1,000 people, is looking to double in size, primarily by hiring engineers.
Cruise probably has driven more miles autonomously than any company except Waymo — maybe into the mid-single-digit millions of miles. Waymo has somewhere between 15 and 20 million miles.
Cruise reportedly has ~1,000 staff and is looking to double to 2,000. Similarly, Waymo has ~1,000 employees.
Cruise has been hoovering up billions of dollars in investment from companies like SoftBank. Part of the SoftBank playbook involves growing so big, so fast, that nobody wants to challenge you.
The autonomous vehicle industry is constrained by a number of factors besides just cash: hardware, safety, engineers. But cash solves a lot of problems, so hold onto your seat.
Meya used the scholarship to build a portfolio that landed her a software engineering role at Workday. Ana applied her computer vision skills to a project she’s developing for a Fulbright Scholarship. And Hirza used her new skills to transition from a test engineer role to a software development engineer role.
Udacity student stories are great, and these are especially moving. Check it out.
I was impressed that Chuck Price from TuSimple mentioned they have 50 trucks on the road and are already hauling real freight between Arizona and Texas. Sounds like California is coming soon.
Tomorrow (Sunday), I will be speaking on the AI-AI-Oh! panel, about training data for machine learning. You should come!
We are going head-to-head for audience size with the CTO of Walmart, who is presenting about shopping in the conference room next door, and I want to win.
This is my first time at SXSW and goodness is it an overwhelming event. There must be thousands of events over 10 days, and they’re always adding new topics. Somehow I missed that Malcolm Gladwell is interviewing Chris Urmson right now!
I present tomorrow, and I’ll be here for a few more days after that, so let me know if you’d like to say hello!
The weird thing is that so many teams evaluate this landscape and decide to build their own solutions. Waymo’s Carcraft is the most famous, but we made the same decision at Udacity a few years ago. We evaluated the simulation solutions on the market in the fall of 2016, found that none of them quite met our needs, and decided to build our own simulators in Unity.
That struck me as pretty interesting, too. We’ve seen lots of partnerships in the traditional automotive industry, as car makers position themselves to compete with tech and ridesharing companies. Announcements from the tech and ridesharing space, however, tend to feel more like supplier relationships than partnerships. This blog post has the feel of an Uber-Voyage-Applied Intuition partnership.