Level 3: Mercedes-Benz EQS Flagship Sedan

Mercedes-Benz recently launched an online video series called “Meet Mercedes Digital.” The first episode featured CEO Ola Kallenius, who briefly teased the launch of the Mercedes-Benz EQS sedan in the second half of 2020.

“This is a special year for us. It’s the year where we launch our flagship car, the S-Class. That only comes around every so often…It’s happening in the second half of the year and we’re quite excited about it.”
Ola Kallenius

The EQS is a futuristic luxury vehicle that should be a big shot in the arm for Daimler, the parent company of Mercedes-Benz.

Daimler could use the boost, too. Like most automotive companies, it has been hit hard by COVID-19, with its stock price down nearly 50% this year.

The EQS will be all-electric and all-wheel drive, with a top speed of “> 200 km/h” (about 125 mph).

Most exciting to me, the vehicle will feature Level 3 autonomy. Mercedes doesn’t dance around this term, either. Right in the middle of the vehicle overview, they state:

“The Vision EQS show car supports the driver with highly-automated driving at Level 3, e.g. on longer motorway journeys. Thanks to the modular sensor systems, the level of autonomy can be extended up to fully-automated driving in the future.”

Well, maybe they dance around it a little by writing about the “Vision EQS show car”, instead of the 2021 production EQS. But that is a bold and refreshing statement.

Given Audi’s recent step back from Level 3 technology due to liability concerns, it will be interesting to see whether Level 3 will be available at launch this fall.

I’m excited to get behind the wheel and take my hands off.

NVIDIA DRIVE Labs

DRIVE Labs is a really nice series of lessons about NVIDIA’s deep learning approach to autonomous vehicle development. They have about twenty short videos, each accompanied by a longer blog post and dedicated to a specific aspect of self-driving.

The videos are hosted by Neda Cvijetic, NVIDIA’s Sr. Manager of Autonomous Vehicles.

I particularly like this video on path prediction, which is an area of autonomous technology that really fascinates me.

NVIDIA is most famous for producing graphics processing units, which are useful for both video games and deep learning. As such, NVIDIA really specializes in applying neural networks to autonomous vehicle challenges.

One of the best developments around self-driving cars in the last few years is how open companies have become in sharing their technology, or at least the result of what their software can do. It’s a lot of fun to watch.

Test In The City Or In The Suburbs?

On Forbes.com today, I wrote about the trade-offs between testing autonomous vehicles in urban versus suburban environments.

Chinese startup WeRide recently shared that, by its measurements, testing in Guangzhou, China, is thirty times more efficient than testing in Silicon Valley.

“The comparison between Guangzhou and Silicon Valley is pertinent to other self-driving operations, which have to consider where to test. Many self-driving car companies, including Waymo, have focused their operations on relatively favorable geofenced locations, such as Phoenix, Las Vegas, and Silicon Valley. In these areas, a combination of sunny weather, wide streets, and good infrastructure helps the programs progress.”

Lots more in the full post.

John Deere Embraces Technology

After listening to Nancy Post, Director of Deere’s Intelligent Solutions Group, on The Autonocast, I pored over yesterday’s discussion of connectivity and precision agriculture on Deere’s Q2 2020 earnings call.

I pulled together what I learned in a piece on Forbes.com, “Connectivity Shines For John Deere.”

“Whereas Deere’s Q1 2020 earnings call highlighted the value of precision agriculture, this quarter the emphasis shifted subtly to connectivity. Precision agriculture improves crop yields, but connectivity allows both farmers and Deere to manage operations remotely, without having to travel or risk COVID-19 exposure.”

Lots more in the post, of course.

Simulators: They Just Get Better

In 2016, when I was starting to build Udacity’s Self-Driving Car Engineer Nanodegree Program, it was so hard to find a good vehicle simulator to use. And the simulators that did exist had really bad graphics. They were like 1980s video games.

We wound up programming our own simulators with the Unity gaming engine, just because we didn’t have any other options.

Fast-forward to 2020 and there are so many amazing, photo-realistic simulators on the market.

This Cruise video shows their simulator. I started watching while only half paying attention; it wasn’t until halfway through the video that I realized I was watching a simulator rather than real footage.

Amazing.

Form Factors

Huawei and Neolix

Most autonomous vehicles are being developed by adding automation to pre-existing platforms.

That’s a bit like the original horseless carriages, which were just carriages retrofitted with engines.

One class of vehicle, however, seems to be developing its own form factor: street-legal delivery vehicles.

Compare the Huawei-Neolix design to Nuro.

A few things pop out:

The vehicles both look like they could drive equally well forward or backward, although Nuro’s vehicle has a clearly defined rear bumper.

Neither vehicle looks like it could drive side-to-side. The steering is nonholonomic, as the mechanical engineers would say; a toy sketch below shows what that constraint means.

Both vehicles have front and rear doors.

Both vehicles appear to have their compute and drivetrain stowed underneath the cargo compartments.
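
To see what that nonholonomic constraint means in practice, here is a toy kinematic bicycle model in Python. The function name, wheelbase, and numbers are all invented for illustration; this is the generic textbook model, not either company’s actual design.

    import math

    def bicycle_step(x, y, theta, v, steer, wheelbase=1.5, dt=0.1):
        # Kinematic bicycle model. The nonholonomic constraint shows up
        # here: lateral position changes only through forward speed (v)
        # and heading (theta). No control input moves the vehicle sideways.
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += (v / wheelbase) * math.tan(steer) * dt
        return x, y, theta

    # Driving forward while steering curves the path; with v = 0,
    # the pose never changes, no matter how hard we steer.
    state = (0.0, 0.0, 0.0)
    for _ in range(10):
        state = bicycle_step(*state, v=2.0, steer=0.3)
    print(state)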

I wonder how close this look is to what we will see in the future.

Keeping Up

It can be hard keeping up with all the different companies working on autonomous vehicles!

I recently came across two lists of autonomous vehicle companies that I found helpful: “The State of the Self-Driving Car Race 2020” (Bloomberg) and “Factbox: Investors Pour Billions Into Automated Delivery Startups” (New York Times).

The Bloomberg article summarizes the larger, better-funded efforts, whereas the Times covers fundraising by smaller startups.

Between the two of them, how many of these companies are you keeping up with?

Teleoperations From Home

Kirsten Korosec has a story out in TechCrunch about the partnership between Postmates and Phantom Auto to teleoperate Serve, the small autonomous delivery vehicle that Postmates has launched.

Postmates’ teleoperations staff is now working from home, as are so many office workers during the COVID-19 pandemic. The company has provisioned its teleoperators with the equipment they need to operate vehicles remotely from home.

Postmates says that by moving this job to a work-from-home setup, it’s opened the role to many more possible operators.

The interesting question, for me, is whether Postmates and Phantom Auto can make this setup economical enough for massive scale.

One of the huge advantages that Tesla is reaping is the ability to use its vehicle owners as free data labelers.

Teleoperators can be free data labelers, as well. If Postmates and Phantom can make the teleoperators economical at scale, that would be a huge data advantage.

Graph Neural Networks

A Waymo blog post caught my eye recently, “VectorNet: Predicting behavior to help the Waymo Driver make better decisions.”

The blog post describes how Waymo uses deep learning to tackle the challenging problem of predicting the future. Specifically, Waymo vehicles need to predict what everyone else on the road is going to do.

As Mercedes-Benz engineers teach in Udacity’s Self-Driving Car Engineer Nanodegree Program, approaches to this problem tend to be either model-based or data-driven.

A model-based approach relies on our knowledge (“model”) of how actors behave. A car turning left through an intersection is likely to continue turning left, rather than come to a complete stop, reverse, or switch to a right turn.
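
As a concrete illustration of a model-based predictor (my own sketch, not anything from Mercedes’ or Waymo’s stack), here is a minimal rollout of the constant turn rate and velocity (CTRV) motion model in Python. The function name and numbers are invented for this example.

    import math

    def predict_ctrv(x, y, theta, v, turn_rate, horizon=3.0, dt=0.5):
        # Roll the constant turn rate and velocity model forward. This
        # encodes the assumption that a car turning left keeps turning
        # left at roughly the same rate over the prediction horizon.
        trajectory = []
        t = 0.0
        while t < horizon:
            x += v * math.cos(theta) * dt
            y += v * math.sin(theta) * dt
            theta += turn_rate * dt
            t += dt
            trajectory.append((x, y, theta))
        return trajectory

    # A car mid-left-turn: heading north-east, turning left at 0.3 rad/s.
    for x, y, theta in predict_ctrv(0.0, 0.0, math.pi / 4, v=5.0, turn_rate=0.3):
        print(f"x={x:.1f}  y={y:.1f}  heading={theta:.2f} rad")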

A data-driven approach uses machine learning to process data from real-world observations and apply the resulting model to new scenarios.

VectorNet is a data-driven approach that relies heavily on the semantic information from its high-definition maps. Waymo converts semantic information — turn lanes, stop lines, intersections — into vectors, and then feeds those vectors into a hierarchical graph neural network.
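
Here is my rough, simplified reading of that vectorization step as a Python sketch: each map element or agent trajectory becomes a polyline, and each polyline becomes a list of short vectors carrying start point, end point, and attribute features. The attribute codes and six-column layout are invented for illustration; the paper describes Waymo’s actual encoding.

    import numpy as np

    def polyline_to_vectors(points, attribute_id, polyline_id):
        # Turn an ordered list of (x, y) points into vectors of the form
        # [x_start, y_start, x_end, y_end, attribute, polyline_id].
        # Vectors from one polyline form one subgraph; the polyline ids
        # let a higher-level graph model interactions between polylines.
        pts = np.asarray(points, dtype=np.float64)
        starts, ends = pts[:-1], pts[1:]
        n = len(starts)
        attrs = np.full((n, 1), attribute_id, dtype=np.float64)
        ids = np.full((n, 1), polyline_id, dtype=np.float64)
        return np.hstack([starts, ends, attrs, ids])

    # A stop line (attribute 0) and a short agent trajectory (attribute 1).
    stop_line = polyline_to_vectors([(0, 0), (4, 0)], 0, 0)
    trajectory = polyline_to_vectors([(2, -6), (2, -4), (2, -2)], 1, 1)
    print(np.vstack([stop_line, trajectory]))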

I’m a bit out of touch with the state-of-the-art in deep learning, so I followed a link from Waymo down a rabbit hole. First I read “An Illustrated Guide to Graph Neural Networks,” by a Singaporean undergrad named Rishabh Anand.

That article led me to an hour-long lecture on GNNs by Islem Rekik at Istanbul Technical University.

It was a longer rabbit hole than I anticipated, but this talk was just right for me. It opens with a quick fifteen-minute review of CNNs, followed by a fifteen-minute review of graph theory. About thirty minutes in, she does a really nice job covering the fundamentals of graph neural networks and how they allow us to feed structured data from a graph into a neural network.
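
To make that concrete, here is a minimal, NumPy-only message-passing layer, the basic operation those lessons build up to. The mean aggregation, ReLU, and shapes are simple choices of mine for this sketch, not taken from the lecture; real GNN layers vary quite a bit.

    import numpy as np

    def gnn_layer(node_features, adjacency, weights):
        # One round of message passing: each node averages the features
        # of itself and its neighbors (adjacency includes self-loops),
        # then applies a weight matrix shared by all nodes, plus a ReLU.
        # Stacking layers spreads information across multiple hops.
        degree = adjacency.sum(axis=1, keepdims=True)
        aggregated = (adjacency @ node_features) / degree
        return np.maximum(aggregated @ weights, 0.0)

    rng = np.random.default_rng(0)
    features = rng.normal(size=(4, 3))   # 4 nodes, 3 features each
    adjacency = np.array([[1, 1, 0, 0],  # a simple path graph
                          [1, 1, 1, 0],
                          [0, 1, 1, 1],
                          [0, 0, 1, 1]], dtype=float)
    print(gnn_layer(features, adjacency, rng.normal(size=(3, 2))))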

Now that I have a bit of an understanding of GNNs, I’ll need to pop all the way back up to the Waymo blog post and follow it to their academic paper, “VectorNet: Encoding HD Maps and Agent Dynamics from Vectorized Representation.”

The Waymo team is scheduled to present that paper at CVPR 2020 next month.

Bryan Salesky, CEO

Wired has a recent and very flattering profile of Bryan Salesky, founder and CEO of Argo, Ford’s self-driving car venture.

The piece has more information than I’ve read elsewhere about the early history of the Google Self-Driving Car Project, now known as Waymo. There’s also a good description of the friendship and rivalry between Salesky and Aurora CEO Chris Urmson.

Recommended.