Tencent, the Chinese technology giant behind WeChat, has announced plans for a “Net City” on a two-square-kilometer portion of its campus in Shenzhen.
The Tencent announcement notes that “A ‘green corridor’ for buses, bicycles and autonomous vehicles will be the backbone of the district, running down its length.” They’ve hired a US architecture firm to design it all.
The zone will “accommodate” 80,000 people, although it’s not clear whether those will be residents or Tencent employees who actually live off-site.
The report is pretty light on details, and even notes that there have been a few other announcements like this in Japan and North America, from Google, no less. Google’s Sidewalk Labs just announced they will not proceed with their “smart city” in a Toronto neighborhood. The culprit was an inability to overcome a combination of urban regulatory burden, NIMBYism, and data privacy concerns.
To me, the North American contrast is the most interesting aspect to the Tencent project. This could either be read as a Chinese tech giant simply running a few years behind an American tech giant, only to give up in a few years itself. Or it could prove the point that China is capable of major infrastructure projects that just aren’t possible in North America anymore.
Every month Robotics Business Review compiles a list of private financing deals for robotics companies.
In April, as the world shut down for COVID-19, funding basically dried up.
“Robotics Business Review tracked about 26 transactions worth a total of more than $600 million last month, compared with 29 deals worth $2.7 billion in March 2020 and 30 transactions worth $6.5 billion in April 2019.”
Many of the April transactions that did occur were in China.
May, however, showed a meaningful uptick. May 2020 numbers were comparable to where they were a year ago, and at least in the same order of magnitude as the March 2020 figures.
The May figures were led by huge funding rounds for Waymo and Didi. The rest of the May transactions totaled only $250 million.
For comparison, the May 2019 figures were even more concentrated, with the bulk of the month’s investments driven by a huge fundraising round for Cruise Automation.
I’m not quite ready to declare a return to normalcy yet, but it’s a big step in the right direction.
Elon Musk famously tweeted that Tesla vehicles will be appreciating assets, which would be a first for automobiles if it comes to pass. The logic stems from another controversial Musk claim: that Teslas will eventually become robotaxis, generating passive income for their owners.
Recently, Electrek and other outlets wrote that Tesla has taken a baby step toward the robotaxi vision. Nothing self-driving; much more pedestrian (excuse the pun) than that.
Tesla has created an “Add Drive” feature in its app.
Tesla does not yet appear to be advertising this feature, and I don’t own a Tesla, so I can’t confirm for myself. But apparently Tesla owners can now give access to their car to anybody, just by adding an email address. No key necessary, just the Tesla app and a confirmed email address.
Even if the robotaxis are a long time coming, you could imagine this might make it a lot easier for Tesla owners to rent their vehicles to other drivers through sites like Turo.
Today Udacity launched a free, four-week Intro to Cloud Computing course that I have been working on for the last few months with Ami Malhoof. It’s a great course, and if you are new to cloud computing, you should take it!
Ami had a really ambitious vision for this course, and I think it turned out really well. Over the course of five lessons, students:
Learn the basics of cloud computing
Boot a virtual machine locally
Create Identity and Access Management (IAM) roles and policies
Upload files to Amazon Web Services (AWS) Simple Storage Service (S3)
Launch an AWS Elastic Compute Cloud (EC2) virtual server instance
Construct an AWS Lambda serverless function
Configure AWS API Gateway, Lambda, and S3 together to host a website
That’s a lot to accomplish in just a few weeks! And it’s free!
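To give a flavor of the serverless lesson, here is roughly what a minimal Lambda function behind API Gateway looks like. This is a hypothetical sketch of my own, not code from the course:

```python
import json


def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration.

    API Gateway passes HTTP details (including query parameters) in the
    event dict; Lambda returns a dict that API Gateway translates into
    an HTTP response.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Wire this handler to an API Gateway endpoint and a `GET /?name=cloud` request comes back as a JSON greeting; the static assets for the lesson’s website would live in S3.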
Mobileye, which now belongs to Intel, recently published a video of a 40-minute autonomous drive through urban Jerusalem. Mobileye is based in Jerusalem, and its streets may be among the most challenging that self-driving cars currently handle.
The recording clocks in at about 26 minutes, shorter than the actual drive, because parts of the drive are played back at 2x speed. I annotated the video with my thoughts.
[1:05] Mobileye’s sensing display is pretty sparse, but does a nice job focusing on the important items.
[1:05] I am a little surprised that Mobileye’s in-cabin display set-up is clearly aftermarket. I guess that shouldn’t surprise me, but it’s a reminder that Mobileye is a supplier, not an OEM.
[1:50] Merges from a standing start are tough. Nice work.
[3:10] Overtaking the parked truck is very safe and deliberate, but I could imagine eventually that will need to happen faster.
[4:25] The pedestrian is detected well before the crosswalk. Awesome.
[5:15] “We use environmental cues to determine that these cars are parked, and not just stuck in traffic.” I wonder what those cues are? The map?
[6:20] This drive is camera-only, but Mobileye says they are building an entirely separate, redundant sensing stack using radar plus lidar. They want each subsystem to achieve full self-driving independently. They call this “true redundancy.” Interesting choice to build two separate stacks divided by modality, as opposed to equipping the vehicle with two independent sensor suites that each include all modalities.
[8:15] Mobileye’s Road Experience Management technology pulls anonymized sensor data from BMW, VW, Nissan, and SAIC vehicles. Mobileye fuses this data into its mapping system, to keep its maps up to date around the world. This is part of the dream of ADAS — that you can get much more data from a production fleet of human drivers than a fleet of test vehicles. “We are basically leveraging Mobileye’s strong ADAS position to build and maintain a near-real-time HD map.” I wonder exactly what data they are pulling, and whether the manufacturers will agree to this in the long run.
[10:17] This route includes some very narrow Jerusalem streets. This one, at least, is totally straight. I’m not sure this “proves” AVs can operate in places like India, but this is certainly a more challenging environment than, say, Phoenix.
[11:10] The unprotected left turn felt a little tense, but basically okay.
[12:20] Nice job detecting a pedestrian dragging a forklift. This scenario is reminiscent of the situation that led to Uber ATG’s fatal collision with Elaine Herzberg in Arizona. Mobileye seems to have no problem with this here.
[13:15] Really interesting and successful “negotiation” to merge around a stopped vehicle.
[15:00] The human driver takes over to pull over in a bus stop zone, so that the drone operator (riding shotgun) can land the drone and change the battery. I am surprised the human driver had to take over here. Compared to a lot of autonomous maneuvers in this video, “pull over” seems pretty basic (and necessary).
[15:00] This stop highlights that the human driver does not seem to provide any input to the AV during the whole drive. The route appears pre-programmed from start to finish. I wonder how strong Mobileye’s in-vehicle UX is.
[15:30] This shot reminds me how impressive drone operators are. You take it for granted, but this drone operator is sitting in the passenger seat of a moving vehicle. He’s keeping a drone, which he can’t see, in place directly above the car, hundreds of feet off the ground, for forty minutes!
[17:15] Roundabouts are tough for Americans. I’d pay a self-driving car to handle roundabouts for me.
[17:55] Even self-driving cars want to change to the next lane if it looks faster!
[19:05] Super-narrow street with lots of cars. I’d be nervous driving here. Impressive, especially for a camera-only system! Localization typically relies on lidar. Mobileye can clearly localize effectively with just cameras.
[19:35] A driver exits a parked car to wave the AV around. The AV doesn’t seem to “understand” the wave, but once the driver gets out of the way, it figures out to pass the parked car.
[21:27] That was a challenging unprotected left turn. I’m impressed again.
[22:45] Interesting that the AV does not yield to the moped (I think it’s a moped) in the crosswalk. The system seems to recognize the vehicle as a moped, but the moped is trying to use the crosswalk like a pedestrian. Tricky situation.
[24:20] The sensing UI seems to recognize a leading car quite far ahead — so far ahead that it doesn’t appear in the drone shot. I wonder what the system’s range is.
[25:25] Once again, the safety driver takes control to pull over and end the ride. I’m puzzled why that wasn’t pre-programmed, like the rest of the ride.
Overall, this was a lot of fun to watch and a really impressive performance by Mobileye. Jerusalem seems like a tough place to drive!
“Interestingly, the drop in sales only resulted in a temporary inventory backlog. While Manheim estimates that retail used car inventory in April was 161% higher than usual, May used car inventory has dropped 25% below average. The supply reduction may be due to fewer buyers trading in older vehicles, as new vehicle sales followed a similar trajectory from April to May.”
It’s been a tough 18 months for self-driving cars. The enthusiasm (and cash) that poured into the industry from 2016 to 2018 has dampened as everyone realizes Level 4 driverless robotaxis are not immediately around the corner.
But companies are still making progress and hiring.
VentureBeat reports that Aurora now boasts 500 employees (including interns!). CEO Chris Urmson says, “With the industry shakeup right now, there’s a lot of new talent on the market, an opportunity we intend to take full advantage of.”
Cruise has laid off employees recently as well, although supposedly this was in the interest of focusing on engineering, which would align with the Zoox hires.
Reuters reports that Cruise founder and CTO Kyle Vogt sent quite an email to Zoox engineers:
“Cruise is willing to recognize the full value of the rewards you’ve earned at Zoox — something that is very unlikely to occur via an acquisition in this environment.”
On my way down one of those infamous web-browsing rabbit holes, I stumbled upon an article from the Fall 1988 issue of MIT’s Sloan Management Review, “Triumph of the Lean Production System,” by one John F. Krafcik.
“Really?” I thought to myself. “That John Krafcik?” How many John Krafciks can there be in the automotive industry?
Indeed, the article appears to be from the current CEO of Waymo, back when he was in his twenties, a graduate student at MIT.
Krafcik’s first job out of college, before he wrote this article, was at GM’s NUMMI plant in Silicon Valley. The article kind of reads like Krafcik maybe doesn’t think so much of GM — it’s the only company he criticizes by name. (Keep in mind this is 1988, so no aspersions on present leadership.)
Krafcik seems to revere Henry Ford’s production system, and thinks that Japanese lean production is the natural evolution of that system.
Krafcik found that the location of a plant didn’t matter as much as the location of the company’s headquarters. Japanese plants in America were more efficient than American plants in America, and almost as efficient as Japanese plants in Japan.
Krafcik writes that European companies have a strong Not Invented Here bias that has led them to reject lean production, to their detriment.
Product design has a big impact on plant efficiency.
Plant workers should be empowered to improve processes, not just blindly follow instructions.
There’s not really a tradeoff between quality and productivity. High-quality plants can dispose of most inspection and rework processes, which ultimately makes them more productive.
Technology and robots don’t really seem to help make plants more effective.
That last point seems particularly interesting and ironic, given Krafcik’s current role.
NVIDIA’s Xavier system on a chip (SoC) for self-driving cars recently passed a TÜV assessment for ISO 26262 functional safety. Reading NVIDIA’s blog post on this achievement, I was struck by just how many specialized processors Xavier has, many of which were new to me.
Also, did you know there exists a site called Wikichip?
GPU: Of course an NVIDIA SoC will have a GPU, in this case a Volta GPU. The Volta GPU on the Xavier is optimized for inference. That means the neural network is probably going to be trained somewhere else and then loaded onto this platform when it’s ready for production deployment.
Wikichip lists this GPU at 22.6 tera-operations per second (TOPS). For comparison, Tesla’s purpose-built self-driving chip boasts 36 TOPS. I confess I don’t know enough about how close to the redline these chips run to know whether 23 TOPS vs. 36 TOPS is basically the same thing or wildly different.
CPU: Although NVIDIA is a GPU company, the Xavier has a CPU. The CPU has 8 Carmel cores. I assume it’s fast.
VPU: Xavier includes a vision processing unit (VPU), which makes sense for an SoC designed for lots of cameras.
NVIDIA sometimes calls this a “Stereo/Optical Flow accelerator.” Optical flow is a computer vision technique for estimating pixel motion (and from it, object velocity) across sequential frames, while stereo matching infers depth from paired cameras. I assume more generally the goal is to accelerate vision algorithms on sequential frames of video.
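To make that concrete, here is a toy version of optical flow via block matching in pure Python. This is an illustrative sketch only; the frames, block size, and search window are made up, and none of it reflects how the Xavier’s accelerator actually works:

```python
# Toy block-matching optical flow on two tiny grayscale frames.
# For each block in frame1, find the best-matching block in frame2
# within a small search window; the offset is the motion estimate.

def block_flow(frame1, frame2, block=2, search=2):
    """Return {(y, x): (dy, dx)} motion estimates for each block."""
    h, w = len(frame1), len(frame1[0])
    flows = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            best = (float("inf"), (0, 0))
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny <= h - block and 0 <= nx <= w - block):
                        continue
                    # Sum of absolute differences between the two blocks.
                    sad = sum(
                        abs(frame1[y + i][x + j] - frame2[ny + i][nx + j])
                        for i in range(block) for j in range(block)
                    )
                    if sad < best[0]:
                        best = (sad, (dy, dx))
            flows[(y, x)] = best[1]
    return flows

# A bright 2x2 patch moves one pixel to the right between frames.
f1 = [[0] * 6 for _ in range(6)]
f2 = [[0] * 6 for _ in range(6)]
for i in (2, 3):
    for j in (2, 3):
        f1[i][j] = 255
        f2[i][j + 1] = 255
```

The estimated flow for the bright block comes out as (0, 1): no vertical motion, one pixel to the right. Real systems do this densely, at high resolution, with far smarter matching, which is presumably why a dedicated accelerator earns its place on the die.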
ISP: I had not heard of image signal processors before. Like a VPU, an ISP is designed to accelerate the performance of algorithms on camera data. ISPs seem to focus on individual high-resolution frames, probably for classification tasks on things like signs.
PVA: Vision is clearly a strength of the Xavier. The programmable vision accelerator is an NVIDIA proprietary technology. The best documentation I could find is a patent that seems to focus on collapsing multiple loops into a single loop in order to accelerate vision calculations.
The “programmable” qualifier presumably means that firmware engineers can customize this chip to their specific needs.
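I can only guess at the hardware details, but in software terms, collapsing nested loops into a single flat loop looks something like this. A purely illustrative sketch, with no relation to NVIDIA’s actual design:

```python
# "Loop collapsing" fuses nested loops into one flat loop, which
# hardware can drive with a single iteration counter and pipeline
# without the overhead of re-entering inner loops.

def nested_sum(img):
    """Sum pixels with conventional nested row/column loops."""
    total = 0
    for y in range(len(img)):
        for x in range(len(img[0])):
            total += img[y][x]
    return total


def collapsed_sum(img):
    """Same computation with the two loops collapsed into one: a single
    flat index is decomposed back into (row, col) with divmod."""
    h, w = len(img), len(img[0])
    total = 0
    for i in range(h * w):
        y, x = divmod(i, w)
        total += img[y][x]
    return total
```

Both functions compute the same result; the collapsed form simply trades two loop counters for one, which matters much more in silicon than it does in Python.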
DLA: The deep learning accelerator is an open-source architecture NVIDIA has released for building neural network inference accelerators. It’s really cool that NVIDIA has open-sourced this technology.
As with the PVA, the DLA appears to be customizable in Verilog, so that customers can adapt the design to meet their needs.
Most likely a goal of the DLA is to provide acceleration of lidar and other data that may not be optimized for the other vision-optimized chips on the Xavier.
That is a lot of processing power and specialization on one SoC!
Lots of self-driving companies are back to testing, in limited capacity, in the US. Right now, they’re typically testing by delivering goods — not people — to vulnerable communities.
As an aside, Jewel Li from AutoX mentioned on a recent Autonocast episode that Chinese self-driving companies are totally back to normal, testing at full capacity, and working in the office.
But here in the US, lockdowns are still mostly in effect and self-driving companies are trying to both do the right thing and get back out on the road by becoming delivery services.
I imagine this plays especially well for self-driving companies that were founded from the start as delivery services, not robotaxis. First and foremost in that list is Nuro, which announced a partnership with CVS to deliver prescriptions.
Interestingly, “As with all our pilots, we will begin service with our autonomous Prius fleet to make deliveries, before introducing deliveries with R2, our custom-built delivery bot.”
I wonder what Nuro’s stages are, moving from a Prius with (presumably) a safety operator, to a driverless R2 (possibly with a safety operator trailing in another vehicle?), to a driverless R2 with no Nuro staff in the vicinity. I did a quick scan of Nuro’s blog and didn’t see anything, but I haven’t followed them closely on this particular issue.
On the other end of the spectrum, robotaxis face the challenge of providing a safe vehicular environment for many, many passengers to share (albeit at different times).
Early in the COVID crisis my old boss, Oliver Cameron, who is now co-founder and CEO of Voyage, tweeted:
Any startups building in-vehicle disinfectant technology?
Think of a virus-killing mist that’s released after a customer exits a (driverless) ride-hailing vehicle 👀
Oliver is so good at Twitter. Things that normal people like me would spend days and even real dollars on, Oliver puts on Twitter and gets answers.
You can read in the Twitter thread that he got a lot of suggestions. We’ll see if any of them pan out. The immediate upshot seemed to be captured by this GIF he subsequently posted.
In the medium-term, a big question for robotaxi companies will be whether this becomes mandatory, or whether COVID diminishes as a real public health concern, leaving the world the way it was in mid-2019.
If COVID doesn’t go away soon, a lot of robotaxi companies might be tempted to become delivery companies.