Earlier this week, John Deere acquired Bear Flag Robotics for $250 million. This is a solid exit for the agricultural autonomy startup founded in 2017.
A few points struck me independently about this one:
Agricultural autonomy is about labor cost savings, but it’s also about precision agriculture, which allows farmers to generate more revenue from the same land. More AV companies need to find business models that generate marginal revenue, in addition to (or in lieu of) reducing costs.
Bear Flag’s model was to retrofit existing agriculture implements, rather than build new ones from the ground up. This seems like a more viable business model for high-value industrial machines, rather than consumer cars.
Deere acquired Blue River Technology a few years ago to do something similar. Blue River is not mentioned in the Bear Flag announcement, despite the fact that both acquired companies are Silicon Valley startups. I wonder if they will be operated separately.
This is one of those insights that seems so obvious when it’s pointed out that I’m embarrassed I didn’t think of it earlier. Reuters has a good overview of the public policy, and TechCrunch has a good overview of the technology.
FastCompany recently published a feature on Joby Aviation’s decibel claims, and along the way the article provides an extensive overview of the key eVTOL startups.
The boom in lithium-ion battery technology (for electronics, power tools, and cars) provided an alternative to the large, noisy piston engines in small planes or screeching turbines in helicopters. Those batteries drive electric motors that are essentially silent.
But propellers slicing through the air remain the biggest source of noise. And electric power enables a bunch of engineering tricks to quiet them down. Nerd alert: Appreciating those tricks will require understanding a few engineering concepts, including disk loading, tip speed, and torque.
The article highlights a wide range of aspects of low-volume electric flight: batteries, distributed propulsion, disk loading, offset propeller speeds, shorter blades, weight, elevation, distance, and high-torque custom electric motors.
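To make disk loading and tip speed concrete, here's a rough sketch with illustrative numbers of my own (not from the article):

```python
import math

def disk_loading(weight_lb: float, rotor_radius_ft: float, n_rotors: int = 1) -> float:
    """Aircraft weight divided by total rotor disk area (lb/ft^2).
    Lower disk loading means each rotor pushes air more gently, and more quietly."""
    total_area = n_rotors * math.pi * rotor_radius_ft ** 2
    return weight_lb / total_area

def tip_speed(rpm: float, rotor_radius_ft: float) -> float:
    """Blade tip speed in ft/s. Rotor noise climbs steeply as the tips
    approach the speed of sound (~1,125 ft/s at sea level)."""
    return 2 * math.pi * rotor_radius_ft * rpm / 60

# Hypothetical helicopter: one 20-ft-radius main rotor at 300 RPM
print(round(tip_speed(300, 20)))   # 628 ft/s

# Hypothetical eVTOL: small 5-ft rotors spun at 600 RPM by high-torque
# electric motors: half the tip speed of the helicopter blade tips
print(round(tip_speed(600, 5)))    # 314 ft/s
```

This is where distributed propulsion pays off: several small rotors can replace one big one while keeping tip speeds, and therefore noise, much lower.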
The article quotes not only Joby, but also Volocopter, Whisper, Archer, Lilium, and Kitty Hawk.
That final startup was founded by my old boss, Sebastian Thrun, who’s quoted in this article comparing the noise generated by his Heaviside eVTOL model to “a flying hair dryer.” I love Sebastian.
Also, this:
Based on his work for Uber, Moore estimates that even a small vertiport might need a volume of 30 to 60 flights per hour to run in the black. [NB: Mark Moore spent decades at NASA, then four years at Uber Elevate, and now is the founder of Whisper.]
Which leads to:
“One of the great locations for vertiports is the top of a parking garage that’s right next to a highway, where the background noise level is 65 to 70 [decibels] at a hundred feet,” says Moore. “Because then you’re compatible with the neighborhood in terms of background noise level.”
Comma just wrapped up Comma Con, its developer and customer conference, in San Diego, no less.
The big announcement is the Comma Three, a $2199 aftermarket product that provides high-level lane keeping and cruise control to a wide variety of vehicles.
I purchased the original Comma several years ago and tried it out on a variety of rental cars. The performance impressed me then, and I’m sure it’s terrific now. Comma boasts, “thousands of customers and millions of miles.”
They also promote that the device is “Easy to install,” and then link out to a 15-step process that takes about 30 minutes and includes steps such as, “Route the RJ45 cable to the OBD-II port” and “Let the mount cure.”
Comma’s technology is so good, and I remain amazed that they have chosen to remain independent and focus on hackers, rather than go big through OEM partnerships.
Much respect for these guys. They just quietly keep shipping (and making money off of) extremely complex shit. They might be the most complex startup I know of that actually ships a usable consumer product. https://t.co/o9tXXbAzMi
I’ve been a fan of Comma since 2016. Way back then, I sat in a garage with George Hotz, Comma’s founder, and had an informal job interview that was almost more like a conversation about end-to-end deep learning for self-driving cars. They sent me a programming project which I didn’t wind up completing, but that garage conversation inspired the behavioral cloning project in the Udacity Self-Driving Car Engineer Nanodegree Program, and George spoke to Udacity students at the event we hosted to launch the program.
I wish them much success, part of which I suspect will be conditional on the type of OEM partnerships they have thus far avoided.
Mobileye recently announced that it has expanded its autonomous vehicle testing to New York City. The Intel subsidiary released “an unedited 40-minute drive” of its camera-only vehicle traveling autonomously through the Big Apple.
Previously, I annotated a similar Mobileye drive through Jerusalem, so I did that again for New York.
My main takeaway is just how much time vehicles spend stopped at traffic lights in New York City. No wonder everyone takes the subway.
[0:35] At CES 2021, Mobileye emphasized how quickly they can spin up testing in a new city. Now they are testing in Detroit, New York, Munich, Israel, Tokyo, and China. That’s an interesting mix of countries and cities.
[0:50] Mobileye’s “trinity” consists of redundancy, crowd-sourced maps, and the “RSS formal model for safety.”
[1:00] This video will feature a camera-only car, although Mobileye is building two entirely redundant systems – camera, and lidar plus radar.
[1:25] Mobileye refers to “Vidar” on their camera-only vehicles, without explaining what this is.
[3:00] The route starts at a stop light alongside Central Park. I noticed in the Jerusalem ride that the human driver pulled the vehicle into and out of traffic. Interesting they start the vehicle in the middle of the lane here.
[3:21] The vehicle is doing very accurate semantic segmentation to paint each pixel of free driving space in the camera image.
[3:53] Really nice job navigating around a double-parked UPS van, and then pedestrians using the vehicle’s lane to exit a parked car.
[4:09] The vehicle seems to get a little confused by a car pulling around a stopped mail truck. The Mobileye vehicle basically parks itself on the lane line and takes up two lanes.
[4:45] An awful lot of driving time in New York City is spent stopped at traffic lights.
[8:00] OMG so much time spent waiting at traffic lights! The case for congestion pricing makes itself.
[8:30] The side cameras detect and track pedestrians and vehicles across the median from the Mobileye vehicle pretty well. This is impressive, but also a possible waste of computational power.
[10:45] The system classifies eight types of pixels: road, road edge, elevated barrier / curb, road under car, semi-drivable area, vehicle, pedestrians, general objects. Construction barriers are classified as purple “general objects.”
[11:45] Great job navigating the tunnel under (I think) Grand Central Station. Interestingly, the in-vehicle passenger display shuts off here.
[13:55] Another car cuts in aggressively and Mobileye acts appropriately to avoid a collision.
[13:55] The images switch from pixel segmentation to paint segmentation. The different painted road markings are highlighted. Presumably this is an input into Mobileye’s driving model.
[16:09] The paint segmentation video appears choppier than the pixel segmentation video did. I wonder if they devote less computation power to this.
[16:23] The steering wheel rocks quite a bit when stopped. It’s hard to tell how stable the ride is from a passenger perspective – the video certainly seems stable.
[17:11] Now the video switches to “Vidar.” It’s a little hard to tell what this is, other than that the video blacks out everything that’s occluded. Bounding boxes appear, although those also appeared with pixel segmentation.
[17:29] The car made a right turn on red, which is an interesting policy choice in and of itself. Also, it rolled through the stop line while making the turn, which is a traffic violation.
[17:48] Narrow street with parked vehicles on both sides, but the Mobileye vehicle seems awfully tentative here.
[18:26] The dog is not captured by the bounding box.
[18:58] Nice turn through a tough intersection with lots of pedestrians in the crosswalks.
[19:19] The vehicle stops well short of the stop line, seemingly intimidated by the giant SUV in the next lane.
[20:05] The vehicle leaves a lot more space before the next vehicle stopped in traffic, compared to human drivers, but frankly it seems safer.
[21:00] Nice job navigating metal construction plates on the road.
[21:30] Tough driving, wedged between traffic on the left and parked cars on the right. Great job.
[27:05] Nice driving in a lane demarcated by traffic posts.
[27:39] Tunnel! This would freak me out as a human driver. Mobileye handles it flawlessly.
[31:09] The vehicle must be going pretty fast through the tunnel. Maybe 45mph? That’s a testament to the range of the cameras.
[32:14] We’re in Long Island City now, and everything seems just a little more open and manageable.
[34:35] They still have traffic lights on Long Island.
[35:36] Absolutely brilliant navigation around a double-parked moving truck.
[36:10] The driver of a parked car opened his door and stepped into the lane. The Mobileye vehicle didn’t even flinch.
[36:42] The Mobileye vehicle was a bit tentative getting around this double-parked vehicle, and wound up in a sticky situation. The car behind the Mobileye vehicle tried to go around at the same time as the Mobileye vehicle itself, so we wound up with two cars driving into the opposing lane.
[41:30] The vehicle is a bit tentative navigating past pedestrians as they clear a crosswalk. As a result, another pedestrian asserts himself, and the vehicle winds up completing a right turn on red.
[42:00] The route ends behind a stopped mail truck, but pretty clearly in a lane of moving traffic. Once again, it’s interesting that the vehicle doesn’t really seem to handle pullovers.
Argo will launch robotaxis on Lyft’s ridesharing network in Miami by the end of 2021, and in Austin in 2022, according to both a press release last week and an article by CEO Bryan Salesky in the company’s Ground Truth online magazine.
“Companies with the three key aspects required to launch, validate and scale an autonomous ride-hailing service in cities are directly working together: the self-driving system developed by the Argo AI team; the vehicles manufactured by one of our partners, Ford Motor Company; and the riders on Lyft’s transportation network.”
Argo CEO Bryan Salesky
The term that catches my eye in that quote is “riders.” Argo is working with Lyft not because of the ride-hailing network, or the app, but rather because Lyft is where the customers are. Aurora has recently been emphasizing a similar line of thinking with its Uber partnership.
For years, Stratechery analyst Ben Thompson has been developing Aggregation Theory, which purports to explain the most dominant companies that run two-sided networks. The “aggregators,” as Thompson calls them, increase their power by aggregating customer demand and ultimately bringing suppliers onto the network on the aggregator’s terms.
Thompson has labeled and analyzed Google, Facebook, Amazon, Netflix, Snapchat, Airbnb, and (most relevant to mobility) Uber as aggregators.
Salesky’s quote seems to validate Thompson’s theory: Argo is going to Lyft because Lyft has aggregated the customer demand.
On the one hand, that is the fastest way for Argo to tap into demand, gather data, and get to “scale,” a term Salesky emphasizes in his article. On the other hand, Thompson has emphasized again and again that the goal of aggregators is to modularize and commoditize suppliers, reducing their market power.
The reliable route to success in an aggregation-friendly market is to maintain your own customer relationships.
Under the deal announced this week, Ryder’s fleet maintenance facilities will act as terminals for TuSimple’s freight network.
…
But this is not meant to be a hub-to-hub system where its customers would come and pick up freight, according to TuSimple President and CEO Cheng Lu.
…
Lu stressed that in most cases, especially for large-scale operators like UPS, TuSimple will take the freight directly to the customer’s distribution centers.
This version has brand new courses and projects on deep learning, sensor fusion, localization, and planning, with brand new instructors. It’s a fantastic 2021 update to the original program, which dated to 2016.
I helped build the first half of this new version back when I was still at Udacity last year, and then my long-time Udacity colleague, Michael Virgo, led the effort to completion.
A particularly awesome aspect of this new version of the Nanodegree program is that we created it in conjunction with Udacity’s long-time partner, Mercedes-Benz, and with a new partner, Waymo. Several projects in the program teach students how to work with the Waymo Open Dataset, which is a fantastic opportunity for students to gain hands-on skills.
With the relaunch of @udacity's Self-Driving Car Engineer Nanodegree, students interested in autonomous driving technology can hone skills they may one day apply in the field. Learn from experts like Waymo’s Head of Research Drago Anguelov: https://t.co/hZ9nATHaUH https://t.co/1Xz5nTihVI
Last week, TechCrunch reported that Aurora intends to become a publicly-traded company, via a merger with a special purpose acquisition company (SPAC), specifically Reinvent Technology Partners Y. To my mind, the most interesting parts of this announcement related to the massive amounts of capital that AV companies need and are frequently able to raise.
Aurora’s valuation will be $13 billion, despite an absence of revenue. After closing the SPAC, Aurora will have about $2.5 billion of cash on-hand.
To figure out how far that $2.5 billion will take them, we can do some back of the envelope math. According to TechCrunch, Aurora has about 1,600 employees. Since Aurora remains in the research and development stage, most of those employees are probably engineers, and many of them are probably well-compensated machine learning and robotics engineers.
For a run-of-the-mill web software company, I might assume a “fully loaded” cost of $150,000 to $200,000 per engineer, per year (including salary, benefits, taxes, and overhead such as rent). Aurora has a bunch of equipment costs, like buying trucks and sensors and data storage, so let’s bump that fully loaded cost to $225,000 per engineer, per year.
1,600 engineers times $225,000 per engineer equals $360 million in costs per year. That’s surely not exactly correct, but it gives a sense of the order of magnitude.
That suggests that Aurora’s $2.5 billion of post-SPAC cash will last around seven years, although Aurora probably has significant expansion plans that will both increase its expenses and also generate revenue within that timeframe.
Also notable is Aurora’s current cash situation. The $2.5 billion in post-SPAC cash includes, according to TechCrunch, approximately $1 billion from the Reinvent SPAC itself, plus another $1 billion in private investment in public equity (PIPE) financing attached to the SPAC merger. That suggests Aurora’s current cash pile is about $500 million, which is roughly a year and a half of burn at the estimate above.
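The arithmetic above is simple enough to script. Everything here restates the estimates in the text; the per-employee cost in particular is my assumption, not a figure from Aurora's filings:

```python
# Back-of-envelope runway math for Aurora's post-SPAC cash position.
employees = 1_600
fully_loaded_cost = 225_000                     # per employee, per year (assumed)

annual_burn = employees * fully_loaded_cost
print(f"Annual burn: ${annual_burn / 1e6:.0f}M")                       # $360M

post_spac_cash = 2_500_000_000
print(f"Post-SPAC runway: {post_spac_cash / annual_burn:.1f} years")   # 6.9 years

# Pre-SPAC: $2.5B minus ~$1B from the SPAC and ~$1B of PIPE financing
current_cash = post_spac_cash - 1_000_000_000 - 1_000_000_000
print(f"Current runway: {current_cash / annual_burn:.1f} years")
```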
This is a company that probably needs to raise funds soon, one way or another. Looks like they’ve found a very lucrative way to do that.
Former Waze CEO Noam Bardin recently joined the NFX podcast for an hour to discuss the history of the company, from a tiny, scrappy startup in Israel, to a global service used by millions.
Although of course I’d tried Waze (my preferred navigation is Google Maps), I’d not heard of Bardin until he made pretty big waves earlier this year with “Why did I leave Google or, why did I stay so long?”
Interestingly, the NFX episode didn’t really touch on the Google acquisition. They referenced Bardin’s existing write-up and basically referred people to that if they were interested in the details.
Instead, the podcast focused on the early days and hypergrowth phases of Waze.
Waze bootstrapped their own maps, instead of licensing maps from providers. When a user in a new area downloaded Waze, they would get a blank canvas, and they would essentially draw the map themselves, by driving. Then they could log onto the website later to polish the map they’d drawn.
Waze’s version of the 1/9/90 rule was that 1% of users would build the map, 9% would report traffic, and 90% would consume.
Waze churned a lot of users for a long time because the product wasn’t good enough. People loved the promise of the product, but Waze couldn’t deliver fast enough. Figuring this out gave them confidence that if they just executed fast enough, users would come.
The biggest competitor for many years wasn’t Google Maps or Apple Maps, but rather Foursquare. However, Foursquare never broke out of the “cool kids” customer segment, whereas Waze started outside of the “cool kids” segment by default, because their users were boring suburbanites driving to work.
Global companies need to succeed in the US. This means that companies based in the US are more likely to succeed globally, but also companies in tiny countries (e.g. Israel, but also Nordic or Baltic countries). Companies in tiny countries have no home market, so they have to go global from Day 1. The toughest spot is middle-sized countries (e.g. Germany). They can grow initially in their home market, but eventually they are likely to be consumed by local (i.e. not US) product priorities, and never make the leap to become global winners.
Waze’s best information is that the maps market is now 40% Google Maps, 35% Waze, and 25% Apple Maps.