One, about 20% of US organ donations come from car accident victims. Presumably self-driving cars will reduce the number of organs available.
Two, a common place to opt in to organ donation is at the DMV, while obtaining or renewing a driver’s license. Presumably self-driving cars will reduce the number of people who get driver’s licenses, and thus reduce the number of people who opt in to organ donation.
The rest of the article is mostly about ways to improve the organ donation system in the US, irrespective of self-driving cars.
But it is an interesting case study of a second-order effect autonomous vehicles will have on our world.
Of course, this is just a proposal. Before it could ever take effect, a new presidential administration will be in place, and it might have its own views.
Peterson notes some concerns:
Are we moving to a world where bicycles need V2V and pedestrians need V2V? What does it mean for an act of mobility to require continuous government permission? (If you are not broadcasting, are you illegal? Will you be shut down in real time?)
I agree and would prefer if V2V arose as a de facto standard, instead of a de jure standard mandated by the government. This might be tougher for vehicle-to-infrastructure communication, which necessarily involves communication with government property, like traffic lights.
But if SMTP could rise as a de facto standard, the cause does not seem lost.
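To make the discussion concrete, here is a toy sketch of what “broadcasting” means in a V2V context: each vehicle periodically announces its position, heading, and speed to anyone nearby. This is strictly illustrative: real V2V systems use dedicated short-range radios (DSRC) and the standardized SAE J2735 message set rather than UDP, and every field name below is invented.

```python
# Toy sketch of a V2V-style "basic safety message" broadcast.
# Real systems use DSRC radios and SAE J2735 messages; this stand-in
# uses UDP broadcast and invented JSON fields, purely for illustration.
import json
import socket
import time

PORT = 47000  # arbitrary port chosen for this sketch


def broadcast_safety_message(vehicle_id, lat, lon, heading_deg, speed_mps):
    """Announce this vehicle's state to anyone listening nearby."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    message = {
        "id": vehicle_id,
        "lat": lat,
        "lon": lon,
        "heading": heading_deg,
        "speed": speed_mps,
        "time": time.time(),
    }
    sock.sendto(json.dumps(message).encode(), ("255.255.255.255", PORT))
    sock.close()


if __name__ == "__main__":
    # V2V safety messages are typically broadcast around 10 times per second.
    while True:
        broadcast_safety_message("demo-car", 37.7749, -122.4194, 90.0, 13.4)
        time.sleep(0.1)
```

Peterson’s question, in these terms, is whether a car that stops sending these packets thereby becomes illegal to operate.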
The full blog post is hard to excerpt, but Levinson emphasizes that if we come to rely on vehicle-to-vehicle communication to navigate intersections (for example), a bug in the system or an unexpected event (he suggests a deer crossing the road) could bring traffic to a halt and possibly cause massive collisions.
I’m a little less pessimistic on that front, but Levinson is a professor of transportation and has been working on this problem for a decade, so I might defer to his logic.
Pay attention to the part where he talks about compute platforms and power consumption. That was my team!
Well, a fully autonomous, SAE-defined Level 4-capable vehicle, which does not need a driver to take control, must be able to do everything a human driver can do behind the wheel. Our virtual driver system is designed to do just that. It is made up of the following pieces (sketched roughly in code after the list):
Sensors — LiDAR, cameras and radar
Algorithms for localization and path planning
Computer vision and machine learning
Highly detailed 3D maps
Computational and electronics horsepower to make it all work
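Here is that rough sketch: a hypothetical skeleton of a virtual-driver loop. None of the class or method names come from Ford; they are stubs meant only to show how sensors, localization, perception, maps, and planning compose into one loop that runs many times per second on the onboard compute.

```python
# Hypothetical skeleton of a virtual-driver loop: sensor readings in,
# actuation commands out. All names are illustrative stubs, not Ford's.
from dataclasses import dataclass


@dataclass
class Pose:
    x: float
    y: float
    heading: float


class VirtualDriver:
    def step(self, lidar, camera, radar, hd_map):
        """One cycle of the drive loop; runs many times per second."""
        pose = self.localize(lidar, hd_map)        # where are we on the map?
        obstacles = self.perceive(camera, radar)   # what is around us?
        trajectory = self.plan(pose, obstacles, hd_map)
        return self.control(trajectory)            # steer / throttle / brake

    def localize(self, lidar, hd_map):
        # Stub: match lidar returns against the highly detailed 3D map.
        return Pose(x=0.0, y=0.0, heading=0.0)

    def perceive(self, camera, radar):
        # Stub: computer vision and machine learning would run here.
        return []

    def plan(self, pose, obstacles, hd_map):
        # Stub: path planning around obstacles toward the destination.
        return [pose]

    def control(self, trajectory):
        # Stub: convert the planned trajectory into actuation commands.
        return {"steer": 0.0, "throttle": 0.0, "brake": 0.0}


if __name__ == "__main__":
    driver = VirtualDriver()
    print(driver.step(lidar=None, camera=None, radar=None, hd_map=None))
```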
Gizmodo has a short writeup of a crash in which Tesla Autopilot hit the brakes before the human driver realized what was going on, thereby avoiding a pileup.
In November I gave a talk at the Bay Area AI Meetup entitled “How to Become a Self-Driving Car Engineer.” A fair bit of the talk was an overview of the Udacity Self-Driving Car Engineer Nanodegree Program. But we also touched on a variety of other topics related to autonomous vehicles, particularly during the question and answer session.
Uber has expanded its self-driving taxi trial to the home of technology and autonomous vehicles: San Francisco. Starting 14 December, Uber customers with a credit card attached to a San Francisco billing address are eligible to ride in a fleet of five self-driving cars.
“Our cars departed for Arizona this morning by truck,” said an Uber spokesperson in an email to The Verge. “We’ll be expanding our self-driving pilot there in the next few weeks, and we’re excited to have the support of Governor Ducey.”
The move comes after California’s Department of Motor Vehicles revoked the registration of Uber’s 16 self-driving cars because the company refused to apply for the appropriate permits for testing autonomous cars.
They want to embed lidar in the grille of a car. This seems like a difficult vantage point, since the sensor won’t have a 360-degree view of the environment.
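Some quick arithmetic makes the concern concrete. Assuming, purely for illustration, that a grille-mounted unit sees a 120-degree forward wedge while a roof-mounted spinning lidar sweeps the full 360 degrees:

```python
# Back-of-the-envelope coverage comparison (the 120-degree figure is my
# assumption for illustration, not the vendor's published spec).
ROOF_COVERAGE_DEG = 360    # spinning roof lidar sweeps all bearings
GRILLE_FOV_DEG = 120       # hypothetical forward wedge from the grille

blind_arc = ROOF_COVERAGE_DEG - GRILLE_FOV_DEG
print(f"Grille mount covers {GRILLE_FOV_DEG / ROOF_COVERAGE_DEG:.0%} of bearings")
print(f"Blind arc: {blind_arc} degrees, mostly to the sides and rear")
```

Additional units at the corners and rear could presumably close that gap, but each one adds cost.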
They plan to deliver prototypes next summer.
Based on their website, they seem to target two markets: autonomous vehicles and the military.
They’re based in Bozeman, Montana, which is a great town, but hardly a tech hub. Given the cost of housing in Silicon Valley, though, I’m tempted to apply for a job there right now.
Chollet is the author of Keras, a deep learning library we use in the Udacity Self-Driving Car Program. He explains at length why artificial intelligence generally, much like autonomous driving specifically, is not a solved problem.
Overall: deep learning has made us really good at turning large datasets of perceptual inputs (images, sounds, videos) and simple human-annotated targets (e.g. the list of objects present in a picture) into models that can automatically map the inputs to the targets. That’s great, and it has a ton of transformative practical applications. But it’s still the only thing we can do really well. Let’s not mistake this fairly narrow success in supervised learning for having “solved” machine perception, or machine intelligence in general. The things about intelligence that we don’t understand still massively outnumber the things that we do understand, and while we are one step closer to general AI than we were ten years ago, it’s only by a small increment.
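As a concrete instance of the “inputs to targets” mapping Chollet describes, here is a minimal Keras model that learns to map images to object labels. Random arrays stand in for a real labeled dataset, just to keep the sketch self-contained and runnable; the point is the shape of the problem: perceptual inputs in, human-annotated targets out.

```python
# Minimal Keras example of supervised learning: map 32x32 RGB images
# to one of 10 object labels. Random arrays stand in for a real
# labeled dataset, purely to keep the sketch self-contained.
import numpy as np
from keras.models import Sequential
from keras.layers import Input, Conv2D, Flatten, Dense

x_train = np.random.rand(100, 32, 32, 3)         # 100 "images"
y_train = np.random.randint(0, 10, size=(100,))  # 100 integer "labels"

model = Sequential([
    Input(shape=(32, 32, 3)),
    Conv2D(16, (3, 3), activation="relu"),
    Flatten(),
    Dense(10, activation="softmax"),             # one score per label
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=32)
```

However powerful, this pipeline only learns the mapping it is shown, which is Chollet’s point: narrow supervised success, not general intelligence.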
Udacity’s partner, Auro Robotics, has been testing its self-driving shuttle on the campus of Santa Clara University for a year. In November they turned it loose on the public for the first time.
During my rides, it was clear that the students are used to the Auro — so used to it that they don’t even think about getting out of its way. That can lead to a somewhat frustrating ride as the vehicle patiently trails a slow-walking student; it has a horn, but is too polite to beep. Visitors to campus, however, are at first puzzled, then thrilled, to learn that they are being chauffeured in a car that is driving itself. And if you’re in the area, and have never had a ride in an autonomous vehicle, just stand in front of the parking garage for a while — the shuttle won’t ask to see your ID.