Deep Learning

I have been studying a little bit about deep learning recently, and hope to learn more over the next week.

In particular, I have been progressing through NVIDIA’s introductory Deep Learning course, which offers an overview of Deep Neural Networks (DNNs). The course covers three DNN frameworks (Caffe, Theano, and Torch) and one visualization tool (DIGITS).

This type of course is super-helpful, in that it’s geared toward practitioners and problem-solving rather than the theory of DNNs. The Caffe framework, combined with the DIGITS visualization tool, seems particularly well-suited to quickly constructing a DNN and seeing where it leads.
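For flavor (this fragment is my own illustration, not taken from the course materials): Caffe networks are declared in a plain-text prototxt file rather than in code, which is a big part of why it pairs so well with a visual tool. A minimal hidden layer might look something like:

```
# Hypothetical fragment of a Caffe .prototxt network definition
layer {
  name: "ip1"            # an inner-product (fully-connected) layer
  type: "InnerProduct"
  bottom: "data"         # input blob
  top: "ip1"             # output blob
  inner_product_param {
    num_output: 500      # number of hidden units
  }
}
layer {
  name: "relu1"          # ReLU nonlinearity applied in place
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
```

DIGITS then gives you a web UI for launching training runs against definitions like this and watching the loss curves as they train.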

So I’m a big fan of the NVIDIA course.

Next I’d like to take either Coursera’s Neural Networks for Machine Learning, or Udacity’s Deep Learning.

Coursera’s course is taught by the famed neural network researcher Geoffrey Hinton, whereas Udacity’s courses have a great UI and often a more practical (versus theoretical) approach.

I’ll let you know what I choose, and let me know if you have any recommendations!

Google’s (First?) Accident in Autonomous Mode

Google’s self-driving car has been in a number of accidents over the years, but none were the fault of the autonomous driving software. The accidents all either occurred when the vehicle was in “human-driver” mode or were the fault of the driver of a different vehicle (Google’s cars have been rear-ended several times).

On Valentine’s Day, however, Google filed an accident report for what may be the first accident for which the self-driving car software was at fault.

This was a very minor accident with no injuries, and it’s not completely clear from Google’s self-report who was at fault, although it seems like the Google car was. I would be curious to see how the insurance companies involved parcel out blame.

From the report:

A Google Lexus-model autonomous vehicle (“Google AV”) was traveling in autonomous mode eastbound on El Camino Real in Mountain View in the far right-hand lane approaching the Castro St. intersection. As the Google AV approached the intersection, it signaled its intent to make a right turn on red onto Castro St. The Google AV then moved to the right-hand side of the lane to pass traffic in the same lane that was stopped at the intersection and proceeding straight. However, the Google AV had to come to a stop and go around sandbags positioned around a storm drain that were blocking its path. When the light turned green, traffic in the lane continued past the Google AV. After a few cars had passed, the Google AV began to proceed back into the center of the lane to pass the sandbags. A public transit bus was approaching from behind. The Google AV test driver saw the bus approaching in the left side mirror but believed the bus would stop or slow to allow the Google AV to continue. Approximately three seconds later, as the Google AV was re-entering the center of the lane it made contact with the side of the bus. The Google AV was operating in autonomous mode and traveling less than 2 mph, and the bus was traveling at about 15 mph at the time of contact.

The Google AV sustained body damage to the left front fender, the left front wheel and one of its driver-side sensors. There were no injuries reported at the scene.

Ford’s CTO on Autonomous Vehicles

Automotive News has a short and fun interview with Raj Nair (whose LinkedIn title is “Executive Vice President and Chief Technical Officer — Global Product Development at Ford Motor Company”).

The write-up is mercifully concise. Here’s a highlight:

The human body is an amazing array of sensors — two great optical sensors, auditory sensors and balance sensors that provide information that the brain doesn’t only perceive but also filters.

To reproduce all of that with a combination of lidar [a laser-based ranging sensor, analogous to radar], radar and ultrasonic sensors is a big challenge. Then there are the algorithms. They are reasonably straightforward for the basic aspects of driving. If you can see the white lines it’s not that hard to steer the vehicle between them. But for all the other things that your mind works through when driving, you need to be prepared for all of them. This increases the level of sensor capability processing you need.

Read the whole thing.

Almono

About a year ago, Uber more or less bought out the famed robotics department at Carnegie Mellon University in Pittsburgh.

The goal, of course, was to acquire a team capable of building autonomous vehicles.

Since then, however, not much news has come out of Steel City regarding Uber’s autonomous vehicle plans.

This week, though, Uber put forth a plan to turn an old steel mill into a vehicle test-drive site. The site, known as Almono, is an exciting development for Pittsburgh.

There is some local opposition, however, mostly in the form of Pittsburgh-based Uber drivers who are not looking forward to being replaced by robots.

Environmental Impact

A team of researchers has put a little more thought into a topic I’ve been curious about for a while: what will be the environmental impact of self-driving cars?

The projection is that self-driving cars will increase energy consumption by 5% to 60%, as people increase automotive usage.

It’s worth noting that this is an increase in energy used, which is correlated with environmental impact, but the fuel used to provide the energy matters greatly. Gasoline is a relatively unfriendly fuel for the environment, whereas electric cars charged by wind power are very environmentally friendly. In between, it gets a little fuzzy, although studies indicate that driving an electric vehicle powered by coal-fired plants is probably worse than a gas-powered car.

Of course these studies rely on significant assumptions about how people will behave in a future autonomous vehicle world, and we’ll have to see how those assumptions play out in practice.

One line of thought is that autonomous vehicles may be more environmentally problematic in Europe, where they could displace currently common train travel, than in the US, where they will mostly substitute for human-driven vehicles.

Mobileye Powers Forward

One of the key suppliers for autonomous vehicle manufacturers is the Israeli company Mobileye. Mobileye specializes in computer vision and produces the chips and software to help cars “see” the road.

The company has had to deal with significant bad press over the last six months, including a rumor that Tesla was looking to drop it and move to another supplier. Tesla later denied that rumor.

Mobileye just announced strong fourth-quarter 2015 results, though, and things appear to be on the upswing.

According to TechRepublic, here are the big takeaways:

1. Already known for its partnership with Tesla, Mobileye has signed up its third major automaker, Nissan, to add to existing partnerships with GM and VW.

2. Automakers are outsourcing the creation of vision-assistance tech instead of doing it themselves.

3. By teaming up with automakers, Mobileye is gathering information from cars on the road, which can allow it to “crowdsource” maps. This access to data gives it an edge over tech companies like Google, which have not yet announced partnerships with car companies.

Ford Will Add a Lot of Software Engineers

The Mobile World Congress in Barcelona has expanded beyond strictly mobile technology, and has become something of a CES, Part II.

In that vein, Ford CEO Mark Fields announced at the conference that Ford will be tripling its investment in autonomous-driving engineering staff over the next five years.

“When the first Ford autonomous vehicle comes out, it will be an autonomous vehicle designed to serve millions of customers — not just for those who buy luxury vehicles,” Fields stated.

That’s pretty clearly aimed at Tesla.

To date, Ford has been the mainline auto manufacturer most dedicated to self-driving cars, with its Mcity testing and its University of Michigan partnerships.

With Toyota and Volvo and GM all stepping up their efforts, though, Ford is finally getting some competition. And thankfully they’re embracing the challenge.