One module in Udacity’s Self-Driving Car Nanodegree program will cover deep learning, with a focus on automotive applications.
We’ve decided to use the TensorFlow library that Google has built as the main tool for this module.
Caffe, an alternative framework, has lots of great research behind it, but TensorFlow uses Python, and our hope is that this will make learning it a lot easier for students.
Even with TensorFlow, however, we face a choice of which “front-end” framework to use. Should we use straight TensorFlow, TF Learn, Keras, or the new TF-Slim library that Google released within TensorFlow?
Right now we’re leaning toward TF Learn, almost by default. Straight TensorFlow is really verbose, and TF-Slim seems new and under-documented. Keras and TF Learn both seem solid, but the TF Learn syntax seems a little cleaner.
One big drawback to TF Learn, though, is the lack of easily integrated pre-trained models. I spent a while today trying to figure out how to migrate pre-trained AlexNet weights from Caffe to TF Learn.
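Part of what makes that migration fiddly, regardless of front-end: Caffe stores convolution weights as (output channels, input channels, height, width), while TensorFlow expects (height, width, input channels, output channels), so every conv layer’s weights need a transpose on the way over. Here’s a minimal NumPy sketch; the shape below is AlexNet’s first conv layer, and note that the original AlexNet’s grouped conv layers (conv2, conv4, conv5) need extra handling this skips:

```python
import numpy as np

def caffe_conv_to_tf(w_caffe):
    """Reorder Caffe conv weights (out, in, h, w) into
    TensorFlow's filter layout (h, w, in, out)."""
    return w_caffe.transpose(2, 3, 1, 0)

# AlexNet conv1: 96 filters, 3 input channels, 11x11 kernels.
w_caffe = np.zeros((96, 3, 11, 11), dtype=np.float32)
w_tf = caffe_conv_to_tf(w_caffe)
print(w_tf.shape)  # (11, 11, 3, 96)
```

Fully connected layers need a similar (out, in) → (in, out) transpose, which is where most Caffe-to-TensorFlow conversion scripts spend their effort.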
So far, no one solution is jumping out at me as perfect. Let me know in the comments if you’ve got a suggestion.