1st lesson of NVIDIA Deep Learning course

Yesterday the first lesson of NVIDIA’s free course on Deep Learning took place. I was curious about how much it could teach me: I have been studying and tinkering with deep learning frameworks for the last year, in order to prepare a prototype for a Startup Accelerator (which eventually rejected us :( ), but I don’t have a strong theoretical background in the field.

The course is organized into five online classes, with a Q&A session after each lesson. The next lesson is scheduled for 5th August 2015 at 9am Pacific Time (6pm in Italy). You need to register before the lesson in order to receive an email with the link to the live stream.

During the lesson the instructor announced that both the slides and the full video will be made available online, but as I write they have not been published yet.


The topic of the first lesson was a short overview of deep neural networks: what sets them apart from other machine learning approaches, and a bit of history, including their comeback from 2010 onwards after years (decades?) of darkness, thanks to new algorithms and the advent of powerful new GPUs with thousands of cores.

The instructor seemed highly competent and answered all the questions he was asked with ease (you can send questions through a small chat on the course website).

Unfortunately, the lesson covered pretty much only things I had already heard elsewhere. Still, I enjoyed listening to the history of deep learning told by the protagonists of the story themselves, such as Yann LeCun. LeCun was one of the pioneers of this field of research: he started in the 80s and “kind of carried the torch through the dark ages”, as Geoffrey Hinton (another prominent figure in the deep learning field) put it.

The next lesson will be about the DIGITS interactive training system, an open-source tool developed by NVIDIA to facilitate the training of deep networks such as deep convolutional neural networks. I had already encountered DIGITS during my experiments, and I have to admit it’s several orders of magnitude friendlier than the training routines provided with all the DL frameworks I have tried (Caffe, libccv, DeepBeliefSDK).


Each lesson is coupled with a lab powered by qwikLABS, a platform for real-time training on cloud environments. This lets students launch deep learning tasks without owning a CUDA GPU. To prevent people from exploiting the offered computational resources, there is a usage limit of 45 or 55 minutes on the environment.

The environment runs on the AWS cloud, in particular on the Linux GPU instances, offering an NVIDIA K520 GPU with 4 GB of RAM and 1536 CUDA cores: this configuration is more than enough for some tests and a course, but state-of-the-art CNN architectures require 6-8 GB of RAM to fit, so I guess you would need a bigger AWS GPU instance, or some kind of distributed algorithm to split the training across multiple nodes.
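To get a feel for why 4 GB can be tight, here is a back-of-envelope memory estimate of my own (not from the course): during training you hold weights, gradients, and optimizer state for every parameter, plus the activations of a whole batch. The parameter and activation counts below are rough assumptions for an AlexNet-scale network.

```python
def training_memory_gb(n_params, batch_activations, overhead=3.0):
    """Very rough estimate of training memory, assuming 32-bit floats.

    `overhead` folds weights + gradients + optimizer state into one
    multiplier on the parameter count; `batch_activations` is the total
    number of activation values stored for one training batch.
    """
    bytes_per_float = 4
    weights = n_params * bytes_per_float * overhead
    activations = batch_activations * bytes_per_float
    return (weights + activations) / 1024**3

# ~60M parameters (AlexNet-scale) and ~100M activation values per batch:
# both figures are assumptions for illustration only.
print(round(training_memory_gb(60e6, 100e6), 2))
```

Even this modest configuration lands around a gigabyte; deeper architectures and larger batches push the total toward the 6-8 GB range quickly.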

The lab was pretty straightforward: you didn’t have to write a single line of code, you just had to execute some given commands. The commands involved multiple DL frameworks, such as OverFeat, Caffe, Theano and Torch.

Users have to launch some training and classification tasks, and the lab points out the differences in performance between CPUs and GPUs on those tasks (we are talking about a 50x factor, and I am not sure the GPU tasks were even using the newest libraries that allow even better performance, like cuDNN). I hope the next lab will be a little more interactive.
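If you want to try this kind of CPU/GPU comparison yourself, Caffe ships a built-in benchmarking subcommand that times forward and backward passes. This is my own sketch, not the lab’s exact commands, and `deploy.prototxt` is a placeholder for whatever network definition you have at hand:

```shell
# Time forward/backward passes on the CPU (Caffe's default device)
caffe time -model deploy.prototxt -iterations 50

# Same benchmark on GPU 0; comparing the reported average
# per-iteration times gives the CPU-vs-GPU speedup factor.
caffe time -model deploy.prototxt -iterations 50 -gpu 0
```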


Summing up, this first lesson was fairly well organized, but the content was a little thin. After all, this was just an introductory lesson, so I am confident the next ones will be better.