Caveat lector: I began composing this back in January, then set it aside to work on thesis stuff, and it has been neglected ever since. NIPS is getting pretty far in the rear-view mirror, but this year's workshops were excellent and deserve some mention. So, though it's under-prepared, and looking back I see I didn't take nearly as many notes as I should have, I'm going to force this post out now. Please don't interpret the sparsity of my notes as a lack of interest.
The last two days of NIPS2014 were all about the workshops. On Friday, I attended the inaugural offering of Machine Learning for Clinical Data Analysis, as well as the workshop on Deep Learning and Representation Learning.
Machine Learning for Clinical Data Analysis
I wish I had taken better notes during the morning session of ml4chg. The invited talk by Gilles Clermont was structured around finding the right questions to ask about using machine learning in clinical settings. The ICU is expensive and technologically intensive; mortality is high, and so is risk. Doctors want to be convinced by more than figures showing good AUPR and AUROC: they want to be convinced that we can use fancy computer models to reduce errors and manage data. I left thinking that a lot of questions remain to be answered (and relationships between clinicians and researchers remain to be built) before any rubber hits the road. The Q&A session that closed out the morning underscored this last point.
Then I arrived at the first (!) poster session. Here are a couple of posters that caught my eye.
- Abraham Ruderman (NICTA): Pretraining speeds up search for good features
- Guillaume DesJardins (Université de Montréal): Deep Tempering
- Thomas Unterthiner (Johannes Kepler University): Rectified Factor Networks with Dropout
Invited talk by Rich Schwartz (BBN): Fast and Robust Shallow Neural Networks Joint Models for Machine Translation
- a shallow n-gram translation model; simpler was better, but no phrases or previous translations were incorporated.
- Yoshua had loads of ideas for improving what was presented.
Invited talk by Vlad Mnih (Google DeepMind): Deep Reinforcement Learning and REINFORCEd Deep Learning:
- RL and deep learning should be better friends.
- big problems: noisy, delayed signals. NNs don’t necessarily work well in non-stationary environments, which is the case in RL.
- Q-learning updates: a smoothed action-reward scheme. They froze the parameters of the network that generates the target values, and refreshed this frozen copy periodically.
- another nice thing about RL: non-differentiable components can be added to NNs using REINFORCE.
- an attention model reduces the amount of computation for convnets by keeping the amount of convolution fixed.
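The frozen-target trick from the Q-learning bullet above is easy to sketch. Here is a toy tabular version of the idea (my own illustration, not DeepMind's code; the chain environment, learning rate, and refresh interval are all made up): bootstrap targets come from a frozen copy of the value function, which is refreshed only periodically.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 5, 2
gamma = 0.9          # discount factor
alpha = 0.5          # learning rate
refresh_every = 50   # how often the frozen target copy is refreshed

# Online and target Q-functions (a real DQN uses a neural net;
# a table keeps this sketch short).
q_online = np.zeros((n_states, n_actions))
q_target = q_online.copy()

def step(state, action):
    """Toy deterministic chain: action 1 moves right, action 0 stays.
    Reward 1.0 only for reaching the last state."""
    next_state = min(state + action, n_states - 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

state = 0
for t in range(2000):
    # epsilon-greedy action selection
    if rng.random() < 0.1:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(q_online[state]))
    next_state, reward = step(state, action)
    # Key point from the talk: the bootstrap target uses the *frozen* copy.
    td_target = reward + gamma * np.max(q_target[next_state])
    q_online[state, action] += alpha * (td_target - q_online[state, action])
    if t % refresh_every == 0:
        q_target = q_online.copy()  # periodic refresh of the frozen copy
    state = 0 if next_state == n_states - 1 else next_state

# The greedy policy should learn to move right in every non-terminal state.
print(np.argmax(q_online, axis=1))
```

Without the frozen copy, the targets chase the very parameters being updated, which is one source of the instability mentioned above.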
Invited talk by Surya Ganguli (Stanford): From statistical physics to deep learning: on the beneficial role of dynamic criticality, random landscapes, and the reversal of time.
- good stuff from statistical physics can help deep learning (the published work is on his website)
- a theory of learning dynamics in deep networks says the number of training epochs need not increase with depth, if the initial weights are chosen properly.
- questions from Geoff Hinton, then Yann LeCun, then Yoshua Bengio. This got a big laugh from the audience, but was in itself instructive: the three wise men all believe that physics has something to teach us about the mechanics of error surfaces in deep learning.
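If I followed correctly, the "chosen properly" in the weight-dynamics bullet refers to random orthogonal initialization, as in Saxe, McClelland and Ganguli's work on learning dynamics in deep linear networks. A minimal numpy sketch of that initialization (my own, with an assumed `gain` parameter):

```python
import numpy as np

def orthogonal_init(fan_out, fan_in, gain=1.0, rng=None):
    """Random orthogonal weight matrix via QR decomposition.
    Orthogonal layers preserve the norm of signals propagating through a
    deep linear network, which is the property behind the claim that
    training time need not grow with depth."""
    rng = rng or np.random.default_rng()
    a = rng.standard_normal((fan_out, fan_in))
    q, r = np.linalg.qr(a)
    # Sign correction so the result is uniform over orthogonal matrices.
    q *= np.sign(np.diag(r))
    return gain * q

W = orthogonal_init(64, 64, rng=np.random.default_rng(1))
# Rows and columns are orthonormal: W @ W.T is (numerically) the identity.
print(np.allclose(W @ W.T, np.eye(64)))
```

Stacking such layers keeps the product of weight matrices well-conditioned at initialization, instead of exploding or vanishing with depth.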
That’s it. I took a few photos of posters whose details I wanted to recall later, but I won’t post them: while I asked permission to take the photos, I didn’t think to ask permission to share them. The Saturday workshops were all about MLCB for me, and I’ll devote a separate post to them.