NIPS 2017 Key Points & Summary Notes

Third-year Ph.D. student David Abel, of Brown University, was in attendance at NIPS 2017, and he laboriously compiled and formatted a fantastic 43-page set of notes for the rest of us. Get them here.



NIPS 2017 was held last week in Long Beach, and by all accounts it lived up to the hype. While I was not in attendance (I wish I had been), third-year Ph.D. student David Abel, of Brown University, was, and he laboriously compiled and formatted a fantastic 43-page set of notes which can only be described as inferiority-complex-inducing. He has made them available to all in PDF form and has encouraged their distribution.

While David was obviously not able to attend every talk and tutorial at NIPS, he clearly managed to pack his schedule, and we get to live vicariously through his experience, even if after the fact.

On behalf of all of us who were not able to temporarily append "@ #NIPS2017" to our Twitter display names, David, I thank you for your efforts. If you are interested in reading a discussion of these notes on Hacker News, you can find that here.

Also of note, David presented a paper titled "Toward Good Abstractions for Lifelong Learning" (David Abel, Dilip Arumugam, Lucas Lehnert, Michael L. Littman) at the NIPS Hierarchical RL Workshop.

A few of the highlights from David's notes follow (emphasis added), along with a couple of videos of referenced talks.


Ali Rahimi’s test of time talk. This had lots of conversation buzzing throughout the conference. In the second half, he presented some thoughts on the current state of machine learning research, calling for more rigor in our methods. This was heavily discussed throughout the conference, with most folks supporting Ali’s point (at least those I talked to), and a few others saying that his point isn’t grounded since some of the methods he seemed to be targeting (primarily deep learning) work so well in practice. My personal take is that he wasn’t necessarily calling for theory to back up our methods so much as rigor, and I take the call for rigor to be a poignant one. I think it’s uncontroversial to say that effective experimentation is a good thing for the ML community. What exactly that entails is of course up for debate (see next bullet).


Joelle Pineau’s talk on Reproducibility in Deep RL. One experiment showed that two approaches, let’s call them A and B, each dominated the other on the exact same task depending on the random seed chosen. That is, A achieved statistically significant superior performance over B with one random seed, while this dominance was flipped with a different random seed. I really like this work, and again take it to come at just the right time, particularly in Deep RL, where most results are of the form: “our algorithm did better on tasks X and Y”.
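To make the seed-sensitivity point concrete, here is a minimal, self-contained Python sketch. It is not code from Joelle's talk or David's notes; the evaluate function and all of its numbers are invented for illustration. It simulates two methods with identical true performance and shows how a single pair of seeds can still produce a "statistically significant" winner, which may flip under a different pair of seeds.

```python
import numpy as np
from scipy import stats

def evaluate(algorithm, seed, n_episodes=50):
    """Hypothetical stand-in for training and evaluating `algorithm`
    under one random seed; returns per-episode scores."""
    rng = np.random.default_rng(seed)
    # Toy model: A and B have identical true performance, but each run
    # gets a large seed-level offset, so a single seed can favor either.
    base = 100.0
    seed_offset = rng.normal(0, 15)                 # run-to-run variation
    return base + seed_offset + rng.normal(0, 5, size=n_episodes)

for trial in range(3):
    scores_a = evaluate("A", seed=trial)
    scores_b = evaluate("B", seed=trial + 100)      # B's run uses its own seed
    _, p = stats.ttest_ind(scores_a, scores_b, equal_var=False)
    winner = "A" if scores_a.mean() > scores_b.mean() else "B"
    print(f"seed pair {trial}: {winner} 'wins', p = {p:.3g}")
```

Because the per-seed offset dwarfs the within-run noise, each individual comparison tends to look decisive even though neither method is actually better, which is exactly the trap the talk warns about.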


Josh Tenenbaum is giving a talk on reverse engineering intelligence from human behavior.

Paper from Warneken and Tomasello [55] on emergent helping behavior in infants. He showed a video where a baby comes over and helps an adult without being told to do so. Cute!

Goal: Reverse-engineering common sense.

Tool: probabilistic programs. Models that can generate next states of the world, acting as an approximate “game engine in your head,” are really powerful, and potentially the missing piece. A mixture of intuitive physics and intuitive psychology (a toy sketch of this idea follows the list below).

Engineering common sense with probabilistic programs:

  • What? Modeling programs (game engine in your head).
  • How? Meta-programs for inference and model building, working at multiple timescales, trading off speed, reliability, and flexibility.
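As a rough illustration of the “game engine in your head” idea, here is a toy probabilistic-program sketch of my own. It is not code from the talk; simulate, infer_force, and the physics are all made up. A forward simulator generates outcomes from a latent cause, and generic inference over that latent input runs the engine "in reverse."

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(force, friction=0.5, dt=0.1, steps=20):
    """Toy 'game engine': slide a block with an initial push and friction."""
    pos, vel = 0.0, force
    for _ in range(steps):
        vel = max(vel - friction * dt, 0.0)
        pos += vel * dt
    return pos

def infer_force(observed_pos, n_samples=5000, noise=0.1):
    """Generic inference over the program's latent input: weight prior
    samples of the push force by how well the simulation matches the
    observed outcome (simple importance sampling)."""
    forces = rng.uniform(0.0, 5.0, size=n_samples)       # prior over pushes
    sims = np.array([simulate(f) for f in forces])
    weights = np.exp(-0.5 * ((sims - observed_pos) / noise) ** 2)
    return float(np.sum(weights * forces) / np.sum(weights))  # posterior mean

# "Seeing" where the block ended up, infer how hard it was pushed.
print(infer_force(observed_pos=simulate(2.0)))  # recovers roughly 2.0
```

The forward model plays the role of intuitive physics, and the importance-sampling step is a crude stand-in for the meta-programs for inference mentioned in the list above.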


Kate Crawford: The Trouble with Bias

Kate is amazing! Today she’s talking about bias in AI/ML.

Takeaway: Bias is a highly complex issue that permeates every aspect of machine learning. We have to ask: who is going to benefit from our work, and who might be harmed? To put fairness first, we must ask this question.

Download David's full PDF notes here.
