KDnuggets Home » News » 2015 » Mar » Opinions, Interviews, Reports » Juergen Schmidhuber AMA: The Principles of Intelligence and Machine Learning ( 15:n08 )

Juergen Schmidhuber AMA: The Principles of Intelligence and Machine Learning


Jürgen Schmidhuber, a pioneer of deep neural networks, answers questions on open code, general problem solvers, quantum computing, PhD students, online courses, and the neural network research community in this Reddit AMA.



By Grant Marshall.

Jürgen Schmidhuber, director of the Swiss AI Lab IDSIA, has worked on algorithms for deep learning since 1991. He began answering questions in his /r/MachineLearning AMA on March 4th, and there were certainly some interesting questions and even more interesting answers. Below we look at and discuss the top questions by upvotes.

1. Why doesn't your group post its code online for reproducing the results of competitions you've won, such as the ISBI Brain Segmentation Contest?

In response to this question, Dr. Schmidhuber first links the substantial amounts of open code from his lab like PyBrain and RNNLIB, much of which arose from competitions he and other members of his lab participated in. Beyond this, Dr. Schmidhuber explains that some code ends up involved in industrial projects, making it more difficult to release. In the near future, though, he says there are plans to release even more code for RNNs to act as a successor to PyBrain. That will certainly be exciting!

2. What is something that's true, but almost nobody agrees with you on?

This question is almost guaranteed to lead to interesting discussion, and in this case it did. Here are two of the points he gave in answer:
Many think that intelligence is this awesome, infinitely complex thing. I think it is just the product of a few principles that will be considered very simple in hindsight, so simple that even kids will be able to understand and build intelligent, continually learning, more and more general problem solvers. Partial justification of this belief:
(a) there already exist blueprints of universal problem solvers developed in my lab, in the new millennium, which are theoretically optimal in some abstract sense although they consist of just a few formulas ( unilearn.html, goedelmachine.html).
(b) The principles of our less universal, but still rather general, very practical, program-learning recurrent neural networks can also be described by just a few lines of pseudo-code, e.g., rnn.html, compressednetworksearch.html.

General purpose quantum computation won't work (my prediction of 15 years ago is still standing). Related: The universe is deterministic, and the most efficient program that computes its entire history is short and fast, which means there is little room for true randomness, which is very expensive to compute. What looks random must be pseudorandom, like the decimal expansion of Pi, which is computable by a short program. Many physicists disagree, but Einstein was right: no dice. There is no physical evidence to the contrary (randomness.html).
For example, Bell's theorem does not contradict this. And any efficient search in program space for the solution to a sufficiently complex problem will create many deterministic universes like ours as a by-product. Think about this. More here (computeruniverse.html) and here.
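Part (b) of the first answer above claims that practical recurrent networks can be described in a few lines of pseudo-code. As a rough illustration of that compactness (not Schmidhuber's own formulation, and with all names and dimensions chosen purely for the example), the forward pass of a plain RNN really is short:

```python
import numpy as np

def rnn_forward(xs, Wxh, Whh, Why, bh, by):
    """Forward pass of a vanilla RNN over a sequence of input vectors."""
    h = np.zeros(Whh.shape[0])           # hidden state starts at zero
    outputs = []
    for x in xs:
        # new hidden state mixes the current input with the previous state
        h = np.tanh(Wxh @ x + Whh @ h + bh)
        outputs.append(Why @ h + by)     # readout at each time step
    return outputs, h

# Tiny example: a 3-step sequence of 4-dim inputs, 5 hidden units, 2 outputs
rng = np.random.default_rng(0)
xs = [rng.standard_normal(4) for _ in range(3)]
Wxh = rng.standard_normal((5, 4)) * 0.1
Whh = rng.standard_normal((5, 5)) * 0.1
Why = rng.standard_normal((2, 5)) * 0.1
outs, h = rnn_forward(xs, Wxh, Whh, Why, np.zeros(5), np.zeros(2))
print(len(outs), outs[0].shape)
```

Training such a network (e.g., by backpropagation through time, or by the compressed network search mentioned in the linked pages) adds more machinery, but the core recurrence really is just the one line updating `h`.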


3. How do you recognize a promising machine learning PhD student?

This is definitely a challenging question. He points out that it's often not possible to recognize someone's brilliance immediately, because everyone has their own unique strengths. In general, though, he sees tenacity and a willingness to keep attacking a problem as the one unifying factor among successful students.

4. Do you plan on delivering an online course (e.g., on Coursera) for RNNs?

This is something I would be interested in seeing. Dr. Schmidhuber states that he's wanted to for years, but doesn't have the time. Interestingly, this question was also asked of Geoffrey Hinton in his AMA. If he does find the time, I'm sure it would be well-received!

5. Why is there not much interaction and collaboration between the researchers of Recurrent NNs and the rest of the NN community, particularly Convolutional NNs (e.g. Hinton, LeCun, Bengio)?

Dr. Schmidhuber responds to this question by pointing out the compatibility of CNNs and RNNs. He also links to recent research that integrates the two types of networks in particular applications, such as image caption generation. He further addresses the perceived differences in the two communities by pointing out that they are mostly geographical differences, not differences in methodologies or philosophies.

Conclusion

This post covered some of the top questions in the AMA, but it certainly didn't cover them all. If you're interested in seeing the rest of the questions and his responses, you can go see them here. At the time of this writing, he is also revisiting the AMA occasionally, so you may still have a chance to have a question answered!
