Why Does Deep Learning Work?

Many researchers have recently been trying to open the “black box” of deep learning. Here we summarize these efforts: how our understanding of deep neural networks has evolved, and how spin funnels and deep learning are related.

By Charles H Martin, PhD (Calculation Consulting).

This is the big question on everyone’s mind these days. C’mon, we all know the answer already:

“the long-term behavior of certain neural network models are governed by the statistical mechanics of infinite-range Ising spin-glass Hamiltonians” [1].  In other words,

Multilayer Neural Networks are just Spin Glasses?  Right?

This is kinda true, depending on what you mean by a spin glass.
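For concreteness, the textbook infinite-range Ising spin glass (the Sherrington-Kirkpatrick model; a standard definition, not anything specific to [1]) has the Hamiltonian

$$H(\mathbf{s}) = -\sum_{i<j} J_{ij}\, s_i s_j, \qquad s_i \in \{-1,+1\},$$

where every spin interacts with every other spin and the couplings $J_{ij}$ are frozen random variables (e.g., i.i.d. Gaussians). Random couplings of mixed sign create frustration, and frustration creates the famously rugged energy landscape.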

In a recent paper [1], LeCun and co-workers attempt to extend our understanding of training neural networks by studying the SGD approach to solving the multilayer neural network optimization problem.  Furthermore, they claim:

“None of these works however make the attempt to explain the paradigm of optimizing the highly non-convex neural network objective function through the prism of spin-glass theory and thus in this respect our approach is very novel.”

And, again, this is kinda true.

But here’s the thing… we already have a good idea of what the Energy Landscape of multiscale spin glass models* looks like, from early theoretical protein folding work (by Wolynes, Dill, etc. [2,3,4]).  In fact, here is a typical surface:

Energy Landscape of MultiScale Spin Glass Model

*[technically these are Ising spin models with multi-spin interactions]
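In a multi-spin (p-spin) model, the pairwise couplings above generalize to interactions among p spins at a time. Schematically (a standard textbook form, not the exact multi-scale model of [2,3,4]):

$$H(\mathbf{s}) = -\sum_{i_1 < \cdots < i_p} J_{i_1 \cdots i_p}\, s_{i_1} \cdots s_{i_p}.$$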

Let us consider the nodes, which above represent partially folded states, as nodes in a multiscale spin glass, or, say, a multilayer neural network.  Immediately we see the analogy and the appearance of the ‘Energy funnel’.  In fact, researchers studied these ‘folding funnels’ of spin glass models more than 20 years ago [2,3,4].  And we knew then that

as we increase the network size, the funnel gets sharper

3D Energy Landscape of the Folding Funnel of a Spin Glass

Note: the Wolynes protein-folding spin-glass model is significantly different from the p-spin Hopfield model that LeCun et al. discuss because it contains multi-scale, multi-spin interactions.  These details matter.

Spin glasses and spin funnels are quite different.  Spin glasses are highly non-convex, with lots of local minima, saddle points, etc.  Spin funnels, however, are designed to find the spin glass of minimal frustration, and they have a convex, funnel-shaped energy landscape.

This seemed to be necessary to resolve one of the great mysteries of protein folding: Levinthal’s paradox [5].  If nature just used statistical sampling to fold a protein, it would take longer than the known lifetime of the Universe.  This is why Machine Learning is not just statistics.
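The back-of-the-envelope arithmetic behind the paradox is easy to reproduce. Here is a minimal sketch using the usual illustrative textbook numbers (roughly 3 conformations per residue for a 100-residue chain, sampled at an optimistic 10^13 states per second; these numbers are illustrative, not taken from [5]):

```python
# Levinthal back-of-the-envelope: exhaustive statistical sampling is hopeless.
# Illustrative textbook numbers, not taken from [5].
residues = 100
states = 3 ** residues                 # ~5e47 possible chain conformations
rate = 1e13                            # optimistic samples per second
seconds_per_year = 3.15e7

years = states / rate / seconds_per_year
print(f"{years:.1e} years")            # ~1.6e27 years, vs ~1.4e10 years
                                       # for the age of the Universe
```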

Spin Funnels (DFM) vs Spin Glasses (SG) [4]

Deep Learning Networks are (probably) Spin Funnels

So with a surface like this, it is not so surprising that an SGD method might be able to find the Energy minimum (called the Native State in protein folding theory).  We just need to jump around until we reach the top of the funnel, and then it is a straight shot down.  This, in fact, defines a so-called ‘folding funnel’ [4].

So it is not surprising at all that SGD may work.
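To make the intuition concrete, here is a minimal sketch (a toy example of my own, not the model of [1] or [4]) of SGD-style noisy descent on a one-dimensional funneled landscape: a broad convex bowl with small rugged bumps near the bottom:

```python
import numpy as np

# Toy funneled landscape: a broad convex bowl plus mild roughness.
# The bumps only create shallow traps near the bottom of the funnel.
rng = np.random.default_rng(0)

def energy(x):
    return 0.5 * x**2 + 0.02 * np.sin(25 * x)

def grad(x):
    return x + 0.5 * np.cos(25 * x)            # dE/dx

x = 4.0                                         # start high on the funnel wall
for _ in range(2000):
    x -= 0.01 * (grad(x) + rng.normal(scale=1.0))  # noisy (SGD-style) step

print(f"x = {x:.3f}, E(x) = {energy(x):.4f}")   # lands near the funnel bottom
```

On a true glassy landscape, by contrast, deep traps appear everywhere, not just near the bottom, and a noisy descent like this can get stuck far from the global minimum.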

Recent research at Google and Stanford suggests that Deep Learning Energy Landscapes appear to be quite smooth and generally convex! [6]
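Their probe is simple to sketch: evaluate the training loss along the straight line in parameter space between the initial and final weights. Below is a minimal illustration; the quadratic `loss` and the two parameter vectors are hypothetical stand-ins for a real network:

```python
import numpy as np

# Linear-interpolation probe in the spirit of [6]: scan the loss along the
# line theta(alpha) = (1 - alpha) * theta_init + alpha * theta_final.
def loss(theta):
    return float(np.sum(theta**2))     # hypothetical stand-in for a net's loss

rng = np.random.default_rng(0)
theta_init = rng.normal(size=1000)     # e.g., weights at initialization
theta_final = np.zeros(1000)           # e.g., weights after training

for alpha in np.linspace(0.0, 1.0, 11):
    theta = (1 - alpha) * theta_init + alpha * theta_final
    print(f"alpha = {alpha:.1f}   loss = {loss(theta):8.2f}")
# For real networks, [6] found these curves to be surprisingly smooth,
# with no significant barriers along the path.
```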

Note that a real theory of protein folding, one that would actually be able to fold a protein correctly (e.g., Freed’s approach [7]), would be a lot more detailed than a simple spin glass model.  Likewise, real Deep Learning systems are going to have a lot more engineering details, to avoid overtraining (Dropout, Pooling, Momentum), than a theoretical spin funnel.

It is not that Deep Learning is non-convex; it is that we need to avoid overtraining.

Still, hopefully we can learn something using the techniques developed to study the energy landscapes of multi-scale spin glass / spin funnel models [8,9], thereby utilizing methods from theoretical chemistry and condensed matter physics.

Indeed, I believe this is the first conjecture that Supervised Deep Learning is related to a Spin Funnel.   In the next post, I will examine the relationship between Unsupervised Deep Learning and the Variational Renormalization Group [10].

Learn more about us at:   http://calculationconsulting.com

[1] LeCun et al., The Loss Surfaces of Multilayer Networks, 2015

[2] Spin glasses and the statistical mechanics of protein folding, PNAS, 1987

[3] Theory of Protein Folding: The Energy Landscape Perspective, Annu. Rev. Phys. Chem., 1997

[4] Energy Landscapes, Supergraphs, and “Folding Funnels” in Spin Systems, 1999

[5] From Levinthal to pathways to funnels, Nature Structural Biology, 1997

[6] Qualitatively Characterizing Neural Network Optimization Problems, Google Research, 2015

[7]  Mimicking the folding pathway to improve homology-free protein structure prediction, 2008 

[8] Funnels in Energy Landscapes, 2007

[9] Landscape Statistics of the Low Autocorrelated Binary String Problem, 2007

[10] A Common Logic to Seeing Cats and Cosmos, 2014

Original. Reposted by permission.

Bio: Charles H Martin, PhD, is a Data Scientist & Machine Learning Expert in the San Francisco Bay Area, currently with Calculation Consulting.
