
Some Musings on Capsule Networks and DLPaper2Code


The Godfather of Deep Learning did it again and came up with something brilliant: adding layers inside existing layers instead of simply adding more layers, i.e. nested layers... giving rise to Capsule Networks!



By Raksham Pandey, Vivekanand Education Society's Institute Of Technology.

Don’t you look at the CapsNet architecture and wonder... Wouldn’t it have been amazing if I had come up with this idea?

I mean, it was visible to all of us that pooling seemed just way too convenient amidst everything else about CNNs: simply selecting the maximum activation within a small window and passing only that on to the subsequent layers. Pooling was probably the easiest thing to visualize and understand in the entire architecture, which made it seem very crude.
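To make that concrete, here is a minimal NumPy sketch of max pooling (the function name and the toy feature map are mine, purely for illustration):

```python
import numpy as np

def max_pool_2d(feature_map, pool_size=2, stride=2):
    # Minimal 2x2 max pooling over a single-channel feature map:
    # in each window only the largest activation survives; its exact
    # position inside the window is thrown away.
    h, w = feature_map.shape
    out_h = (h - pool_size) // stride + 1
    out_w = (w - pool_size) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = feature_map[i * stride:i * stride + pool_size,
                                 j * stride:j * stride + pool_size]
            out[i, j] = window.max()
    return out

# A 4x4 feature map collapses to 2x2; the output remembers *that* a strong
# activation occurred in each window, but not *where* it occurred.
fm = np.array([[1, 3, 2, 0],
               [4, 6, 1, 1],
               [0, 2, 5, 7],
               [1, 1, 3, 2]], dtype=float)
print(max_pool_2d(fm))  # [[6. 2.]
                        #  [2. 7.]]
```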

But still, it was the Godfather of Deep Learning who did it again and came up with something brilliant: adding layers inside existing layers instead of simply adding more layers, i.e. nested layers... giving rise to Capsule Networks!
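If you want a quick feel for what a capsule outputs, here is a rough NumPy sketch of the squashing non-linearity from the CapsNet paper (routing-by-agreement is left out, and the helper name and toy inputs are my own):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squashing non-linearity from Sabour, Frosst & Hinton (2017):
    #   v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||)
    # Short vectors shrink towards zero length, long vectors saturate just
    # below length 1, so a capsule's output length can be read as the
    # probability that its entity is present, while the vector's direction
    # encodes the entity's pose.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

# Two capsule outputs: a weak one stays short (low confidence),
# a strong one ends up close to unit length (high confidence).
s = np.array([[0.1, 0.2, 0.05],
              [3.0, 4.0, 0.0]])
print(np.linalg.norm(squash(s), axis=-1))  # approx. [0.05, 0.96]
```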

Improvements in CNNs started in the direction of adding more and more layers and playing with parameters, and then gradually moved towards connecting distant layers to each other and making sense of their concatenated outputs, once it was observed that simply increasing the number of layers eventually hurts performance beyond a certain point.

Everyone was so engrossed in tweaking the existing architectures that thinking about having a modified fundamental unit never crossed any of our minds.

As Andrew Ng recently said, we need to stop concentrating on just publishing papers and instead start building things, because given the current pace of rapid publishing, each new paper is like adding a drop of water to a vast ocean.

“We have enough papers. Stop publishing, and start transforming people’s lives with technology!” 

— Andrew Ng

To create breakthroughs, we shouldn’t just keep refining the architectures that the pioneers of AI devised and stick to those concepts alone, but also try to think of something new... something groundbreaking! After all, no one really knows exactly how the brain functions.

“You can get inspiration from biology, but you don’t want to just copy it. Retracing evolution will be very difficult from an engineering point of view.” 

— Yann LeCun, Director of AI research at Facebook.

We accept the ideas that seem to work, look like the best replications of the human brain, and have mathematics to support them, but no one can guarantee that this is how the brain actually functions.

We need to come out of our shells and think outside the box... Who knows, any one of us could be the next disruptor!

Pioneers like Geoff Hinton, Andrew Ng, and Yoshua Bengio, my role models like Andrej Karpathy and Richard Socher, and so many other names that keep getting added to this Hall of Fame every day for their amazing contributions: I’m sure that even they want us to think differently and not just stick to what they say, because that’s how AI will move forward. Obviously, this is possible only when we go through their work in depth and get inspired to think in the direction of innovation!

I simply can’t miss out on mentioning the recently published amazing work by a group of smart minds, DLPaper2Code!

This mind-blowing piece of work by a team at IBM Research extracts and comprehends the deep learning design-flow diagrams and tables in a research paper, converts them into an abstract computational graph, and then turns that graph into ready-to-execute source code in both Keras and Caffe, all in real time!

As the team themselves mention, there’s still scope for improvement, such as expanding it to cover PyTorch and TensorFlow, and it may require some standardization in the way we write research papers. But it surely deserves all the praise in the world for tackling one of the most widespread problems with research papers: the absence of a reference implementation. Everyone can relate to this, as I’m sure we’ve all come across at least one paper whose contents looked like a web of concepts as complicated as rocket science!
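I’m not showing the authors’ own code here; just to give a flavour of what the final stage of such a pipeline could look like, below is a purely hypothetical sketch that renders a toy “abstract graph” as Keras code. The spec format, layer vocabulary and function names are invented for this post and are not DLPaper2Code’s actual representation.

```python
# Hypothetical illustration only: a toy "abstract computational graph"
# (a list of layer specs) rendered into Keras code as plain text.
abstract_graph = [
    {"type": "Conv2D", "filters": 32, "kernel": 3, "activation": "relu"},
    {"type": "MaxPooling2D", "pool": 2},
    {"type": "Flatten"},
    {"type": "Dense", "units": 10, "activation": "softmax"},
]

TEMPLATES = {
    "Conv2D": "layers.Conv2D({filters}, {kernel}, activation='{activation}')",
    "MaxPooling2D": "layers.MaxPooling2D({pool})",
    "Flatten": "layers.Flatten()",
    "Dense": "layers.Dense({units}, activation='{activation}')",
}

def graph_to_keras(graph):
    # Turn each node of the abstract graph into one line of Keras code.
    lines = ["model = keras.Sequential(["]
    lines += ["    " + TEMPLATES[node["type"]].format(**node) + "," for node in graph]
    lines.append("])")
    return "\n".join(lines)

print(graph_to_keras(abstract_graph))
```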

You see, that’s another strong step towards channeling the insane amount of information and theory proposed in innumerable research papers into things that could really be called 'intelligent' and could completely change the way we live our lives today.

Soon, ideas and imagination will be valued even more, now that even code can be obtained just from a description in a research paper. This definitely isn’t something to be scared of, as we will be able to concentrate more on innovation while more and more tasks get automated day by day.

Even though Apple arguably doesn’t seem to be leading the AI race, its old slogan is still something we should all continuously draw inspiration from —

Think Different.

Some useful sources:

Link to the Capsule Networks paper — https://arxiv.org/abs/1710.09829

CapsNet Implementation — https://github.com/naturomics/CapsNet-Tensorflow

Link to DLPaper2Code — https://arxiv.org/abs/1711.03543

Useful resources to understand Capsule Networks —

1) “What is a CapsNet or Capsule Network?”
https://hackernoon.com/what-is-a-capsnet-or-capsule-network-2bfbe48769cc

2) “Understanding Hinton’s Capsule Networks”
https://medium.com/@pechyonkin/understanding-hintons-capsule-networks-part-i-intuition-b4b559d1159b

3) “A Visual Representation of Capsule Network Computations” https://medium.com/@mike_ross/a-visual-representation-of-capsule-network-computations-83767d79e737


P.S. Special thanks to some really sweet people who inspire me to create these blog posts and help me out by giving their feedback before I publish them. Really appreciate your efforts! ☺

 
Bio: Raksham Pandey is a Data Scientist in the making... Electrified by AI... Passion for Deep Learning to solve problems that matter.

Original. Reposted with permission.
