Tricking Deep Learning
Deep neural networks have had remarkable success on many tasks, including image recognition. Read this overview of how they can be tricked, and why you should be aware of it.
Watching the Trickery
Here we show the trickery as it evolves. The most important aspects to pay attention to are the final predictions (bottom left) and the loss history (bottom right).
The panels show the progress of the optimization from the initial state, where the network is fairly confident the image is a man, to the final state, where it is confident the image is a pig. The upper left shows the image, the upper right shows the change from step to step, the lower left shows the predictions (the red curve is a perfect pig), and the lower right shows the loss curve, indicating how well the algorithm is optimizing the problem.
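The post does not include the attack code, so here is a minimal sketch of one way to reproduce it. Everything specific is an assumption: torchvision's pretrained inception_v3 stands in for "the Inception network", man.jpg is a hypothetical input file, the ImageNet 'hog' class stands in for 'pig', and we run ordinary gradient descent (Adam) on an additive perturbation of the image while the network's weights stay frozen:

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pretrained ImageNet classifier; inception_v3 is our stand-in
# for the exact network used in the post.
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(342),
    transforms.CenterCrop(299),   # Inception v3 expects 299x299 inputs
    transforms.ToTensor(),
])
img = preprocess(Image.open("man.jpg")).unsqueeze(0)  # hypothetical input file

# ImageNet normalization constants, applied inside the loop so the
# perturbation itself lives in ordinary [0, 1] pixel space.
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

target = torch.tensor([341])  # ImageNet class 341, 'hog' -- our 'pig'

# Only the perturbation is optimized; the network stays frozen.
delta = torch.zeros_like(img, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

loss_history = []
for step in range(300):
    opt.zero_grad()
    adv = torch.clamp(img + delta, 0, 1)      # keep pixels valid
    logits = model((adv - mean) / std)
    loss = F.cross_entropy(logits, target)    # push the prediction toward 'pig'
    loss.backward()
    opt.step()
    loss_history.append(loss.item())          # the lower-right panel's curve
```

Each step nudges the pixels slightly in the direction that makes 'pig' more likely; the loss_history collected here is exactly the curve shown in the lower-right panel.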
Implications
While the results might initially seem quite drastic, and it might seem logical to completely distrust any result coming from a neural network, that reaction would be exaggerated. Since we had access to the complete network and could run the optimization directly against it, the attack is significantly more successful than it would be against a black-box network (which is what most public image APIs expose, for example).
The more important take-away message is that these networks, even when trained on millions of images, still do not really 'understand' the images. Ultimately they are recognizing certain combinations of features, and this 'trickery' shows how little many of those features have to do with the actual content of the image. As these networks become more sophisticated and are trained on even larger datasets, many of these problems may be solved, or at least greatly diminished, on their own. Fundamentally, sanity checks like these are an important step in evaluating such networks and verifying that they are in fact learning the right information from images.
Additionally, new approaches to training and improving these networks, such as adversarial learning, can push them beyond what the training data alone provides. These ideas have already been deployed successfully in a number of applications, and adversarial self-play was one of the components that gave DeepMind's AlphaGo the upper hand.
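'Adversarial learning' covers several techniques; one concrete instance is adversarial training in the sense of Goodfellow et al., where adversarial examples are generated on the fly and mixed into each training batch. A minimal sketch (the FGSM step size eps and the 50/50 loss mix are assumptions, and model, opt, x, y are whatever your training loop already provides):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    # Fast Gradient Sign Method: a one-step adversarial perturbation.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return torch.clamp(x + eps * x.grad.sign(), 0, 1).detach()

def train_step(model, opt, x, y):
    # Train on a 50/50 mix of clean and adversarial examples so the
    # network learns to classify both correctly.
    x_adv = fgsm(model, x, y)
    opt.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) \
         + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()
```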
More Examples: Panda or Porcupine?
We can apply the same process as above to the standard panda image bundled with the Inception network. Here we apply a few small perturbations (more noticeable than with the jellyfish example) to turn the panda into a porcupine.
Making just small adjustments to an image can completely confuse even the most advanced deep learning algorithms
We can similarly plot the associated metrics and difference images to follow more closely what happens as these changes take place.
The panels show the progress of the optimization from the initial state, where the network is fairly confident the image is a panda, to the final state, where it is confident the image is a porcupine. The upper left shows the image, the upper right shows the change from step to step, the lower left shows the predictions (the red curve is a perfect porcupine), and the lower right shows the loss curve, indicating how well the algorithm is optimizing the problem.
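The post shows these panels as figures; a short matplotlib sketch, continuing from the attack code above (it reuses that sketch's img, delta, mean, std, model, and loss_history, with the target switched to ImageNet class 334, 'porcupine'), reproduces the same four-panel layout:

```python
import torch
import matplotlib.pyplot as plt

adv = torch.clamp(img + delta, 0, 1).detach()
with torch.no_grad():
    probs = torch.softmax(model((adv - mean) / std), dim=1)[0]

fig, ax = plt.subplots(2, 2, figsize=(8, 8))
ax[0, 0].imshow(adv[0].permute(1, 2, 0).numpy())   # upper left: the image
ax[0, 0].set_title("image")
diff = (adv - img)[0].permute(1, 2, 0).numpy()
ax[0, 1].imshow((0.5 + 10 * diff).clip(0, 1))      # upper right: change, 10x amplified
ax[0, 1].set_title("change (10x)")
ax[1, 0].plot(probs.numpy())                       # lower left: per-class predictions
ax[1, 0].set_title("predictions")
ax[1, 1].plot(loss_history)                        # lower right: loss history
ax[1, 1].set_title("loss")
plt.show()
```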
Man or Pig?
For this example we show a man being transformed into a pig without any perceptible change to the image. If you've ever wished you could determine how 'pig-like' your friend is, now you can put a number on it: 100% pig in 300 iterations or less.
The transformation from a man to a pig
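'100% pig in 300 iterations or less' amounts to a stopping rule inside the optimization loop from the first sketch: quit as soon as the softmax probability of the target class saturates (the 0.999 threshold is an assumption; it prints as 100% once rounded):

```python
# Added inside the optimization loop from the first sketch, after `logits`:
with torch.no_grad():
    p_pig = torch.softmax(logits, dim=1)[0, 341].item()
if p_pig > 0.999:
    print(f"{p_pig:.0%} pig at step {step}")
    break
```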
Bio: Kevin Mader is the cofounder of 4Quant Ltd and a lecturer in image analysis at ETH Zurich. He focuses on bringing big data and image processing together to improve medicine.
Original. Reposted with permission.