
Why unsupervised learning is more robust to adversarial distortions


Yoshua Bengio, a leading expert on Deep Learning, explains why good unsupervised learning should be much more robust to adversarial distortions than supervised learning.



By Gregory Piatetsky, @kdnuggets, Jan 30, 2015.

The excellent post by Zack Lipton (Deep Learning's Deep Flaws) examined the "flaws" found in deep learning algorithms, especially how one can generate adversarial examples that fool the algorithms. Zack argued that all machine learning algorithms are susceptible to adversarially chosen examples, and that we should not be surprised that deep learning has the same weakness as logistic regression.
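To make this concrete, here is a minimal sketch (my illustration, not taken from Zack's post) of the fast-gradient-sign idea behind many adversarially chosen examples, applied to a plain logistic-regression model; the weights and the input are made-up toy values:

```python
# Minimal sketch (illustrative toy values, not a real trained model):
# crafting an adversarially chosen input for logistic regression.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d = 400                        # input dimensionality (e.g. a 20x20 image)
w = rng.normal(size=d)         # hypothetical trained weights
b = 0.0

x = rng.normal(size=d)         # a hypothetical input
p_clean = sigmoid(w @ x + b)   # model's score for class 1

# The gradient of the logit w.r.t. x is just w, so stepping each input
# coordinate by eps in the direction -sign(w) shifts the logit by
# -eps * sum(|w_i|), which grows with the dimension d even though no
# single coordinate changes by more than eps.
eps = 0.1
x_adv = x - eps * np.sign(w)
p_adv = sigmoid(w @ x_adv + b)

print(f"clean score: {p_clean:.3f}  adversarial score: {p_adv:.3f}")
```

Because the gradient of the logit with respect to the input is just the weight vector, a perturbation that is tiny in every coordinate can add up to a large shift in the score in high dimensions, which is why even a simple linear model shares this weakness.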

This post generated many comments, including an interesting observation from Yoshua Bengio, one of the leading experts on Machine Learning and Deep Learning.

He wrote "I agree with (Zack Lipton) analysis, and I am glad that you have put this discussion online."

Yoshua continued:
My conjecture is that *good* unsupervised learning should generally be much more robust to adversarial distortions because it tries to discriminate the data manifold from its surroundings, in ALL non-manifold directions (at every point on the manifold). This is in contrast with supervised learning, which only needs to worry about the directions that discriminate between the observed classes. Because the number of classes is much smaller than the dimensionality of the space (for image data, at least), supervised learning is highly underconstrained, leaving many directions of change "unchecked" (i.e., directions to which the model is either insensitive when it should not be, or too sensitive in the wrong way).
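To see the geometry Bengio describes, here is a hedged numpy sketch (my toy example, not Bengio's): data lying on a low-dimensional linear "manifold", a supervised linear score that is blind to any perturbation orthogonal to its single weight vector, and a PCA reconstruction error standing in for good unsupervised learning, which responds to any off-manifold move:

```python
# Illustrative toy example (my sketch, not from Bengio's comment): data on a
# low-dimensional linear "manifold", a linear classifier score, and a PCA
# reconstruction error standing in for unsupervised manifold learning.
import numpy as np

rng = np.random.default_rng(1)
d, k, n = 100, 5, 1000                    # ambient dim, manifold dim, samples

# Data concentrated on a random k-dimensional subspace of R^d.
basis = np.linalg.qr(rng.normal(size=(d, k)))[0]   # d x k, orthonormal columns
X = rng.normal(size=(n, k)) @ basis.T              # points on the "manifold"

# A supervised linear score only "checks" one direction: its weight vector w.
w = rng.normal(size=d)
x = X[0]

# Perturb x in a direction orthogonal both to the manifold and to w.
v = rng.normal(size=d)
v -= basis @ (basis.T @ v)                # remove on-manifold components
v -= (v @ w) / (w @ w) * w                # remove the component along w
v /= np.linalg.norm(v)
x_adv = x + 5.0 * v                       # a large off-manifold distortion

print("classifier score change:", abs(w @ x_adv - w @ x))   # ~0: unchecked

# PCA reconstruction error (unsupervised): distance to the learned subspace.
U = np.linalg.svd(X - X.mean(0), full_matrices=False)[2][:k].T   # d x k
def recon_error(p):
    c = p - X.mean(0)
    return np.linalg.norm(c - U @ (U.T @ c))

print(f"recon error clean: {recon_error(x):.3f}  "
      f"adversarial: {recon_error(x_adv):.3f}")
```

The classifier's score is unchanged along the entire subspace orthogonal to w, exactly the "unchecked" directions, while the reconstruction error immediately flags the same distorted point as far from the data manifold.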
