- Adversarial Attacks on Explainable AI - Feb 9, 2021.
Are explainability methods themselves black boxes?
- Machine learning adversarial attacks are a ticking time bomb - Jan 29, 2021.
Software developers and cybersecurity experts have long fought the good fight against vulnerabilities in code to defend against hackers. A new, subtle approach to maliciously targeting machine learning models has been a hot topic in recent research, but the statistical nature of these so-called adversarial attacks makes them difficult to find and patch. Such threats are becoming imminent in the real world as the adoption of machine learning spreads, and a systematic defense must be implemented.
- Adversarial Examples in Deep Learning – A Primer - Nov 20, 2020.
Bigger compute has led to increasingly impressive SOTA results from deep learning computer vision models. However, most of these SOTA models are brought to their knees when making predictions on adversarial images. Read on to find out more.
- Are Computer Vision Models Vulnerable to Weight Poisoning Attacks? - Aug 17, 2020.
A recent paper has explored the possibility of influencing the predictions of a freshly trained Natural Language Processing (NLP) model by tweaking the weights re-used in its training. This result is especially interesting if it proves to transfer to the context of Computer Vision (CV) as well, since the usage of pre-trained weights is widespread there.
- Adversarial Validation Overview - Feb 13, 2020.
Learn how to implement adversarial validation, which builds a classifier to determine whether a given example comes from your training or test set. If the classifier can tell the two apart, your data has issues, and your adversarial validation model can help you diagnose the problem; a minimal sketch follows below.
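The core mechanic described here can be sketched in a few lines of scikit-learn. This is a hypothetical illustration rather than the article's own code: pool the training and test rows, label each by origin, and see how well a classifier separates them. A cross-validated AUC near 0.5 suggests the two sets are indistinguishable; a high AUC flags a distribution shift. The function name and the choice of random forest are assumptions for the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def adversarial_validation_auc(X_train, X_test):
    """Score how distinguishable train and test rows are (0.5 ~ indistinguishable)."""
    # Label each row by origin: 0 = train, 1 = test.
    X = np.vstack([X_train, X_test])
    y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])

    # Any reasonable classifier works; a random forest is just one choice.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
```

If the returned AUC is well above 0.5, inspecting the classifier's feature importances is a quick way to find which columns drift between the two sets.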
- Top 10 AI, Machine Learning Research Articles to know - Jan 30, 2020.
We’ve seen many predictions for what new advances are expected in the field of AI and machine learning. Here, we review a “data set” based on what researchers were apparently studying at the turn of the decade to take a fresh glimpse into what might come to pass in 2020.
- Intro to Adversarial Machine Learning and Generative Adversarial Networks - Oct 23, 2019.
In this crash course on GANs, we explore where they fit into the pantheon of generative models, how they've changed over time, and what the future has in store for this area of machine learning.
- Cartoon: AI + Self-Driving + BBQ = ? - Jul 4, 2019.
KDnuggets Cartoon looks at what happens when AI and self-driving technology collide with the traditional summer pastime of grilling.
- Why Machine Learning is vulnerable to adversarial attacks and how to fix it - Jun 13, 2019.
Machine learning models can exploit patterns in data that are imperceptible to humans to produce their results. These patterns are inherent in the data, but they may also make models vulnerable to adversarial attacks. How can developers harness these features without losing control of AI?
- ICLR 2019 highlights: Ian Goodfellow and GANs, Adversarial Examples, Reinforcement Learning, Fairness, Safety, Social Good, and all that jazz - May 27, 2019.
We provide an overview of the main themes and topics discussed at this year's International Conference on Learning Representations (ICLR).
- Interpolation in Autoencoders via an Adversarial Regularizer - Mar 29, 2019.
Adversarially Constrained Autoencoder Interpolation (ACAI; Berthelot et al., 2018) is a regularization procedure that uses an adversarial strategy to create high-quality interpolations of the learned representations in autoencoders.
- Breaking neural networks with adversarial attacks - Mar 7, 2019.
We develop an intuition behind "adversarial attacks" on deep neural networks, and understand why these attacks are so successful; a minimal sketch of the canonical attack follows below.
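To make the intuition concrete, the canonical example of such an attack is the Fast Gradient Sign Method (FGSM) of Goodfellow et al. Here is a minimal PyTorch sketch, assuming a classifier `model` and inputs scaled to [0, 1] (both assumptions of this sketch, not details from the article):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Fast Gradient Sign Method: take one epsilon-sized step in the
    direction that most increases the loss at the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Perturb each pixel by +/- epsilon, then clamp to the valid range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

The striking part is how small epsilon can be: perturbations invisible to a human are often enough to flip the model's prediction.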
- Machine Learning Security - Jan 25, 2019.
We take a look at how malicious actors can break machine learning models and what some of the best practices are when it comes to stopping them.
- Key Takeaways from AI Conference SF, Day 2: AI and Security, Adversarial Examples, Innovation - Oct 30, 2018.
Highlights and key takeaways from selected keynote sessions on day 2 of AI Conference San Francisco 2018.
- KDnuggets™ News 18:n39, Oct 17: 10 Best Mobile Apps for Data Scientist; Vote in new poll: Largest dataset you analyzed? - Oct 17, 2018.
Also: an interesting explanation of why adversarial examples arise; 5 clean code tips to improve your productivity; GitHub Python Data Science; and don't forget to vote in the new poll: What was the largest dataset you analyzed?
- Adversarial Examples, Explained - Oct 16, 2018.
Deep neural networks—the kind of machine learning models that have recently led to dramatic performance improvements in a wide range of applications—are vulnerable to tiny perturbations of their inputs. We investigate how to deal with these vulnerabilities; a sketch of one standard defense follows below.
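One standard way of dealing with them, not necessarily the article's own prescription, is adversarial training: augment each batch with attacked copies of the inputs so the model learns to resist the perturbation. A hedged sketch, reusing the hypothetical `fgsm_attack` helper from the earlier sketch:

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One training step on a 50/50 mix of clean and FGSM-perturbed inputs."""
    model.train()
    x_adv = fgsm_attack(model, x, y, epsilon)  # defined in the earlier sketch

    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Adversarial training is expensive (every step pays for an extra attack pass), which is part of why systematic defenses remain an open research problem.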
- Deep Conversations: Lisha Li, Principal at Amplify Partners - May 3, 2018.
Mathematician Lisha Li expounds on how she thrives as a venture capitalist at Amplify Partners, identifying, investing in, and nurturing the right startups in Machine Learning and Distributed Systems.
- Age of AI Conference 2018 – Day 2 Highlights - Feb 23, 2018.
Here are some of the highlights from the second day of the Age of AI Conference, February 1, at the Regency Ballroom in San Francisco.
- Cartoon: The First Ever Self-Driving, Deep Learning Grill - Jul 15, 2017.
New KDnuggets Cartoon looks at what happens when the self-driving craze collides with the traditional summer pastime of grilling.
- Top KDnuggets tweets, Feb 15-21: curated list of top #DeepLearning papers; Hill for the #DataScientist: An xkcd Story - Feb 22, 2017.
Sir Austin Bradford Hill for the #DataScientist: An xkcd Story; Attacking #machinelearning with adversarial examples; Hans Rosling: An Appreciation - Great Data Scientist, Great Human #RIP; The Most Popular Language For #MachineLearning and #DataScience Is ...
- Top arXiv Papers, January: ConvNets Advances, Wide Instead of Deep, Adversarial Networks Win, Learning to Reinforcement Learn - Feb 3, 2017.
Check out the top arXiv Papers from January, covering convolutional neural network advances, why wide may trump deep, generative adversarial networks, learning to reinforcement learn, and more.
- Domino Data Science Popup, San Francisco, Feb 22 – KDnuggets Offer - Jan 31, 2017.
Learn about the latest trends in data science applications in technology from the top experts in the industry. Register by Feb 8 and save with code KDNuggetsVIP.
- Adversarial Validation, Explained - Oct 7, 2016.
This post proposes and outlines adversarial validation, a method for selecting the training examples most similar to test examples and using them as a validation set, and provides a practical scenario for its usefulness; a short sketch of the idea follows below.
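The selection step described here follows naturally from the classifier sketched earlier in this list: score every training row by how "test-like" it looks, then hold out the highest-scoring rows as the validation set. Again a hypothetical sketch, not the post's own code; the function name and the 20% hold-out fraction are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def test_like_validation_split(X_train, X_test, frac=0.2):
    """Hold out the training rows that most resemble the test set."""
    X = np.vstack([X_train, X_test])
    y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])

    # Out-of-fold probability that each row comes from the test set.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]

    # Rank training rows by test-likeness; the top `frac` become validation.
    order = np.argsort(proba[: len(X_train)])
    split = len(X_train) - int(len(X_train) * frac)
    return order[:split], order[split:]  # (train indices, validation indices)
```

Using out-of-fold probabilities (rather than in-sample predictions) avoids the classifier simply memorizing which rows it saw during fitting.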
- Top /r/MachineLearning Posts, March: Hugs, Deep Learning Navigation, 3D Face Capture, AlphaGo! - Apr 4, 2016.
What's huggable, adversarial images for deep learning, overview of real-time 3D face capture and reenactment, deep learning quadcopter navigation, and a whole lot of AlphaGo!
- Top /r/MachineLearning Posts, November: TensorFlow, Deep Convolutional Generative Adversarial Networks, and lolz - Dec 2, 2015.
In November on /r/MachineLearning, we've got a good laugh, a fantastic image-generating deep convolutional generative adversarial network, and a whole lot of Google TensorFlow.
- Deep Learning Adversarial Examples – Clarifying Misconceptions - Jul 15, 2015.
Google scientist clarifies misconceptions and myths around Deep Learning Adversarial Examples, including: they do not occur in practice, Deep Learning is more vulnerable to them, they can be easily solved, and human brains make similar mistakes.
- Why unsupervised learning is more robust to adversarial distortions - Jan 30, 2015.
Yoshua Bengio, a leading expert on Deep Learning, explains why good unsupervised learning should be much more robust to adversarial distortions than supervised learning.