# Tag: Gradient Descent (26)

**Enabling the Deep Learning Revolution** - Dec 5, 2019.

Deep learning models are revolutionizing the business and technology world with jaw-dropping performances in one application area after another. Read this post on some of the component technologies that give deep learning its complex nonlinearity.

**Designing Your Neural Networks** - Nov 4, 2019.

Check out this step-by-step walkthrough of some of the more confusing aspects of neural nets, to guide you in making smart decisions about your neural network architecture.

**Introduction to Artificial Neural Networks** - Oct 8, 2019.

In this article, we’ll try to cover everything related to Artificial Neural Networks (ANNs).

**A Summary of DeepMind’s Protein Folding Upset at CASP13** - Jul 17, 2019.

Learn how DeepMind dominated the last CASP competition for advancing protein folding models. Their approach using gradient descent is today's state of the art for predicting the 3D structure of a protein from its amino acid sequence alone.

**10 Gradient Descent Optimisation Algorithms + Cheat Sheet** - Jun 26, 2019.

Gradient descent is an optimization algorithm used for minimizing the cost function in various ML algorithms. Here are some common gradient descent optimisation algorithms used in popular deep learning frameworks such as TensorFlow and Keras.

**How Optimization Works** - Apr 18, 2019.

Optimization problems are naturally described in terms of costs - money, time, resources - rather than benefits. In math, it's convenient to make all your problems look the same before you work out a solution, so that you only have to solve them once.

**Neural Networks with Numpy for Absolute Beginners — Part 2: Linear Regression** - Mar 7, 2019.

In this tutorial, you will learn to implement Linear Regression for prediction using Numpy in detail, and also visualize how the algorithm learns epoch by epoch. In addition, you will explore two-layer neural networks.

**An Intuitive Introduction to Gradient Descent** - Jun 21, 2018.

This post provides a good introduction to Gradient Descent, covering the intuition, the variants, and how to choose the learning rate.

**Deep Learning in H2O using R** - Jan 22, 2018.

This article is about implementing Deep Learning (DL) using the H2O package in R. We start with a background on DL, then cover some features of H2O's DL framework, and finish with an implementation using R.

**The 10 Deep Learning Methods AI Practitioners Need to Apply** - Dec 13, 2017.

Deep learning emerged from the past decade’s explosive computational growth as a serious contender in the field, winning many important machine learning competitions. The interest has not cooled as of 2017; today, we see deep learning mentioned in every corner of machine learning.

**Understanding Objective Functions in Neural Networks** - Nov 23, 2017.

This blog post is targeted towards people who have experience with machine learning, and want to get a better intuition on the different objective functions used to train neural networks.

**Neural Network Foundations, Explained: Updating Weights with Gradient Descent & Backpropagation** - Oct 25, 2017.

In neural networks, connection weights are adjusted in order to help reconcile the differences between the actual and predicted outcomes for subsequent forward passes. But how, exactly, do these weights get adjusted?

**How I started with learning AI in the last 2 months** - Oct 9, 2017.

Being a full stack developer will no longer be enough in the changing landscape of technology. In the next two years, full stack will not be full stack without AI skills.

**37 Reasons why your Neural Network is not working** - Aug 22, 2017.

Over the course of many debugging sessions, I’ve compiled my experience along with the best ideas around into this handy list. I hope it will be useful to you.

**Train your Deep Learning model faster and sharper: Snapshot Ensembling — M models for the cost of 1** - Aug 2, 2017.

We explain a novel Snapshot Ensembling method for increasing the accuracy of deep learning models while also reducing training time.

**Optimization in Machine Learning: Robust or global minimum?** - Jun 30, 2017.

Here we discuss how convex problems are solved and optimised in machine learning and deep learning.

**Machine Learning Crash Course: Part 1** - May 24, 2017.

This post, the first in a series of ML tutorials, aims to make machine learning accessible to anyone willing to learn. We’ve designed it to give you a solid understanding of how ML algorithms work, as well as the knowledge to harness them in your projects.

**KDnuggets™ News 17:n19, May 17: Guerrilla Guide to Machine Learning with R; 5 Machine Learning Projects You Can’t Overlook** - May 17, 2017.

The Guerrilla Guide to Machine Learning with R; 5 Machine Learning Projects You Can No Longer Overlook; The Two Phases of Gradient Descent in Deep Learning; HDFS vs. HBase: All you need to know; Must-Know: What are common data quality issues for Big Data and how to handle them?

**Top /r/MachineLearning Posts, April: Why Momentum Really Works; Machine Learning with Scikit-Learn & TensorFlow** - May 5, 2017.

Why Momentum Really Works; O'Reilly's Hands-On Machine Learning with Scikit-Learn and TensorFlow; Implemented BEGAN and saw a cute face at iteration 168k; Self-driving car course; Exploring the mysteries of Go; DeepMind Solves AGI

**KDnuggets™ News 17:n17, May 3: Learn Machine Learning… in 10 Days?!? Gradient Descent, Simplified** - May 3, 2017.

How to Learn Machine Learning in 10 Days; Keep it simple! How to understand Gradient Descent algorithm; The Guerrilla Guide to Machine Learning with Python; What Data You Analyzed - KDnuggets Poll Results and Trends; Cartoon: Machine Learning - What They Think I Do

**Keep it simple! How to understand Gradient Descent algorithm** - Apr 28, 2017.

Gradient Descent is one of the most important, and most difficult, concepts in data science. Here we explain it with an example, in a very simple way. Check this out.

**Learning to Learn by Gradient Descent by Gradient Descent** - Feb 2, 2017.

What if, instead of hand-designing an optimising algorithm (function), we *learn* it instead? That way, by training on the class of problems we’re interested in solving, we can learn an optimum optimiser for the class!

**The Gentlest Introduction to Tensorflow – Part 2** - Aug 19, 2016.

Check out the second and final part of this introductory tutorial to TensorFlow.

**The Gentlest Introduction to Tensorflow – Part 1** - Aug 17, 2016.

In this series of articles, we present the gentlest introduction to TensorFlow, starting with linear regression for a single-feature problem and expanding from there.

**A Concise Overview of Standard Model-fitting Methods** - May 27, 2016.

A very concise overview of 4 standard model-fitting methods, focusing on their differences: closed-form equations, gradient descent, stochastic gradient descent, and mini-batch learning.

**All Machine Learning Models Have Flaws** - Mar 3, 2015.

This classic post examines what is right and wrong with different models of machine learning, including Bayesian learning, Graphical Models, Convex Loss Optimization, Statistical Learning, and more.
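Several of the posts tagged above contrast closed-form model fitting with gradient descent. As a minimal, illustrative sketch (not taken from any of the linked articles; the synthetic data, learning rate, and iteration count are arbitrary assumptions), the two approaches can be compared on a toy linear regression:

```python
# Minimal sketch: fitting y = w*x + b by closed-form least squares
# versus batch gradient descent, on tiny synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * X[:, 0] + 0.5 + rng.normal(0, 0.01, size=100)

# Closed-form: solve the least-squares problem for [w, b] directly.
A = np.hstack([X, np.ones((len(X), 1))])          # add a bias column
w_closed, *_ = np.linalg.lstsq(A, y, rcond=None)

# Batch gradient descent on the mean squared error.
w = np.zeros(2)
lr = 0.5                                          # learning rate (assumed)
for _ in range(500):
    grad = 2 * A.T @ (A @ w - y) / len(y)         # gradient of the MSE
    w -= lr * grad

print(w_closed)   # close to the true parameters [3.0, 0.5]
print(w)          # gradient descent converges to the same minimum
```

Both routes arrive at essentially the same parameters here because the linear-regression loss is convex; for the non-convex losses of deep networks no closed form exists, which is why the gradient descent variants catalogued in the posts above matter.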