# Tag: Hyperparameter (20)

**Mastering the Learning Rate to Speed Up Deep Learning** - Nov 6, 2018.

Figuring out the optimal set of hyperparameters can be one of the most time-consuming parts of building a machine learning model, and that’s particularly true in deep learning.

**Implementing Automated Machine Learning Systems with Open Source Tools** - Oct 25, 2018.

What if you want to implement an automated machine learning pipeline of your very own, or automate particular aspects of a machine learning pipeline? Rest assured that there is no need to reinvent any wheels.

**The Intuitions Behind Bayesian Optimization with Gaussian Processes** - Oct 19, 2018.

Bayesian Optimization adds a Bayesian methodology to the iterative optimizer paradigm by incorporating a prior model on the space of possible target functions. This article introduces the basic concepts and intuitions behind Bayesian Optimization with Gaussian Processes.

**Beginners Ask “How Many Hidden Layers/Neurons to Use in Artificial Neural Networks?”** - Jul 16, 2018.

By the end of this article, you will at least have an idea of how these questions are answered and be able to test yourself on simple examples.

**Deep Learning Tips and Tricks** - Jul 4, 2018.

This post is a distilled collection of conversations, messages, and debates on how to optimize deep models. If you have tricks you’ve found impactful, please share them in the comments below!

**Deep Quantile Regression** - Jul 3, 2018.

Most deep learning frameworks currently focus on giving a best estimate as defined by a loss function. Occasionally, something beyond a point estimate is required to make a decision, and this is where a distribution would be useful. This article focuses purely on inferring quantiles.

**Batch Normalization in Neural Networks** - Jun 26, 2018.

This article explains batch normalization in a simple way. I wrote it based on what I learned from Fast.ai and deeplearning.ai.

**Deep Learning Best Practices – Weight Initialization** - Jun 21, 2018.

In this post I am going to talk about the issues related to the initialization of weight matrices and ways to mitigate them. Before that, let’s cover some basics and the notation we will be using going forward.

**Improving the Performance of a Neural Network** - May 30, 2018.

There are many techniques available that could help us achieve that. Follow along to get to know them and to build your own accurate neural network.

**Understanding Learning Rates and How It Improves Performance in Deep Learning** - Feb 1, 2018.

Furthermore, the learning rate affects how quickly our model can converge to a local minimum (i.e., arrive at the best accuracy). Getting it right from the get-go means less time spent training the model.

**Using AutoML to Generate Machine Learning Pipelines with TPOT** - Jan 29, 2018.

This post will take a different approach to constructing pipelines. Certainly the title gives away this difference: instead of hand-crafting pipelines, optimizing hyperparameters, and performing model selection ourselves, we will automate these processes.

**Managing Machine Learning Workflows with Scikit-learn Pipelines Part 3: Multiple Models, Pipelines, and Grid Searches** - Jan 24, 2018.

In this post, we will use grid search to optimize models built from a number of different types of estimators, which we will then compare in order to properly evaluate the best hyperparameters each model has to offer.

**Managing Machine Learning Workflows with Scikit-learn Pipelines Part 2: Integrating Grid Search** - Jan 19, 2018.

Another simple yet powerful technique we can pair with pipelines to improve performance is grid search, which attempts to optimize model hyperparameter combinations.

**Is Learning Rate Useful in Artificial Neural Networks?** - Jan 15, 2018.

This article will help you understand why we need the learning rate and whether it is useful for training an artificial neural network. Using very simple Python code for a single-layer perceptron, we will vary the learning rate value to build an intuition for its effect.

**Estimating an Optimal Learning Rate For a Deep Neural Network** - Nov 21, 2017.

This post describes a simple and powerful way to find a reasonable learning rate for your neural network.

**Stop Doing Fragile Research** - Nov 17, 2017.

If you develop methods for data analysis, you might only be conducting gentle tests of your method on idealized data. This leads to “fragile research,” which breaks when released into the wild. Here, I share 3 ways to make your methods robust.

**A Vision for Making Deep Learning Simple** - Sep 5, 2017.

This post introduces Deep Learning Pipelines from Databricks, a new open-source library aimed at enabling everyone, from machine learning practitioners to business analysts, to easily integrate scalable deep learning into their workflows.

**The Current State of Automated Machine Learning** - Jan 18, 2017.

What is automated machine learning (AutoML)? Why do we need it? What are some of the AutoML tools that are available? What does its future hold? Read this article for answers to these and other AutoML questions.

**Contest Winner: Winning the AutoML Challenge with Auto-sklearn** - Aug 5, 2016.

This post is the first-place winner of the recent KDnuggets blog contest. Auto-sklearn is an open-source Python tool that automatically determines effective machine learning pipelines for classification and regression datasets. It is built around the successful scikit-learn library and won the recent AutoML challenge.

**TPOT: A Python Tool for Automating Data Science** - May 13, 2016.

TPOT is an open-source Python data science automation tool, which operates by optimizing a series of feature preprocessors and models, in order to maximize cross-validation accuracy on data sets.
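The grid-search and AutoML entries above all revolve around the same core idea that TPOT automates (in its case, via genetic programming over scikit-learn pipelines): searching over combinations of preprocessors, models, and hyperparameters, scored by cross-validation. The sketch below is a minimal, dependency-free illustration of that search space, not TPOT's actual implementation; the data, preprocessor, and model names are invented, and the search is exhaustive rather than evolutionary.

```python
import itertools
import statistics

# Toy 1-D dataset (invented for illustration): label is 1 when x > 5.
X = [0.5, 1.2, 2.0, 3.1, 4.4, 5.5, 6.1, 7.3, 8.8, 9.9]
y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

# Candidate "feature preprocessors": identity vs. min-max scaling.
def identity(xs):
    return list(xs)

def minmax(xs):
    lo, hi = min(xs), max(xs)
    return [(v - lo) / (hi - lo) for v in xs]

# Candidate "model": a threshold classifier with one hyperparameter.
def predict(xs, threshold):
    return [1 if v > threshold else 0 for v in xs]

def accuracy(preds, ys):
    return sum(p == t for p, t in zip(preds, ys)) / len(ys)

def cross_val_accuracy(xs, ys, threshold, k=5):
    # Score the classifier on each of k held-out folds and average.
    fold = len(xs) // k
    scores = []
    for i in range(k):
        idx = range(i * fold, (i + 1) * fold)
        preds = predict([xs[j] for j in idx], threshold)
        scores.append(accuracy(preds, [ys[j] for j in idx]))
    return statistics.mean(scores)

preprocessors = {"identity": identity, "minmax": minmax}
thresholds = [0.25, 0.5, 0.75, 3.0, 5.0]

# Exhaustive search over every (preprocessor, hyperparameter) combination,
# keeping the one with the best cross-validated accuracy.
best = max(
    itertools.product(preprocessors, thresholds),
    key=lambda c: cross_val_accuracy(preprocessors[c[0]](X), y, c[1]),
)
print("best combination:", best)
```

TPOT explores a far richer space of real scikit-learn pipelines and exports the winner as runnable Python code; the loop above is only meant to make the underlying search concrete.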