- Improving model performance through human participation - Apr 23, 2021.
Certain industries, such as medicine and finance, are sensitive to false positives. Using human input in the model inference loop can increase the final precision and recall. Here, we describe how to incorporate human feedback at inference time, so that Machines + Humans = Higher Precision & Recall.
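The human-in-the-loop idea the article describes can be sketched in a few lines: predictions below a confidence threshold are deferred to a human reviewer, whose label replaces the model's guess. All names here (`route_prediction`, the reviewer callback) are illustrative, not from the article.

```python
# Minimal sketch of human-in-the-loop inference (hypothetical names):
# confident predictions pass through; uncertain ones go to a reviewer.

def route_prediction(label, confidence, human_review, threshold=0.8):
    """Return the model's label if confident, else defer to a human reviewer."""
    if confidence >= threshold:
        return label, "model"
    return human_review(label), "human"

# Example: a reviewer who overrides a low-confidence call on a transaction.
reviewer = lambda suggested: "fraud"
print(route_prediction("fraud", 0.95, reviewer))   # confident -> model's label
print(route_prediction("legit", 0.55, reviewer))   # uncertain -> human decides
```

Only the low-confidence slice of traffic reaches the reviewer, which is how the combination can raise precision and recall without reviewing every prediction.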
- Metric Matters, Part 1: Evaluating Classification Models - Mar 16, 2021.
You have many options when choosing metrics for evaluating your machine learning models. Select the right one for your situation with this guide that considers metrics for classification models.
- Evaluating Deep Learning Models: The Confusion Matrix, Accuracy, Precision, and Recall - Feb 19, 2021.
This tutorial discusses the confusion matrix: how precision, recall, and accuracy are calculated from it, and how they relate to evaluating deep learning models.
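The calculations the tutorial walks through can be reproduced in a few lines of plain Python: count the four confusion-matrix cells for a binary classifier, then derive accuracy, precision, and recall from them (the function name and sample arrays below are illustrative).

```python
# Confusion-matrix cells and derived metrics for a binary classifier.

def confusion_counts(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
tp, fp, fn, tn = confusion_counts(y_true, y_pred)

accuracy  = (tp + tn) / (tp + fp + fn + tn)  # all correct / all predictions
precision = tp / (tp + fp)                   # of predicted positives, how many are real
recall    = tp / (tp + fn)                   # of real positives, how many were found

print(accuracy, precision, recall)  # 0.75 0.75 0.75 on this toy data
```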
- How to Evaluate the Performance of Your Machine Learning Model - Sep 3, 2020.
You can train your supervised machine learning models all day long, but unless you evaluate their performance, you can never know whether they are useful. This detailed discussion reviews the various performance metrics you must consider, and offers intuitive explanations of what they mean and how they work.
- Idiot’s Guide to Precision, Recall, and Confusion Matrix - Jan 13, 2020.
Building Machine Learning models is fun, but making sure we build the best ones is what makes a difference. Follow this quick guide to appreciate how to effectively evaluate a classification model, especially for projects where accuracy alone is not enough.
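The "accuracy alone is not enough" point is easy to demonstrate: on a heavily imbalanced dataset, a degenerate model that always predicts the majority class scores high accuracy while catching none of the positives. The numbers below are illustrative only.

```python
# Why accuracy alone can mislead on imbalanced data.

y_true = [1] * 5 + [0] * 95   # 5 positives, 95 negatives
y_pred = [0] * 100            # degenerate "always negative" model

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
recall = tp / (tp + fn)

print(accuracy)  # 0.95 -- looks great
print(recall)    # 0.0  -- finds no positives at all
```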
- Top KDnuggets tweets, Dec 11-17: Idiot’s Guide to Precision, Recall and Confusion - Dec 20, 2019.
Idiot's Guide to Precision, Recall and Confusion Matrix; 10 Free Must-Read Books for Machine Learning and Data Science; How to Speed up Pandas by 4x with one line of code; #Math for Programmers teaches you the math you need to know.
- The Best Metric to Measure Accuracy of Classification Models - Dec 7, 2016.
Measuring the accuracy of a model for a classification problem (categorical output) is complex and time-consuming compared to regression problems (continuous output). Let's understand the key testing metrics, with examples, for a classification problem.
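One metric that often comes up when comparing classification models is the F1 score, the harmonic mean of precision and recall; it punishes a large gap between the two. A quick worked example (the function below is a sketch, not code from the article):

```python
# F1 score: harmonic mean of precision and recall.

def f1_score(precision, recall):
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.75, 0.75))  # 0.75 -- balanced precision and recall
print(f1_score(0.90, 0.10))  # ~0.18 -- high precision can't hide poor recall
```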
- Dealing with Unbalanced Classes, SVMs, Random Forests®, and Decision Trees in Python - Apr 29, 2016.
An overview of dealing with unbalanced classes, and implementing SVMs, Random Forests, and Decision Trees in Python.
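One simple remedy for unbalanced classes, sketched here without any ML library, is to randomly oversample the minority class until both classes are the same size; scikit-learn users can often get a similar effect by passing `class_weight="balanced"` to classifiers such as `SVC` or `RandomForestClassifier`. The function and data below are illustrative.

```python
# Random oversampling of the minority class (binary labels assumed).
import random

def oversample(samples, labels, minority=1, seed=0):
    """Duplicate random minority-class rows until the classes are balanced."""
    rng = random.Random(seed)
    minority_idx = [i for i, y in enumerate(labels) if y == minority]
    majority_idx = [i for i, y in enumerate(labels) if y != minority]
    extra = [rng.choice(minority_idx)
             for _ in range(len(majority_idx) - len(minority_idx))]
    idx = majority_idx + minority_idx + extra
    return [samples[i] for i in idx], [labels[i] for i in idx]

X = [[0.1], [0.2], [0.3], [0.4], [0.9]]
y = [0, 0, 0, 0, 1]                      # 4 negatives, 1 positive
X_bal, y_bal = oversample(X, y)
print(sum(y_bal), len(y_bal))            # 4 positives out of 8 samples
```

Oversampling is only one option; undersampling the majority class or using class weights in the loss are common alternatives with different trade-offs.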
- 21 Must-Know Data Science Interview Questions and Answers - Feb 11, 2016.
KDnuggets Editors bring you the answers to 20 Questions to Detect Fake Data Scientists, including what is regularization, Data Scientists we admire, model validation, and more.
- How to Balance the Five Analytic Dimensions - Sep 3, 2015.
When developing a solution, one has to consider data complexity, speed, analytic complexity, accuracy & precision, and data size. It is not possible to be best in all categories, but it is necessary to understand the trade-offs.