# Top 10 Data Mining Algorithms, Explained

Top 10 data mining algorithms, selected by top researchers, are explained here: what they do, the intuition behind each algorithm, available implementations, why you’d use them, and interesting applications.

**7. AdaBoost**

**What does it do?** AdaBoost is a boosting algorithm which constructs a classifier.

As you probably remember, a classifier takes a bunch of data and attempts to predict or classify which class a new data element belongs to.

**But what’s boosting?** Boosting is an ensemble learning algorithm which takes multiple learning algorithms (e.g. decision trees) and combines them. The goal is to take an ensemble or group of weak learners and combine them to create a single strong learner.

**What’s the difference between a strong and weak learner?** A weak learner classifies with accuracy barely above chance. A popular example of a weak learner is the decision stump, which is a one-level decision tree.

Alternatively…

A strong learner has much higher accuracy, and an often used example of a strong learner is SVM.

**What’s an example of AdaBoost?** Let’s start with 3 weak learners. We’re going to train them over 10 rounds on a training dataset containing patients’ medical records.

The question is…

How can we predict whether the patient will get cancer?

Here’s how AdaBoost answers the question…

**In round 1:** AdaBoost takes a sample of the training dataset and tests to see how accurate each learner is. The end result is we find the best learner.

In addition, samples that are misclassified are given a heavier weight, so that they have a higher chance of being picked in the next round.

One more thing: the best learner is also given a weight depending on its accuracy and is incorporated into the ensemble of learners (right now there’s just 1 learner).

**In round 2:** AdaBoost again attempts to look for the best learner.

And here’s the kicker:

The sample of patient training data is now influenced by the heavier weights assigned to misclassified samples. In other words, previously misclassified patients have a higher chance of showing up in the sample.

Why?

It’s like getting to the second level of a video game and not having to start all over again when your character is killed. Instead, you start at level 2 and focus all your efforts on getting to level 3.

Likewise, the first learner likely classified some patients correctly. Instead of trying to classify them again, let’s focus all the efforts on getting the misclassified patients.

The best learner is again weighted and incorporated into the ensemble; misclassified patients are reweighted so they have a higher chance of being picked; and we rinse and repeat.

**At the end of the 10 rounds:** We’re left with an ensemble of weighted learners trained and then repeatedly retrained on misclassified data from the previous rounds.
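Those rounds can be sketched in plain Python. This is a minimal from-scratch AdaBoost over decision stumps, run on a hypothetical 1-D toy dataset (the data, the threshold grid, and the 3-round setting are made up for illustration, not taken from the patient example):

```python
import math

# Hypothetical 1-D toy dataset: points and their ±1 class labels.
X = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [1, 1, 1, -1, -1, -1, -1, 1, 1, 1]

def stump_predict(threshold, polarity, x):
    # A decision stump (one-level tree): +polarity below the threshold, -polarity above.
    return polarity if x < threshold else -polarity

def train_adaboost(X, y, rounds):
    n = len(X)
    w = [1.0 / n] * n  # round 1: every sample is equally likely to matter
    ensemble = []      # list of (alpha, threshold, polarity) weak learners
    for _ in range(rounds):
        # Find the stump with the lowest weighted error on the current weights.
        best = None
        for threshold in range(11):
            for polarity in (1, -1):
                err = sum(wi for wi, xi, yi in zip(w, X, y)
                          if stump_predict(threshold, polarity, xi) != yi)
                if best is None or err < best[0]:
                    best = (err, threshold, polarity)
        err, threshold, polarity = best
        err = max(err, 1e-10)                    # guard against log of zero
        alpha = 0.5 * math.log((1 - err) / err)  # learner weight: higher accuracy, higher alpha
        ensemble.append((alpha, threshold, polarity))
        # Reweight: misclassified samples get heavier, then normalize.
        w = [wi * math.exp(-alpha * yi * stump_predict(threshold, polarity, xi))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    # Weighted vote of all the weak learners in the ensemble.
    score = sum(alpha * stump_predict(t, p, x) for alpha, t, p in ensemble)
    return 1 if score >= 0 else -1

ensemble = train_adaboost(X, y, rounds=3)
print([predict(ensemble, x) for x in X])  # matches y: [1, 1, 1, -1, -1, -1, -1, 1, 1, 1]
```

No single stump can separate this dataset, but three weighted stumps together classify it perfectly: exactly the weak-learners-into-a-strong-learner effect described above.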

**Is this supervised or unsupervised?** This is supervised learning, since each iteration trains the weak learners with the labeled dataset.

**Why use AdaBoost?** AdaBoost is simple. The algorithm is relatively straightforward to program.

In addition, it’s fast! Weak learners are generally simpler than strong learners. Being simpler means they’ll likely execute faster.

Another thing…

It’s a super elegant way to auto-tune a classifier, since each successive AdaBoost round refines the weights for each of the best learners. All you need to specify is the number of rounds.

Finally, it’s flexible and versatile. AdaBoost can incorporate any learning algorithm, and it can work with a large variety of data.

**Where is it used?** AdaBoost has a ton of implementations and variants; scikit-learn’s AdaBoostClassifier and Weka’s AdaBoostM1 are two well-known ones.

If you like Mr. Rogers, you’ll like the next algorithm…

**8. kNN**

**What does it do?** kNN, or k-Nearest Neighbors, is a classification algorithm. However, it differs from the classifiers previously described because it’s a lazy learner.

**What’s a lazy learner?** A lazy learner doesn’t do much during the training process other than store the training data. Only when new unlabeled data is input does this type of learner look to classify.

On the other hand, an eager learner builds a classification model during training. When new unlabeled data is input, this type of learner feeds the data into the classification model.

**How do C4.5, SVM and AdaBoost fit into this?** Unlike kNN, they are all eager learners.

Here’s why:

- C4.5 builds a decision tree classification model during training.
- SVM builds a hyperplane classification model during training.
- AdaBoost builds an ensemble classification model during training.

**So what does kNN do?** kNN builds no such classification model. Instead, it just stores the labeled training data.

When new unlabeled data comes in, kNN operates in 2 basic steps:

- First, it looks at the closest labeled training data points — in other words, the k-nearest neighbors.
- Second, using the neighbors’ classes, kNN gets a better idea of how the new data should be classified.

You might be wondering…

**How does kNN figure out what’s closer?** For continuous data, kNN uses a distance metric like Euclidean distance. The choice of distance metric largely depends on the data. Some even suggest learning a distance metric based on the training data. There are tons more details and papers on kNN distance metrics.

For discrete data, the idea is to transform discrete data into continuous data. 2 examples of this are:

- Using Hamming distance as a metric for the “closeness” of 2 text strings.
- Transforming discrete data into binary features.
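For instance, here is a quick Python sketch of both ideas (the example strings and categories are made up):

```python
def hamming(s, t):
    # Hamming distance: the number of positions where two equal-length strings differ.
    if len(s) != len(t):
        raise ValueError("Hamming distance requires equal-length strings")
    return sum(a != b for a, b in zip(s, t))

def one_hot(value, categories):
    # Transform a discrete value into binary (0/1) features, one per category.
    return [1 if value == c else 0 for c in categories]

print(hamming("karolin", "kathrin"))               # 3
print(one_hot("green", ["red", "green", "blue"]))  # [0, 1, 0]
```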


**How does kNN classify new data when neighbors disagree?** kNN has an easy time when all neighbors are the same class. The intuition is if all the neighbors agree, then the new data point likely falls in the same class.

I’ll bet you can guess where things get hairy…

How does kNN decide the class when neighbors don’t have the same class?

2 common techniques for dealing with this are:

- Take a simple majority vote from the neighbors. Whichever class has the greatest number of votes becomes the class for the new data point.
- Take a similar vote except give a heavier weight to those neighbors that are closer. A simple way to do this is to use reciprocal distance e.g. if the neighbor is 5 units away, then weight its vote 1/5. As the neighbor gets further away, the reciprocal distance gets smaller and smaller… exactly what we want!
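Both steps (find the k nearest neighbors, then vote) fit in a few lines of Python. Here is a bare-bones sketch on a made-up 2-D dataset, showing both simple and distance-weighted voting:

```python
import math
from collections import Counter

# Labeled training data: (features, class) pairs. Points are made up for illustration.
train = [((1.0, 1.0), "A"), ((1.5, 2.0), "A"), ((3.0, 4.0), "B"),
         ((5.0, 7.0), "B"), ((3.5, 5.0), "B"), ((4.5, 5.0), "B"),
         ((3.5, 4.5), "B")]

def euclidean(p, q):
    # Euclidean distance between two points of equal dimension.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def knn_classify(train, point, k=3, weighted=False):
    # Step 1: find the k nearest labeled neighbors.
    neighbors = sorted(train, key=lambda item: euclidean(item[0], point))[:k]
    votes = Counter()
    for features, label in neighbors:
        if weighted:
            # Give closer neighbors a heavier vote via reciprocal distance.
            votes[label] += 1.0 / (euclidean(features, point) + 1e-9)
        else:
            votes[label] += 1  # simple majority vote
    # Step 2: the class with the most votes wins.
    return votes.most_common(1)[0][0]

print(knn_classify(train, (2.5, 3.0), k=3))                 # B
print(knn_classify(train, (2.5, 3.0), k=3, weighted=True))  # B
```

Note that all the real work happens at classification time; "training" is nothing more than storing the list, which is exactly what makes kNN a lazy learner.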

**Is this supervised or unsupervised?** This is supervised learning, since kNN is provided a labeled training dataset.

**Why use kNN?** Ease of understanding and implementing are 2 of the key reasons to use kNN. Depending on the distance metric, kNN can be quite accurate.

But that’s just part of the story…

Here are 5 things to watch out for:

- kNN can get very computationally expensive when trying to determine the nearest neighbors on a large dataset.
- Noisy data can throw off kNN classifications.
- Features with a larger range of values can dominate the distance metric relative to features that have a smaller range, so feature scaling is important.
- Since data processing is deferred, kNN generally has greater storage requirements than eager classifiers.
- Selecting a good distance metric is crucial to kNN’s accuracy.
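On the feature-scaling point: a common fix is min-max scaling, which rescales each feature to [0, 1] before computing distances. A minimal sketch (the height values are made up):

```python
def min_max_scale(column):
    # Rescale one feature column to [0, 1] so features with large ranges
    # don't dominate the distance metric.
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]  # a constant feature carries no information
    return [(v - lo) / (hi - lo) for v in column]

heights_cm = [150, 160, 170, 180, 190]  # hypothetical feature values
print(min_max_scale(heights_cm))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```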

**Where is it used?** A number of kNN implementations exist:

- MATLAB k-nearest neighbor classification
- scikit-learn KNeighborsClassifier
- k-Nearest Neighbour Classification in R

Spam? Fuhgeddaboudit! Read ahead to learn about the next algorithm…

**9. Naive Bayes**

**What does it do?** Naive Bayes is not a single algorithm, but a family of classification algorithms that share one common assumption:

Every feature of the data being classified is independent of all other features given the class.

**What does independent mean?** 2 features are independent when the value of one feature has no effect on the value of another feature.

For example:

Let’s say you have a patient dataset containing features like pulse, cholesterol level, weight, height and zip code. All of the features would be independent if no feature’s value had any effect on another. For this dataset, it’s reasonable to assume that the patient’s height and zip code are independent, since a patient’s height has little to do with their zip code.

But let’s not stop there: are the other features independent?

Sadly, the answer is no. Here are 3 feature relationships which are not independent:

- If height increases, weight likely increases.
- If cholesterol level increases, weight likely increases.
- If cholesterol level increases, pulse likely increases as well.

In my experience, the features of a dataset are generally not all independent.

And that ties in with the next question…

**Why is it called naive?** The assumption that all features of a dataset are independent is precisely why it’s called naive — it’s generally not the case that all features are independent.

**What’s Bayes?** Thomas Bayes was an English statistician after whom Bayes’ Theorem is named.

In a nutshell, the theorem allows us to predict the class given a set of features using probability.

The simplified equation for classification looks something like this:

P(Class A | Feature 1, Feature 2) = [P(Feature 1 | Class A) × P(Feature 2 | Class A) × P(Class A)] / [P(Feature 1) × P(Feature 2)]

Let’s dig deeper into this…

**What does the equation mean?** The equation finds the probability of Class A given Features 1 and 2. In other words, if you see Features 1 and 2, this is the probability the data is Class A.

The equation reads: The probability of Class A given Features 1 and 2 is a fraction.

- The fraction’s numerator is the probability of Feature 1 given Class A multiplied by the probability of Feature 2 given Class A multiplied by the probability of Class A.
- The fraction’s denominator is the probability of Feature 1 multiplied by the probability of Feature 2.

**What is an example of Naive Bayes?** Below is a great example taken from a Stack Overflow thread.

Here’s the deal:

- We have a training dataset of 1,000 fruits.
- The fruit can be a Banana, Orange or Other (these are the classes).
- The fruit can be Long, Sweet or Yellow (these are the features).

What do you see in this training dataset?

- Out of 500 bananas, 400 are long, 350 are sweet and 450 are yellow.
- Out of 300 oranges, none are long, 150 are sweet and 300 are yellow.
- Out of the remaining 200 fruit, 100 are long, 150 are sweet and 50 are yellow.

If we are given the length, sweetness and color of a fruit (without knowing its class), we can now calculate the probability of it being a banana, orange or other fruit.

Suppose we are told the unknown fruit is **long**, **sweet** and **yellow**.

Here’s how we calculate all the probabilities in 4 steps:

**Step 1:** To calculate the probability the fruit is a banana, let’s first recognize that this looks familiar. It’s the probability of the class Banana given the features Long, Sweet and Yellow, or more succinctly:

P(Banana | Long, Sweet, Yellow)

This is exactly like the equation discussed earlier.

**Step 2:** Starting with the numerator, let’s plug everything in:

- P(Long | Banana) = 400/500 = 0.8
- P(Sweet | Banana) = 350/500 = 0.7
- P(Yellow | Banana) = 450/500 = 0.9
- P(Banana) = 500/1,000 = 0.5

Multiplying everything together (as in the equation), we get:

0.8 × 0.7 × 0.9 × 0.5 = 0.252

**Step 3:** Ignore the denominator, since it’ll be the same for all the other calculations.

**Step 4:** Do a similar calculation for the other classes:

- Orange: 0/300 × 150/300 × 300/300 × 0.3 = 0
- Other: 100/200 × 150/200 × 50/200 × 0.2 = 0.01875

Since 0.252 (Banana) is greater than 0.01875 (Other) and 0 (Orange), Naive Bayes would classify this long, sweet and yellow fruit as a banana.
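The whole worked example fits in a few lines of Python. The counts below come straight from the training set described above, and the function computes the numerator of the equation for each class:

```python
# Frequency table straight from the 1,000-fruit training set: class -> counts.
counts = {
    "Banana": {"total": 500, "Long": 400, "Sweet": 350, "Yellow": 450},
    "Orange": {"total": 300, "Long": 0,   "Sweet": 150, "Yellow": 300},
    "Other":  {"total": 200, "Long": 100, "Sweet": 150, "Yellow": 50},
}
n_fruits = 1000

def naive_bayes_score(cls, features):
    # Numerator of the equation: P(Feature | Class) for each feature,
    # multiplied by the class prior P(Class). The shared denominator
    # P(Features) is ignored since it doesn't change the ranking.
    score = counts[cls]["total"] / n_fruits
    for f in features:
        score *= counts[cls][f] / counts[cls]["total"]
    return score

features = ["Long", "Sweet", "Yellow"]
scores = {cls: naive_bayes_score(cls, features) for cls in counts}
print(max(scores, key=scores.get))  # Banana
```

Running it reproduces the hand calculation: roughly 0.252 for Banana, 0 for Orange and 0.01875 for Other.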

**Is this supervised or unsupervised?** This is supervised learning, since Naive Bayes is provided a labeled training dataset in order to construct the tables.

**Why use Naive Bayes?** As you can see in the example above, Naive Bayes involves simple arithmetic. It’s just tallying up counts, multiplying and dividing.

Once the frequency tables are calculated, classifying an unknown fruit just involves calculating the probabilities for all the classes, and then choosing the highest probability.

Despite its simplicity, Naive Bayes can be surprisingly accurate. For example, it’s been found to be effective for spam filtering.

**Where is it used?** Implementations of Naive Bayes can be found in Orange, scikit-learn, Weka and R.

Finally, check out the 10th algorithm…

**10. CART**

**What does it do?** CART stands for classification and regression trees. It is a decision tree learning technique that outputs either classification or regression trees. Like C4.5, CART is a classifier.

**Is a classification tree like a decision tree?** A classification tree is a type of decision tree. The output of a classification tree is a class.

For example, given a patient dataset, you might attempt to predict whether the patient will get cancer. The class would either be “will get cancer” or “won’t get cancer.”

**What’s a regression tree?** Unlike a classification tree which predicts a class, regression trees predict a numeric or continuous value e.g. a patient’s length of stay or the price of a smartphone.

Here’s an easy way to remember…

Classification trees output classes, regression trees output numbers.

Since we’ve already covered how decision trees are used to classify data, let’s jump right into things…

**How does this compare with C4.5?**

| C4.5 | CART |
|---|---|
| Uses information gain to segment data during decision tree generation. | Uses Gini impurity (not to be confused with the Gini coefficient). A good discussion of the differences between the impurity and the coefficient is available on Stack Overflow. |
| Uses a single-pass pruning process to mitigate over-fitting. | Uses the cost-complexity method of pruning. Starting at the bottom of the tree, CART evaluates the misclassification cost with the node vs. without the node. If the cost doesn’t meet a threshold, the node is pruned away. |
| The decision nodes can have 2 or more branches. | The decision nodes have exactly 2 branches. |
| Probabilistically distributes missing values to children. | Uses surrogates to distribute the missing values to children. |
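To make the Gini impurity idea concrete, here’s a small sketch of how CART scores a candidate binary split: compute the impurity of each child node and take the weighted average. The patient labels are made up for illustration:

```python
def gini(labels):
    # Gini impurity: 1 minus the sum of squared class proportions.
    # 0 means a pure node (all one class); higher means more mixed.
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

# A candidate binary split of 10 hypothetical patient labels:
left = ["cancer", "cancer", "no", "cancer"]       # left branch
right = ["no", "no", "no", "cancer", "no", "no"]  # right branch
parent = left + right

# CART prefers the candidate split whose weighted child impurity is lowest.
n = len(parent)
weighted = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
print(round(gini(parent), 3), round(weighted, 3))  # 0.48 0.317
```

Here the split lowers the impurity from 0.48 to about 0.317, so it would beat any candidate split that leaves the children more mixed.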

**Is this supervised or unsupervised?** CART is a supervised learning technique, since it is provided a labeled training dataset in order to construct the classification or regression tree model.

**Why use CART?** Many of the reasons you’d use C4.5 also apply to CART, since they are both decision tree learning techniques. Things like ease of interpretation and explanation apply to CART as well.

Like C4.5, CART is also quite fast, quite popular, and its output is human readable.

**Where is it used?** scikit-learn implements CART in their decision tree classifier. R’s tree package has an implementation of CART. Weka and MATLAB also have implementations.


### Now it’s your turn…

Now that I’ve shared my thoughts and research around these data mining algorithms, **I want to turn it over to you.**

- Are you going to give data mining a try?
- Which data mining algorithms have you heard of but weren’t on the list?
- Or maybe you have a question about an algorithm?

Let me know what you think by leaving a comment.

**Bio: Raymond Li** is a software engineer and data enthusiast who has been blogging at rayli.net for over a decade. He loves to learn, teach and grow. You’ll usually find him wrangling data, programming and lifehacking. Like what you've read? If so, subscribe via email to get started now!