# Top 10 Data Mining Algorithms, Explained

Top 10 data mining algorithms, selected by top researchers, are explained here, including what they do, the intuition behind each algorithm, available implementations, why you'd use them, and interesting applications.

**4. Apriori**

**What does it do?** The Apriori algorithm learns association rules and is applied to a database containing a large number of transactions.

**What are association rules?** Association rule learning is a data mining technique for learning correlations and relations among variables in a database.

**What’s an example of Apriori?** Let’s say we have a database full of supermarket transactions. You can think of a database as a giant spreadsheet where each row is a customer transaction and every column represents a different grocery item.

Here’s the best part:

By applying the Apriori algorithm, we can learn the grocery items that are purchased together, a.k.a. association rules.

The power of this is:

You can find those items that tend to be purchased together more frequently than other items — the ultimate goal being to get shoppers to buy more. Together, these items are called itemsets.

For example:

You can probably quickly see that chips + dip and chips + soda seem to frequently occur together. These are called 2-itemsets. With a large enough dataset, it will be much harder to “see” the relationships especially when you’re dealing with 3-itemsets or more. That’s precisely what Apriori helps with!

**You might be wondering how Apriori works.** Before getting into the nitty-gritty of the algorithm, you’ll need to define 3 things:

- The first is the **size** of your itemset. Do you want to see patterns for a 2-itemset, 3-itemset, etc.?
- The second is your **support**, or the number of transactions containing the itemset divided by the total number of transactions. An itemset that meets the support threshold is called a frequent itemset.
- The third is your **confidence**, or the conditional probability of some item given that you have certain other items in your itemset. A good example is: given chips in your itemset, there is a 67% confidence of having soda in the itemset as well.
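To make **support** and **confidence** concrete, here's a toy computation in Python. The transactions and item names are invented for illustration, matching the chips/soda example above:

```python
# Toy supermarket transactions (invented for illustration).
transactions = [
    {"chips", "dip", "soda"},
    {"chips", "soda"},
    {"chips", "dip"},
    {"bread", "milk"},
]

def support(itemset, transactions):
    """Fraction of transactions that contain every item in the itemset."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Conditional probability of the consequent given the antecedent."""
    joint = set(antecedent) | set(consequent)
    return support(joint, transactions) / support(antecedent, transactions)

print(support({"chips", "soda"}, transactions))       # 0.5
print(confidence({"chips"}, {"soda"}, transactions))  # ≈ 0.67
```

With this toy data, {chips, soda} appears in 2 of 4 transactions (support 0.5), and 2 of the 3 transactions containing chips also contain soda, giving the 67% confidence mentioned above.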

The basic Apriori algorithm is a 3-step approach:

1. **Join.** Scan the whole database for how frequent 1-itemsets are.
2. **Prune.** Those itemsets that satisfy the **support** and **confidence** thresholds move on to the next round for 2-itemsets.
3. **Repeat.** This is repeated for each itemset level until we reach our previously defined **size**.
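The join/prune/repeat loop can be sketched in a few lines of Python. This is a simplified illustration, not a production implementation: it prunes on support only (confidence is normally applied later, when deriving rules from the frequent itemsets), and the join step here naively unions pairs rather than using the classic prefix-join optimization:

```python
def apriori(transactions, min_support, max_size):
    """Simplified Apriori sketch: find all frequent itemsets up to max_size."""
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    # Level 1: scan the whole database for frequent 1-itemsets.
    items = {item for t in transactions for item in t}
    frequent = [{frozenset([i]) for i in items if support(frozenset([i])) >= min_support}]

    for k in range(2, max_size + 1):
        prev = frequent[-1]
        # Join: combine frequent (k-1)-itemsets into candidate k-itemsets.
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        # Prune: keep only candidates that meet the support threshold.
        level = {c for c in candidates if support(c) >= min_support}
        if not level:
            break
        frequent.append(level)

    return [s for level in frequent for s in level]
```

Run on the toy supermarket data with `min_support=0.5` and `max_size=2`, this returns the frequent 1-itemsets plus the 2-itemsets {chips, dip} and {chips, soda}.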

**Is this supervised or unsupervised?** Apriori is generally considered an unsupervised learning approach, since it’s often used to discover or mine for interesting patterns and relationships.

But wait, there’s more…

Apriori can also be modified to do classification based on labelled data.

**Why use Apriori?** Apriori is well understood, easy to implement, and has many derivatives.

On the other hand…

The algorithm can be quite memory, space and time intensive when generating itemsets.

**Where is it used?** Plenty of implementations of Apriori are available. Some popular ones are ARtool, Weka, and Orange.

The next algorithm was the most difficult for me to understand. Take a look…

**5. EM**

**What does it do?** In data mining, expectation-maximization (EM) is generally used as a clustering algorithm (like k-means) for knowledge discovery.

In statistics, the EM algorithm iterates and optimizes the likelihood of seeing observed data while estimating the parameters of a statistical model with unobserved variables.

OK, hang on while I explain…

I’m not a statistician, so hopefully my simplification is both correct and helps with understanding.

Here are a few concepts that will make this way easier…

**What’s a statistical model?** I see a model as something that describes how observed data is generated. For example, the grades for an exam could fit a bell curve, so the assumption that the grades are generated via a bell curve (a.k.a. normal distribution) is the model.

**Wait, what’s a distribution?** A distribution represents the probabilities for all measurable outcomes. For example, the grades for an exam could fit a normal distribution. This normal distribution represents all the probabilities of a grade.

In other words, given a grade, you can use the distribution to determine how many exam takers are expected to get that grade.

**Cool, what are the parameters of a model?** A parameter describes a distribution which is part of a model. For example, a bell curve can be described by its **mean** and **variance**.

Using the exam scenario, the distribution of grades on an exam (the measurable outcomes) followed a bell curve (this is the distribution). The mean was **85** and the variance was **100**.

So, all you need to describe a normal distribution are 2 parameters:

- The mean
- The variance
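As a quick sketch, those two parameters fully determine the curve. Using the exam example's mean of 85 and variance of 100, the density can be computed directly (the formula below is the standard normal density; the grade values are just illustrations):

```python
import math

def normal_pdf(x, mean, var):
    """Density of a normal (bell curve) distribution at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# With mean 85 and variance 100, grades near the mean are the most likely:
print(normal_pdf(85, 85, 100))  # ≈ 0.0399 (the peak of the bell curve)
print(normal_pdf(65, 85, 100))  # ≈ 0.0054 (two standard deviations away)
```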

**And likelihood?** Going back to our previous bell curve example… suppose we have a bunch of grades and are told the grades follow a bell curve. However, we’re not given all the grades… only a sample.

Here’s the deal:

We don’t know the mean or variance of all the grades, but we can estimate them using the sample. The likelihood is the probability that the bell curve with the estimated mean and variance generates that set of grades.

In other words, given a set of measurable outcomes, let’s estimate the parameters. Using these estimated parameters, the hypothetical probability of the outcomes is called likelihood.

Remember, it’s the hypothetical probability of the **existing grades**, not the probability of a **future grade**.
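Here's a small sketch of that idea: estimate the mean and variance from a hypothetical sample of grades, then score how probable that bell curve makes the sample. The sample values are invented, and the log-likelihood is used (as real implementations do) to avoid multiplying tiny probabilities:

```python
import math

grades = [72, 85, 91, 80, 88]  # a hypothetical sample of exam grades

# Estimate the parameters from the sample.
mean = sum(grades) / len(grades)
var = sum((g - mean) ** 2 for g in grades) / len(grades)

def normal_pdf(x, mean, var):
    """Density of a normal distribution at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# The log-likelihood: how probable the existing grades are
# under the bell curve with the estimated parameters.
log_likelihood = sum(math.log(normal_pdf(g, mean, var)) for g in grades)
print(mean, var)  # 83.2 44.56
```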

You’re probably wondering, what’s **probability** then?

Using the bell curve example, suppose we know the mean and variance. Then we’re told the grades follow a bell curve. The chance that we observe certain grades and how often they are observed is the probability.

In more general terms, given the parameters, let’s estimate what outcomes should be observed. That’s what probability does for us.

**Great! Now, what’s the difference between observed and unobserved data?** Observed data is the data that you saw or recorded. Unobserved data is data that is missing. There are a number of reasons that the data could be missing (not recorded, ignored, etc.).

Here’s the kicker:

For data mining and clustering, what’s important to us is looking at the class of a data point as missing data. We don’t know the class, so interpreting missing data this way is crucial for applying EM to the task of clustering.

**Once again:** The EM algorithm iterates and optimizes the likelihood of seeing observed data while estimating the parameters of a statistical model with unobserved variables. Hopefully, this is way more understandable now.

The best part is…

By optimizing the likelihood, EM generates an awesome model that assigns class labels to data points — sounds like clustering to me!

**How does EM help with clustering?** EM begins by making a guess at the model parameters.

Then it follows an iterative 3-step process:

1. **E-step:** Based on the model parameters, calculate the probabilities for assigning each data point to a cluster.
2. **M-step:** Update the model parameters based on the cluster assignments from the E-step.
3. **Repeat:** Iterate until the model parameters and cluster assignments stabilize (a.k.a. convergence).
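The loop above can be sketched for the simplest interesting case: a mixture of two 1-D Gaussians. This is an illustrative toy, not a production implementation; real libraries like Weka or scikit-learn handle multiple dimensions, more clusters, and proper convergence checks:

```python
import math

def normal_pdf(x, mean, var):
    """Density of a normal distribution at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_two_gaussians(data, n_iter=50):
    """Toy EM for a mixture of two 1-D Gaussians."""
    # Begin by making a guess at the model parameters.
    m1, m2 = min(data), max(data)
    v1 = v2 = (max(data) - min(data)) ** 2 / 4 or 1.0
    w = 0.5  # mixing weight of cluster 1

    for _ in range(n_iter):
        # E-step: probability that each data point belongs to cluster 1.
        resp = [
            w * normal_pdf(x, m1, v1)
            / (w * normal_pdf(x, m1, v1) + (1 - w) * normal_pdf(x, m2, v2))
            for x in data
        ]
        # M-step: update the model parameters from the soft assignments.
        r1 = sum(resp)
        r2 = len(data) - r1
        m1 = sum(r * x for r, x in zip(resp, data)) / r1
        m2 = sum((1 - r) * x for r, x in zip(resp, data)) / r2
        v1 = sum(r * (x - m1) ** 2 for r, x in zip(resp, data)) / r1 or 1e-6
        v2 = sum((1 - r) * (x - m2) ** 2 for r, x in zip(resp, data)) / r2 or 1e-6
        w = r1 / len(data)

    return (m1, v1), (m2, v2), w
```

On two well-separated groups of points, the estimated means settle near each group's center, and the responsibilities from the E-step act as soft cluster assignments.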

**Is this supervised or unsupervised?** Since we do not provide labeled class information, this is unsupervised learning.

**Why use EM?** A key selling point of EM is that it’s simple and straightforward to implement. In addition, not only can it optimize for model parameters, it can also iteratively make guesses about missing data.

This makes it great for clustering and generating a model with parameters. Knowing the clusters and model parameters, it’s possible to reason about what the clusters have in common and which cluster new data belongs to.

EM is not without weaknesses though…

- First, EM is fast in the early iterations, but slow in the later iterations.
- Second, EM doesn’t always find the optimal parameters and gets stuck in local optima rather than global optima.

**Where is it used?** The EM algorithm is available in Weka. R has an implementation in the mclust package. scikit-learn also has an implementation in its gmm module.

What data mining does Google do? Take a look…

**6. PageRank**

**What does it do?** PageRank is a link analysis algorithm designed to determine the relative importance of some object linked within a network of objects.

**Yikes… what’s link analysis?** It’s a type of network analysis that explores the associations (a.k.a. links) among objects.

**Here’s an example:** The most prevalent example of PageRank is Google’s search engine. Although their search engine doesn’t solely rely on PageRank, it’s one of the measures Google uses to determine a web page’s importance.

Let me explain:

Web pages on the World Wide Web link to each other. If rayli.net links to a web page on CNN, a vote is added for the CNN page indicating rayli.net finds the CNN web page relevant.

And it doesn’t stop there…

rayli.net’s votes are in turn weighted by rayli.net’s own importance and relevance. In other words, any web page that votes for rayli.net increases rayli.net’s relevance.

The bottom line?

This concept of voting and relevance is PageRank. rayli.net’s vote for CNN increases CNN’s PageRank, and the strength of rayli.net’s PageRank influences how much its vote affects CNN’s PageRank.
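That voting idea can be sketched as power iteration in Python. The link graph below is invented for illustration, and this toy ignores details real implementations handle, such as pages with no outgoing links:

```python
def pagerank(links, damping=0.85, n_iter=100):
    """Toy PageRank via power iteration.

    `links` maps each page to the list of pages it links to (its "votes").
    Dangling pages (no outgoing links) are not handled in this sketch.
    """
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1 / len(pages) for p in pages}

    for _ in range(n_iter):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                # A page's vote is split among its links and
                # weighted by its own current rank.
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank

# Hypothetical toy web: rayli.net and blog.example both vote for cnn.com.
ranks = pagerank({
    "rayli.net": ["cnn.com"],
    "blog.example": ["cnn.com", "rayli.net"],
    "cnn.com": ["rayli.net"],
})
# cnn.com and rayli.net, which receive votes, end up well above blog.example.
```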

**What does a PageRank of 0, 1, 2, 3, etc. mean?** Although the precise meaning of a PageRank number isn’t disclosed by Google, we can get a sense of its relative meaning.

It’s a bit like a popularity contest. We all have a sense of which websites are relevant and popular in our minds. PageRank is just an uber elegant way to define it.

**What other applications are there of PageRank?** PageRank was specifically designed for the World Wide Web.

Think about it:

At its core, PageRank is really just a super effective way to do link analysis. The objects being linked don’t have to be web pages.

Here are 3 innovative applications of PageRank:

- Dr Stefano Allesina, from the University of Chicago, applied PageRank to ecology to determine which species are critical for sustaining ecosystems.
- Twitter developed WTF (Who-to-Follow) which is a personalized PageRank recommendation engine about who to follow.
- Bin Jiang, from The Hong Kong Polytechnic University, used a variant of PageRank to predict human movement rates based on topographical metrics in London.

**Is this supervised or unsupervised?** PageRank is generally considered an unsupervised learning approach, since it’s often used to discover the importance or relevance of a web page.

**Why use PageRank?** Arguably, the main selling point of PageRank is its robustness due to the difficulty of getting a relevant incoming link.

Simply stated:

If you have a graph or network and want to understand relative importance, priority, ranking or relevance, give PageRank a try.

**Where is it used?** The PageRank trademark is owned by Google. However, the PageRank algorithm is actually patented by Stanford University.

You might be wondering if you can use PageRank:

I’m not a lawyer, so best to check with an actual lawyer, but you can probably use the algorithm as long as it doesn’t commercially compete against Google/Stanford.

Here are 3 implementations of PageRank:

- C++ OpenSource PageRank Implementation
- Python PageRank Implementation
- igraph – The network analysis package (R)

With our powers combined, we are…