Essential Data Science Tips: How to Use One-Vs-Rest and One-Vs-One for Multi-Class Classification

Classification is a predictive modeling task that involves assigning a class label to each example. Algorithms designed for binary classification cannot be applied directly to multi-class classification problems. For such situations, heuristic methods come in handy.



By Thomas Glare, Machine Learning professional and writer.

Only some classification models support multi-class classification natively. Certain algorithms, including logistic regression and the perceptron, are designed for binary classification and do not support tasks with more than two classes. A common alternative for solving multi-class classification problems is splitting the multi-class dataset into multiple binary datasets that a binary classification model can fit.

Because algorithms built for binary classification cannot work with multi-class tasks directly, heuristic methods such as one-vs-one and one-vs-rest are used to split a multi-class problem into multiple binary datasets and train a binary classification model on each.

 

Binary vs. Multi-Class Classification

 

Classification problems are common in machine learning. In most cases, developers use a supervised machine-learning approach to predict class labels for a given dataset. Unlike regression, classification involves designing a classifier model, training it, and using it to categorize test data. Such problems can be divided into binary and multi-class tasks.

As the name suggests, binary classification involves solving a problem with only two class labels. This makes it easy to filter the data, apply classification algorithms, and train the model to predict outcomes. On the other hand, multi-class classification applies when there are more than two class labels in the training data. The technique enables developers to assign the test data to one of several class labels.

That said, while binary classification requires only one classifier model, the number needed in the multi-class approach depends on the classification technique. Below are the two main strategies for multi-class classification.

 

One-Vs-Rest Classification Model for Multi-Class Classification

 

Also known as one-vs-all, the one-vs-rest model is a heuristic method that leverages a binary classification algorithm for multi-class classification. The technique involves splitting a multi-class dataset into multiple binary classification problems, one per class. A binary classifier is then trained on each problem, and predictions are made with the model that is most confident.

For instance, a multi-class classification problem with the classes red, green, and blue can be split into binary classification problems as follows:

  • Problem one: red vs. green/blue
  • Problem two: blue vs. green/red
  • Problem three: green vs. blue/red

The main drawback of this model is that it requires one classifier for every class. The three classes above require three models, which can become a burden for large datasets with millions of rows, slow models such as neural networks, or datasets with a significant number of classes.

The one-vs-rest approach requires each model to predict a probability-like score. The class whose model produces the largest score is then taken as the prediction. As such, it is commonly used with classification algorithms that naturally predict numerical scores or class-membership probabilities, such as the perceptron and logistic regression.
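As a minimal sketch of this strategy, scikit-learn's OneVsRestClassifier can wrap a binary learner such as logistic regression; the dataset below is synthetic and the parameter values are purely illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Synthetic 3-class dataset standing in for the red/green/blue example.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_redundant=5, n_classes=3, random_state=1)

# One binary logistic regression is fit per class; prediction picks the
# class whose model reports the highest score.
ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000))
ovr.fit(X, y)
print(ovr.predict(X[:5]))
```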

 

One-Vs-One Classification Model for Multi-Class Classification

 

Like the one-vs-all model, the one-vs-one model is another heuristic method that takes advantage of a binary classification algorithm for classifying multi-class datasets. It also splits a multi-class dataset into binary classification problems. However, unlike the one-vs-rest model, which creates one binary dataset for each class against all remaining classes, the one-vs-one model creates one binary dataset for every pair of classes.

For instance, consider a multi-class problem with four classes: blue, red, green, and yellow. The one-vs-one approach splits it into the following six binary classification datasets:

  • Problem 1: red vs. green
  • Problem 2: red vs. blue
  • Problem 3: red vs. yellow
  • Problem 4: green vs. yellow
  • Problem 5: blue vs. green
  • Problem 6: blue vs. yellow

This yields more binary classification datasets than the one-vs-rest approach described earlier. The formula for calculating the total number of binary classification datasets is:

number of classes × (number of classes − 1) / 2

For the four classes above, this formula gives the expected six binary classification problems:

4 × (4 − 1) / 2 = (4 × 3) / 2 = 12 / 2 = 6
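This count is easy to verify in code; a quick sketch with Python's standard library, using the class names from the example above:

```python
from itertools import combinations
from math import comb

classes = ["blue", "red", "green", "yellow"]

# Each unordered pair of classes becomes one binary classification dataset.
for left, right in combinations(classes, 2):
    print(f"{left} vs. {right}")

# n * (n - 1) / 2 pairs; math.comb confirms the count: C(4, 2) == 6.
assert comb(len(classes), 2) == 6
```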

Each binary classification model predicts a single class label, and the one-vs-one strategy takes the class with the most votes as the final prediction. If the binary classification models predict numerical class memberships instead, such as probabilities, the class with the largest summed score is taken as the class label.
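To make the voting step concrete, here is a toy sketch; the pairwise predictions are made-up values for illustration rather than output from real models:

```python
from collections import Counter

# Hypothetical labels predicted by the six pairwise classifiers above.
pairwise_predictions = ["red", "red", "yellow", "green", "blue", "red"]

# Majority vote: the class predicted most often becomes the final label.
votes = Counter(pairwise_predictions)
print(votes.most_common(1)[0][0])  # -> 'red', with 3 of the 6 votes
```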

Essentially, this strategy is especially well suited to support vector machines and related kernel-based classification algorithms. This is because kernel methods do not scale well with the size of the training dataset, and training each classifier on a smaller subset of the data mitigates this effect.

The support vector machine implementation in scikit-learn, provided by the SVC class, supports the one-vs-one method for multi-class classification. To use it, set the decision_function_shape argument to "ovo".
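A brief sketch of that setting, again on synthetic data with illustrative parameters:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic 4-class dataset matching the blue/red/green/yellow example.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_redundant=5, n_classes=4, random_state=1)

# SVC trains one-vs-one internally; with decision_function_shape="ovo",
# the decision function has one column per pair: 4 * (4 - 1) / 2 = 6.
svc = SVC(decision_function_shape="ovo")
svc.fit(X, y)
print(svc.decision_function(X[:1]).shape)  # (1, 6)
```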

Additionally, scikit-learn provides a separate OneVsOneClassifier wrapper that makes the one-vs-one strategy usable with any binary classifier, such as the perceptron, logistic regression, or SVM, as well as with classifiers that support multi-class classification natively.
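A minimal sketch of that wrapper, here around logistic regression on the same kind of synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsOneClassifier

X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_redundant=5, n_classes=4, random_state=1)

# Fits one binary logistic regression per pair of classes (six here)
# and combines their predictions by voting.
ovo = OneVsOneClassifier(LogisticRegression(max_iter=1000))
ovo.fit(X, y)
print(ovo.predict(X[:5]))
```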

 

Bottom Line

 

As mentioned, the one-vs-rest option can make large datasets challenging to handle, since every binary model is trained on the full dataset. The one-vs-one option, however, splits the primary dataset into one binary classification problem for each pair of classes, so each model sees only a subset of the data.

Although the one-vs-rest approach struggles with very large datasets, it trains fewer classifiers, making it the faster and often preferred option. On the other hand, the one-vs-one approach is less prone to imbalance in each binary dataset, since no single class is pitted against all of the others at once.

That said, which of the two do you think provides the better approach? Share your thoughts in the comments section below.

 

Bio: Thomas Glare is a machine-learning professional who teaches developers how to get the best from modern machine-learning methods through hands-on tutorials. You can find his articles on this website.
