Beyond One-Hot: an exploration of categorical variables

Coding categorical variables into numbers, for example by assigning an integer to each category (ordinal coding), is something most machine learning algorithms require. Here, we explore different ways of converting a categorical variable and their effects on the dimensionality of the data.



By Will McGinnis.

In machine learning, data are king. The algorithms and models used to make predictions with the data are important, and very interesting, but ML is still subject to the idea of garbage-in-garbage-out. With that in mind, let’s look at a little subset of those input data: categorical variables.


Categorical variables (wiki) are those that represent a fixed number of possible values, rather than a continuous number.  Each value assigns the measurement to one of those finite groups, or categories.  They differ from ordinal variables in that the distance from one category to another ought to be equal regardless of which categories you compare, whereas ordinal variables have some intrinsic ordering.  As an example:

  1. Ordinal: low, medium, high
  2. Categorical: Georgia, Alabama, South Carolina, … , New York

The machine learning algorithms we will use later tend to want numbers, not strings, as their inputs, so we need some method of coding to convert them.

A quick interjection: there is one other concept that will come up frequently in this post, and that is dimensionality.  In simplistic terms, it is just the number of columns in the dataset, but it has significant downstream effects on the eventual models.  At the extremes, the “curse of dimensionality” describes how, in high-dimensional spaces, some things simply stop working properly. Even in relatively low-dimensional problems, a dataset with more dimensions requires more parameters for the model to understand, and that means more rows to reliably learn those parameters. If the number of rows in the dataset is fixed, adding extra dimensions without adding more information for the models to learn from can have a detrimental effect on the eventual model accuracy.

To circle back to the problem at hand: we want to code categorical variables into numbers, but we are concerned about this dimensionality problem.  The obvious answer is to just assign an integer to each category (we are assuming we know all of the possible categories up front).  This is called ordinal coding.  It does not add any dimensions to the problem, but implies an order to the variable that may not actually exist.
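As a minimal sketch of ordinal coding, assuming a pandas Series of string categories (the column values and names below are made up purely for illustration):

    import pandas as pd

    # A hypothetical column of nominal categories
    states = pd.Series(["Georgia", "Alabama", "New York", "Georgia"])

    # Ordinal coding: assign each distinct category an integer.
    # The data stays in a single column, but the integers imply an
    # ordering (and equal spacing) that the states do not actually have.
    codes, categories = pd.factorize(states)
    print(codes)       # [0 1 2 0]
    print(categories)  # Index(['Georgia', 'Alabama', 'New York'], dtype='object')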

Methodology

To find out how well this works, I put together a simple Python script to test different coding methods on common datasets.  First, an overview of the process:

  • We gather a dataset for a classification problem that has categorical variables
  • We use some method of coding to convert the X dataset into numeric values
  • We use scikit-learn’s cross_val_score and a BernoulliNB() classifier to generate scores for the dataset (a rough sketch of this step appears below). This is repeated 10 times for each dataset and the mean of all scores is used.
  • We store the dimensionality of the dataset, the mean score, and the time it took to code the data and generate the scores.

This is all repeated for a few different datasets from the UCI dataset repository: Mushroom, Cars, and Splice.
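A rough sketch of the scoring step, assuming X has already been converted to numeric values; the function name and the cv=5 fold count are my assumptions rather than the exact code from the repository:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import BernoulliNB

    def score_dataset(X, y, repeats=10):
        """Return the mean cross-validation score of BernoulliNB over several runs."""
        scores = []
        for _ in range(repeats):
            # cv=5 is an assumed fold count; the post only says the run is repeated 10x
            scores.append(cross_val_score(BernoulliNB(), X, y, cv=5).mean())
        return float(np.mean(scores))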

I tried 7 different encoding methods (descriptions of 4-7 are taken from the statsmodels docs):

  1. Ordinal: as described above
  2. One-Hot: one column per category, with a 1 or 0 in each cell for if the row contained that column’s category
  3. Binary: first the categories are encoded as ordinal, then those integers are converted into binary code, then the digits from that binary string are split into separate columns.  This encodes the data in fewer dimensions than one-hot, but with some distortion of the distances (see the sketch after this list).
  4. Sum: compares the mean of the dependent variable for a given level to the overall mean of the dependent variable over all the levels. That is, it uses contrasts between each of the first k-1 levels and level k. In this example, level 1 is compared to all the others, level 2 to all the others, and level 3 to all the others.
  5. Polynomial: The coefficients taken on by polynomial coding for k=4 levels are the linear, quadratic, and cubic trends in the categorical variable. The categorical variable here is assumed to be represented by an underlying, equally spaced numeric variable. Therefore, this type of encoding is used only for ordered categorical variables with equal spacing.
  6. Backward Difference: the mean of the dependent variable for a level is compared with the mean of the dependent variable for the prior level. This type of coding may be useful for a nominal or an ordinal variable.
  7. Helmert: the mean of the dependent variable for a level is compared to the mean of the dependent variable over all previous levels. Hence the name ‘reverse’ is sometimes applied to differentiate it from forward Helmert coding.
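To make the binary scheme concrete, here is a minimal sketch built from ordinal codes with pandas and numpy; the helper and column names are hypothetical, not the implementation from the repository, and a one-hot line is included for comparison:

    import numpy as np
    import pandas as pd

    def binary_encode(series):
        """Encode a categorical column as the binary digits of its ordinal codes."""
        codes, uniques = pd.factorize(series)                 # ordinal step
        n_digits = max(int(np.ceil(np.log2(len(uniques)))), 1)
        cols = {}
        for d in range(n_digits):
            # d-th binary digit of each ordinal code, least significant first
            cols[f"{series.name}_bin_{d}"] = (codes >> d) & 1
        return pd.DataFrame(cols)

    colors = pd.Series(["red", "green", "blue", "green", "red"], name="color")
    print(binary_encode(colors))   # 2 columns instead of 3
    print(pd.get_dummies(colors))  # one-hot for comparison: 3 columns

The contrast schemes (sum, polynomial, backward difference, Helmert) can be generated with patsy, the formula library used by statsmodels; a hedged example:

    from patsy import dmatrix
    from patsy.contrasts import Helmert  # Sum, Diff (backward difference) and Poly also exist

    df = pd.DataFrame({"size": ["low", "medium", "high", "medium"]})
    # Helmert contrast columns for 'size'; swap in Sum, Diff or Poly
    # for the other schemes described in the list above.
    print(dmatrix("C(size, Helmert)", df))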

 

Results

 

Mushroom

Coding                          Dimensionality   Avg. Score   Elapsed Time
Ordinal                         22               0.81         3.65
One-Hot Encoded                 117              0.81         8.19
Helmert Coding                  117              0.84         5.43
Backward Difference Coding      117              0.85         7.83
Sum Coding                      117              0.85         4.93
Polynomial Coding               117              0.86         6.14
Binary Encoded                  43               0.87         3.95

 

Cars

Coding                          Dimensionality   Avg. Score   Elapsed Time
Sum Coding                      21               0.55         1.46
Helmert Coding                  21               0.58         1.46
Ordinal                         6                0.64         1.47
One-Hot Encoded                 21               0.65         1.39
Polynomial Coding               21               0.67         1.49
Backward Difference Coding      21               0.70         1.50
Binary Encoded                  9                0.70         1.44

 

Splice

Coding                          Dimensionality   Avg. Score   Elapsed Time
Ordinal                         61               0.68         5.11
Sum Coding                      3465             0.92         25.90
Binary Encoded                  134              0.94         3.35
One-Hot Encoded                 3465             0.95         2.56

 

Conclusions

This is by no means an exhaustive study, but it seems that, with decent consistency, binary coding performs well without a significant increase in dimensionality.  Ordinal, as expected, performs consistently poorly.

If you’d like to look at the source, add or suggest new datasets, or add new coding methods, I’ve put everything (including the datasets) up on GitHub: https://github.com/wdm0006/categorical_encoding.

Feel free to either contribute directly there, or comment with suggestions here.
