
Don’t Just Assume That Data Are Interval Scale


Is the interval scale assumption of your data justified? Research suggests that it may not be, and that applying scale transformations often improves performance.



By Geoff Webb, Monash University.

Many machine learning algorithms make an implicit assumption that numeric data are interval scale, specifically, that a unit difference between values has the same meaning irrespective of the magnitude of those values. For example, most distance measures would rate the values 19 and 21 as being equally distant from the value 20. However, suppose the three values are measures of miles per gallon. It may be arbitrary whether the variable in question was recorded as miles per gallon or gallons per mile. Had it been expressed as the latter, the values would be 0.0526, 0.0500 and 0.0476. The value corresponding to 20 (0.0500) would be closer to the value corresponding to 21 (0.0476) than to the value corresponding to 19 (0.0526).
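To make the arithmetic concrete, here is a minimal Python sketch (the variable names are illustrative, not from the paper) that computes the distances from 20 under both representations:

```python
# Minimal sketch: how the choice of representation changes which values are "closest".
mpg = [19.0, 20.0, 21.0]          # miles per gallon
gpm = [1.0 / v for v in mpg]      # the same measurements as gallons per mile

# Distances from the middle value under each representation
d_mpg = [abs(v - mpg[1]) for v in mpg]  # [1.0, 0.0, 1.0]: 19 and 21 equidistant from 20
d_gpm = [abs(v - gpm[1]) for v in gpm]  # [0.0026, 0.0, 0.0024]: 21 is now closer to 20 than 19 is

print(d_mpg)
print(d_gpm)
```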


Our studies [1] have shown that for tasks as diverse as information retrieval and clustering, applying transformations to the data, such as replacing values by their square root or natural logarithm, often improves performance, indicating that the interval scale assumption is not justified.
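As an illustration only (not the experimental setup of [1]), the sketch below applies square-root and natural-log transformations before a distance-based method such as k-means; the synthetic skewed data and the use of scikit-learn are assumptions made for the example.

```python
# Sketch: applying simple monotone transformations before a distance-based method.
# Assumes scikit-learn is available and that X holds positive, skewed feature values.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(200, 3))  # synthetic skewed data

km_raw  = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
km_sqrt = KMeans(n_clusters=3, n_init=10, random_state=0).fit(np.sqrt(X))  # square root
km_log  = KMeans(n_clusters=3, n_init=10, random_state=0).fit(np.log(X))   # natural logarithm

# Whether a transformed run gives better clusters is an empirical question;
# the point is only that the transformation changes which points are "close".
```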

A weaker assumption than the interval scale assumption is that numeric data are ordinal. Under this assumption, order matters, but the magnitudes of differences between values are not specified. Hence, for ordinal data, we can assert that 21 is more similar to 20 than it is to 19, but not that 21 is more similar to 20 than 18 is. Our studies have shown that converting data to ranks often improves performance across a range of machine learning algorithms [1].
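A minimal sketch of the rank transformation, assuming SciPy is available and using an illustrative array:

```python
# Sketch: replace values by their ranks, discarding magnitudes of differences
# while preserving order.
import numpy as np
from scipy.stats import rankdata

x = np.array([18.0, 19.0, 20.0, 21.0, 500.0])
print(rankdata(x))   # -> [1. 2. 3. 4. 5.]; the outlier 500 no longer dominates distances
```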

However, conversion to ranks entails a significant computational overhead if a learned model is to be applied to unseen data: mapping a new value onto a rank in the training set is an O(log n) operation, where n is the training set size. In consequence, it can be advantageous to use algorithms that assume only that data are ordinal scale, as do decision trees and algorithms built on them, such as random forests.
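For illustration, one way such a mapping might be implemented (a NumPy sketch, not code from [1]) is to sort the training values once and place each unseen value by binary search, which is logarithmic in the training set size:

```python
# Sketch: map unseen values onto training-set ranks via binary search (O(log n) per value).
import numpy as np

train = np.array([3.1, 0.5, 7.2, 2.8, 5.9])
sorted_train = np.sort(train)                  # precomputed once at training time

def to_rank(value, sorted_train):
    # Position of `value` among the sorted training values (binary search).
    return np.searchsorted(sorted_train, value)

print(to_rank(4.0, sorted_train))  # -> 3: larger than three of the training values
```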

[1] Fernando, T. L., & Webb, G. I. (2016). SimUSF: an efficient and effective similarity measure that is invariant to violations of the interval scale assumption. Data Mining and Knowledge Discovery.

Bio: Geoff Webb is a Professor of Information Technology Research in the Faculty of Information Technology at Monash University, where he heads the Centre for Data Science. He is an IEEE Fellow and received the 2013 IEEE ICDM Service Award and a 2014 Australian Research Council Discovery Outstanding Researcher Award.
