Adversarial Validation, Explained
This post proposes and outlines adversarial validation, a method for selecting training examples most similar to test examples and using them as a validation set, and provides a practical scenario for its usefulness.
In this second article on adversarial validation we get to the meat of the matter: what we can do when train and test sets differ. Will we be able to make a better validation set?
The problem with training examples being different from test examples is that validation won’t be any good for comparing models. That’s because validation examples originate in the training set.
What about a more expressive model, like logistic regression with polynomial features (that is, feature interactions)? They’re easy to create with scikit-learn:
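The post doesn't show the exact settings, so here's a minimal sketch of such a pipeline; the degree and the choice of scaler are illustrative assumptions.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, MinMaxScaler
from sklearn.linear_model import LogisticRegression

# degree=2 adds pairwise interactions and squared terms.
# We scale after expanding features, not before.
poly_lr = make_pipeline(
    PolynomialFeatures(degree=2),
    MinMaxScaler(),
    LogisticRegression(),
)
```

Fitting and predicting then works like any other scikit-learn estimator: `poly_lr.fit(X, y)` followed by `poly_lr.predict_proba(X_new)`.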
This pipeline looked much better in validation than plain logistic regression, and also better than the MinMaxScaler + LR combo:
So that’s a no-brainer, right? Here are the actual leaderboard scores (from the earlier round of the tournament, using AUC):
As it turns out, poly features do about as well as plain LR. Scaler + LR seems to be the best option.
We couldn’t tell that from validation, so it appears that we can’t trust it for selecting models and their parameters.
We’d like to have a validation set representative of the Numerai test set. To that end, we’ll take care to select examples for the validation set which are the most similar to the test set.
Specifically, we’ll run the distinguishing classifier in cross-validation mode, to get predictions for all training examples. Then we’ll see which training examples are misclassified as test and use them for validation.
To be more precise, we’ll choose a number of misclassified examples that the model was most certain about. It means that they look like test examples but in reality are training examples.
First, let’s try training a classifier to tell train from test, just like we did with the Santander data. Mechanics are the same, but instead of 0.5, we get 0.87 AUC, meaning that the model is able to classify the examples pretty well (at least in terms of AUC, which measures ordering/ranking).
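The mechanics look roughly like this; the data below is a synthetic stand-in (on the real Numerai data the post reports about 0.87 AUC), and the classifier settings are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(1)
X_train = rng.normal(0.0, 1.0, size=(200, 5))  # stand-in train set
X_test = rng.normal(1.0, 1.0, size=(200, 5))   # stand-in test set, shifted

# One combined set, labeled by origin: 0 = train, 1 = test.
X_both = np.vstack([X_train, X_test])
y_origin = np.r_[np.zeros(200), np.ones(200)]

# If the sets were indistinguishable, AUC would hover around 0.5.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
auc = cross_val_score(clf, X_both, y_origin, cv=5, scoring='roc_auc').mean()
print('train-vs-test AUC: %.2f' % auc)
```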
By the way, there are only about 50 training examples that random forest misclassifies as test examples (assigning probability greater than 0.5). We work with what we have and mostly care about the order, though.
Cross-validation provides predictions for all the training points. Now we’d like to sort the training points by their estimated probability of being test examples.
Validation and predictions, take two
We did the ascending sort, so for validation we take a desired number of examples from the end:
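A sketch of the sort-and-split, with stand-in arrays (`p_train` would really hold the cross-validated probabilities from the distinguishing classifier, and the validation size is an illustrative choice):

```python
import numpy as np

rng = np.random.RandomState(2)
p_train = rng.rand(1000)       # stand-in for each row's P(looks like test)
X = rng.rand(1000, 5)          # stand-in features
y = rng.randint(0, 2, 1000)    # stand-in targets

order = np.argsort(p_train)    # ascending: most test-like rows land at the end
n_val = 100

# Tail of the sort becomes validation; the rest stays for fitting.
fit_idx, val_idx = order[:-n_val], order[-n_val:]
X_fit, y_fit = X[fit_idx], y[fit_idx]
X_val, y_val = X[val_idx], y[val_idx]
```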
The current evaluation metric for the competition is log loss. We’re not using a scaler with LR anymore because the data is already scaled. We only scale after creating poly features.
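Scoring a model on that validation set is then straightforward; the split below is a synthetic stand-in for the adversarially selected one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.RandomState(3)
X = rng.rand(500, 5)
y = rng.randint(0, 2, 500)
X_fit, X_val = X[:400], X[400:]
y_fit, y_val = y[:400], y[400:]

lr = LogisticRegression()      # no scaler: the data is already scaled
lr.fit(X_fit, y_fit)
val_loss = log_loss(y_val, lr.predict_proba(X_val))
print('validation log loss: %.4f' % val_loss)
```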
Note that the differences between models in validation are pretty slim. Even so, the order is correct - we would choose the right model from the validation scores. Here's the summary of results achieved by the two models:
And the private leaderboard at the end of the May round:
As you can see, our improved validation scores translate closely into the private leaderboard scores.
Bio: Zygmunt Zając likes fresh air, holding hands, and long walks on the beach. He runs FastML.com, the most popular machine learning blog in the whole wide world. Besides a variety of entertaining articles, FastML now has a machine learning job board and a tool for visualizing datasets in 3D.