
Datmo: the Open Source tool for tracking and reproducible Machine Learning experiments


For data scientists, managing environments and experiments is hard, and the troubleshooting and lost work that follow waste time and effort. With datmo, you can track your experiments against a common standard and stop worrying about reproducing previous work.



By Shabaz Patel

As data scientists who trained models frequently in grad school and at work, we’ve faced many challenges in the model-building process. In particular, these were our biggest problems:

  1. Managing libraries: Most of us have struggled to install the magical permutation of packages needed to run our code. Sometimes it’s TensorFlow 1.9 breaking after a CUDA upgrade to 9.2; for others, it’s solving the Rubik’s cube of PyTorch, CuDNN, and GPU drivers. There is a growing number of ever-evolving frameworks and tools for building Machine Learning models, all developed independently, and managing their interactions, in short, is a huge pain.
  2. Managing experiments: What happens when the test accuracy was higher three runs ago but I forgot which hyperparameter configuration I used? Or when I’m trying to remember which version of preprocessing produced the best model from the latest batch of runs? There are countless cases where an experiment needs to be recorded along with its environment, code, data, and other metadata, but in the status quo those pieces are decoupled (see the sketch after this list).
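
To make the status quo concrete, here’s a minimal, hypothetical sketch of the ad-hoc tracking many of us fall back on: dumping each run’s hyperparameters and metrics to a timestamped JSON file. The file names and fields here are ours, not from any library. Note what’s missing: the code commit, the data version, and the environment, which is exactly the gap datmo targets.

# ad_hoc_tracking.py - hypothetical example of status-quo experiment logging
import json
import time

config = {"solver": "liblinear", "penalty": "l1"}  # this run's hyperparameters
metrics = {"test_accuracy": 0.95}                  # filled in after training

# One JSON record per run; nothing here ties the run to the exact code
# commit, data snapshot, or installed package versions.
record = {"timestamp": time.time(), "config": config, "metrics": metrics}
with open("run_%d.json" % int(time.time()), "w") as f:
    json.dump(record, f, indent=2)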

This problem has recently gained significant attention in the community; for example, here is a blog post by Pete Warden from Google.

Our Solution:

These problems gave us countless headaches, and after talking to our friends, we knew we weren’t alone. We wanted something that would not only keep track of configuration and results during an experiment but also allow data scientists to reproduce any experiment by rerunning it!

We initially built it as an internal solution for tracking our experiments, making them reproducible, and simplifying environment setup. As it grew, we strove for a tool with an open, simple interface that integrated seamlessly with the way we were already doing machine learning: generic with respect to frameworks, yet powerful enough to provide complete reproducibility. Basically, something we could give to our friends so that they could run their experiments with a few commands on the command line and still repeat them reliably.

After building and using it ourselves, we decided to provide it as an open source tool called datmo.

Here’s how datmo works!

After the initial project setup, all it takes is a single command to run the experiment, and another command at the end to analyze the results!

Running an experiment with datmo

This all seems good, but what happens when we have multiple experiments? This is where datmo really earns its keep: we can use it to compare and analyze results and to rerun previous experiments at a later point in time. Here’s how you can use datmo for that!

Rerunning a previous experiment with datmo

Now, let’s get our hands dirty with this example. With datmo, we’ve taken the complexity out while providing a way to get everything off the ground very quickly.

Quick start:

For this example, we’re going to train a simple classifier on the classic Fisher Iris dataset.

0. Prerequisites:

Let’s first make sure we have the prerequisites for datmo. Docker is the main one, so make sure Docker is installed (and running!) before starting. You can find installation instructions for your OS here: macOS, Windows, Ubuntu.
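
A quick sanity check before moving on: docker --version confirms the client is installed, and docker info talks to the daemon, so it fails if Docker isn’t actually running.

$ docker --version
$ docker info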

You can then install datmo from your terminal with the following:

 
$ pip install datmo


1. Clone this GitHub project.


2. In your project, initialize the datmo client using the CLI:

 
$ cd quick-start
$ datmo init


Then, respond to the following prompts:

 
Enter name: (up to you!)

Enter description: (up to you!)


Next, you’ll be asked if you’d like to set up your environment.

Select y and choose the following options when prompted sequentially:

 
Please select one of the above environment type: cpu

Please select one of the above environments: data-analytics

Please select one of the above environment language: py27


3. Now, run your first experiment using the following command:

 
$ datmo run 'python script.py'


Let’s see the list of all runs:

 
$ datmo ls


4. Now let’s change the script for a new run.

We’ll change the script.py file. Uncomment the following line in the script and remove the other config dictionary:

 
# config = { 'solver': 'liblinear', 'penalty': 'l1' }
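
For context, here’s a rough sketch of what a script like script.py plausibly contains; this is our illustrative guess, not the actual file from the quick-start repo. It trains a scikit-learn LogisticRegression on the Iris dataset using the config dictionary above:

# a hypothetical stand-in for the quick-start's script.py
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# the config dict is the knob we flip between runs
config = {'solver': 'liblinear', 'penalty': 'l1'}

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(**config).fit(X_train, y_train)
print('test accuracy: %.3f' % model.score(X_test, y_test))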


5. Now that we’ve updated the config in our script, let’s run another experiment:

 
$ datmo run 'python script.py'


6. Once that completes, we will have two tracked experiments, each of which can be rerun on any machine.

 
# Check the previous runs
$ datmo ls

# Select the earlier run-id to rerun the first experiment
$ datmo rerun <run-id>


Congrats, you’ve now successfully reproduced a previous experiment run!

Previously, this process meant wasted time and effort on troubleshooting and headaches! With datmo, we ran experiments, tracked them, and reran them in 4 commands. Now you can share your experiments using this common standard and not worry about reproduction, whether it’s a teammate reproducing your work or you deploying a model to production. This is obviously just a small sample, but you can go out and try other flows yourself, like spinning up a TensorFlow Jupyter notebook in 2 minutes!

Check us out on GitHub and give us your feedback at @datmoAI ✌️

Bio: Shabaz Patel is a cofounder at Datmo, building developer tools to help make data scientists more efficient. He has built and deployed Computer Vision and NLP-based algorithms in production for companies and was a researcher at the Stanford AI Lab.
