How to Dockerize Any Machine Learning Application
How can you, an awesome Data Scientist, also be known as an awesome software engineer? Docker. And these 3 simple steps you can reuse for your solutions over and over again.
By Arunn Thevapalan, Senior Data Scientist at Octave, Mentor, and Writer.
Containerize and ship them, models! (Photo by Andy Li on Unsplash)
A month into my first job as a fresh graduate, the founder of our AI startup walked up to me and asked, “Arunn, I want you to be an expert in Docker. How long would you need?” I wasn't sure what Docker was, but I couldn't dodge the question. Eventually, I replied, “Two weeks, 1 sprint.”
My manager, who was also around, tried interrupting to save me, but I had already done the damage, and all I had was the next two weeks.
Looking back, I was never an expert (nor am I now!), but I learned just enough to do what was required. In this article, I will tell you what’s just enough to dockerize any machine learning web application.
What is Docker?
Docker is a tool designed to create, deploy, and run applications using containers. A container is a standardized unit of software: in simple terms, a packaged bundle of application code along with the required libraries and other dependencies. A Docker image is an executable software package that includes everything needed to run an application and becomes a container at runtime.
There were a lot of new technical terms to take in when I tried to understand Docker, but the idea is actually simple.
Think of it as getting a fresh mini Ubuntu machine. You install some packages on top of it, then add some code on top of that, and finally you execute the code to create an application. All of this happens on top of your existing machine with the operating system of your choice. All you need is to have Docker installed on it.
If you do not have Docker installed on your machine, please find instructions here to set up Docker.
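Before moving on, it can help to confirm that the Docker CLI is actually available. Here is a minimal, hypothetical check (not part of the article's app) using only the Python standard library:

```python
import shutil
import subprocess

def docker_installed() -> bool:
    """Return True if the `docker` CLI is available on PATH."""
    return shutil.which("docker") is not None

if docker_installed():
    # Print the installed Docker version string reported by the CLI.
    result = subprocess.run(["docker", "--version"], capture_output=True, text=True)
    print(result.stdout.strip())
else:
    print("Docker CLI not found; see the installation instructions above.")
```

Running `docker --version` on your terminal achieves the same thing; this is just a scripted version of that sanity check.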
Why Docker for Data Scientists?
I get it. You’re in the field of data science. You think the DevOps guys can take care of Docker. Your boss didn’t ask you to become an expert (unlike mine!).
You feel you don’t really need to understand Docker.
That’s not true, and let me tell you why.
“Not sure why it’s not working on your machine, it’s working on mine. Do you want me to have a look?”
Ever heard these words uttered at your workplace? Once you (and your team) understand Docker, nobody will ever have to utter those words again. Your code will run smoothly in Ubuntu, Windows, AWS, Azure, Google Cloud, or anywhere, as a matter of fact.
The applications you build become reproducible anywhere.
You’ll start spinning up environments much faster and distribute your applications the right way, and you’ll be saving a lot of time. You’ll (eventually) be known as a Data Scientist with software engineering best practices.
The 3 Simple Steps
As promised, I have simplified the process into 3 simple steps. Let’s consider the use case of a diabetes prediction app, which predicts the onset of diabetes based on diagnostic measurements. This will give you an understanding of how we can approach containerization in a real-world scenario.
I highly recommend you go through this article in which we build this Machine Learning App from scratch in a step-by-step process using Streamlit.
Screencast of the Diabetes Prediction App by Author.
Please have a look at this GitHub repository with the complete implementation to follow along with the example. Now that we know the context, let’s tackle our 3 steps!
1. Defining the environment
The first step is to ensure the exact required environment for the application to function properly. There are many ways to do this, but one of the simplest ideas is to define a requirements.txt file for the project.
Have a look at all the libraries used in your code and list them in a text file named requirements.txt. It’s good practice to pin the exact version of each library, which you can find by running pip freeze in the terminal of your environment. My requirements file for the diabetes prediction example looks like this:
joblib==0.16.0
numpy==1.19.1
pandas==1.1.0
pandas-profiling==2.8.0
scikit-learn==0.23.2
streamlit==0.64.0
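Pinning exact versions is what makes the environment reproducible. As a quick sanity check, here is a small hypothetical helper (not part of the original project) that flags any requirement line missing an exact `==` pin:

```python
def unpinned_requirements(lines):
    """Return requirement lines that do not pin an exact version with '=='."""
    return [
        line.strip()
        for line in lines
        if line.strip() and not line.strip().startswith("#") and "==" not in line
    ]

# The article's requirements file, with every dependency pinned:
reqs = """\
joblib==0.16.0
numpy==1.19.1
pandas==1.1.0
pandas-profiling==2.8.0
scikit-learn==0.23.2
streamlit==0.64.0
""".splitlines()

print(unpinned_requirements(reqs))  # [] -> every dependency is pinned
```

An empty list means the file is fully pinned; a loose entry like plain `numpy` would be reported.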
2. Writing the Dockerfile
The idea here is to create a file named Dockerfile that can be used to build the required virtual environment for our app to run in. Think of it as our instruction manual for building the required environment on top of any system!
Let’s write our Dockerfile for the example at hand:
FROM python:3.7
EXPOSE 8501
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD streamlit run app.py
That’s it. 6 lines of code. All in sequence. Every line builds on top of the previous one. Let’s dissect the lines.
- Every Dockerfile has to start with a FROM instruction. What follows FROM must be an existing image (either locally on your machine or from Docker Hub). Since our environment is based on Python, we use python:3.7 as our base image and eventually create a new image on top of it using this Dockerfile.
- Streamlit runs on a default port of 8501. The EXPOSE instruction documents that the container listens on this particular port; the actual mapping to the host machine happens later, at run time.
- WORKDIR sets the working directory for the application. The rest of the commands will be executed from this path.
- The COPY command copies all of the files from the Docker client’s current directory (the build context) to the working directory of the image.
- The RUN command ensures that the libraries we defined in requirements.txt are installed appropriately.
- CMD specifies what command to run within the container as it starts. Hence, streamlit run app.py ensures that the Streamlit app runs as soon as the container has spun up.
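One refinement worth knowing about, sketched here as a variation on the Dockerfile above (not required for the example to work): copying requirements.txt and installing dependencies before copying the rest of the code lets Docker cache the expensive pip install layer, so rebuilds after a code change are much faster.

```dockerfile
FROM python:3.7
EXPOSE 8501
WORKDIR /app

# Copy only the requirements first, so this layer (and the pip install
# below) stays cached until requirements.txt itself changes.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Code changes only invalidate the layers from this point onward.
COPY . .
CMD streamlit run app.py
```

The behavior of the resulting container is identical; only the build caching improves.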
Writing Dockerfiles takes some practice, and you can’t possibly master all of the commands available unless you spend a lot of time with Docker. I recommend getting comfortable with some basic commands and referring to Docker's official documentation for everything else.
3. Building the image
Now that we have defined the Dockerfile, it’s time to build the image. The image we create is a reproducible environment, independent of the underlying system.
docker build --tag app:1.0 .
As the name suggests, the build command builds the image layer by layer as defined in the Dockerfile. It’s always good practice to tag an image with a name and version number in the form <name>:<version>.
The dot at the end signifies the path for the Dockerfile, which is the current directory.
Wait, I built the image, but what do I do with it? Depending on your requirements, you can share the built image on DockerHub, deploy it on the cloud, and so on. But first, run the image to get a running container.
As the name suggests, the run command runs the specified image as a container on the host machine. --publish 8501:8501 maps port 8501 of the container to port 8501 of the host machine, while -it is needed for running interactive processes (like a shell/terminal).
docker run --publish 8501:8501 -it app:1.0
Now follow the link prompted on your terminal to see the magic yourself! ;)
You did it! (Photo by Nghia Le on Unsplash)
Original. Reposted with permission.
Bio: Arunn Thevapalan is a Senior Data Scientist based in Sri Lanka with a mission to inspire enthusiasts to break in and grow in the world of data science by sharing learnings, approaches, and everything about the journey to become a successful data scientist.
- 5 Reasons Why Containers Will Rule Data Science
- Strategies of Docker Images Optimization
- Deploy a Machine Learning Pipeline to the Cloud Using a Docker Container