
Docker for ML reproducibility in MLOps - Commands & Configuration

Introduction
Machine learning projects often break when the software environment changes: a different Python version or library release can alter results or cause failures. Docker solves this by packaging your ML code, its libraries, and its runtime environment into a single container image, so your work runs the same way on any machine.
When you want to share your ML model with others and ensure it runs exactly the same on their computers.
When you need to run your ML training on different machines without worrying about software differences.
When you want to keep your ML environment clean and separate from other projects on your computer.
When you want to deploy your ML model to a server or cloud and be sure it works as tested.
When you want to save the exact setup of your ML experiment for future reuse or auditing.
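Reproducibility also depends on pinning exact library versions, since `pip install numpy` can resolve to a different release on different days. A minimal requirements.txt might look like the sketch below (the packages and version numbers besides numpy are illustrative assumptions, not recommendations):

```
# requirements.txt - pin exact versions so every build installs the same packages
numpy==1.24.2
scikit-learn==1.3.0
pandas==2.0.3
```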
Config File - Dockerfile
Dockerfile
FROM python:3.10-slim

WORKDIR /app

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY . ./

CMD ["python", "train.py"]

This Dockerfile starts from a small Python 3.10 image.

It sets the working folder inside the container to /app.

It copies the requirements.txt file and installs the Python packages listed there.

Then it copies all your ML code into the container.

Finally, it runs the training script train.py when the container starts.
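Containers pin the software environment, but the training script itself can still vary run to run if it uses unseeded randomness. A minimal sketch of how a hypothetical train.py might fix its seeds (the seed value and the helper name are assumptions; real projects would also seed numpy, torch, etc., if used):

```python
import random

SEED = 42  # hypothetical fixed seed for repeatable runs


def set_seed(seed: int) -> None:
    # Seed Python's built-in RNG so random draws are repeatable.
    # Projects using numpy or torch would seed those here as well.
    random.seed(seed)


set_seed(SEED)
sample_a = [random.random() for _ in range(3)]

set_seed(SEED)
sample_b = [random.random() for _ in range(3)]

# With the same seed, both samples are identical.
print(sample_a == sample_b)  # True
```

Combined with a pinned environment, seeding makes the whole training run repeatable end to end.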

Commands
This command builds a Docker image named 'ml-reproducible' with tag '1.0' from the Dockerfile in the current folder. It packages your ML environment and code.
Terminal
docker build -t ml-reproducible:1.0 .
Expected Output
Sending build context to Docker daemon  5.12MB
Step 1/6 : FROM python:3.10-slim
 ---> 123abc456def
Step 2/6 : WORKDIR /app
 ---> Using cache
 ---> 789def012abc
Step 3/6 : COPY requirements.txt ./
 ---> Using cache
 ---> 345ghi678jkl
Step 4/6 : RUN pip install --no-cache-dir -r requirements.txt
 ---> Running in abc123def456
Collecting numpy
Installing collected packages: numpy
Successfully installed numpy-1.24.2
Removing intermediate container abc123def456
 ---> 901mno234pqr
Step 5/6 : COPY . ./
 ---> 567stu890vwx
Step 6/6 : CMD ["python", "train.py"]
 ---> Running in def789ghi012
Removing intermediate container def789ghi012
 ---> 345yz012abc
Successfully built 345yz012abc
Successfully tagged ml-reproducible:1.0
-t - Assigns a name and tag to the image for easy reference
This command runs a container from the image you built and starts the ML training script inside it. The --rm flag removes the container after it finishes.
Terminal
docker run --rm ml-reproducible:1.0
Expected Output
Training started...
Epoch 1/10 Loss: 0.45
Epoch 10/10 Loss: 0.05
Training complete.
--rm - Automatically removes the container after it stops to keep your system clean
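Because --rm deletes the container when it exits, any model files the training script writes inside it are lost too. A common pattern, shown here as a sketch (the host and container paths are assumptions), is to mount a host directory into the container so artifacts persist:

```
# Mount ./outputs on the host as /app/outputs in the container,
# so files the script writes there survive container removal.
docker run --rm -v "$(pwd)/outputs:/app/outputs" ml-reproducible:1.0
```

This keeps the clean-up benefit of --rm while preserving the training results on the host.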
This command lists all Docker images on your system so you can see the 'ml-reproducible:1.0' image you created.
Terminal
docker images
Expected Output
REPOSITORY        TAG   IMAGE ID      CREATED         SIZE
ml-reproducible   1.0   345yz012abc   2 minutes ago   150MB
Key Concept

If you remember nothing else from this pattern, remember: Docker packages your ML code and environment together so your work runs the same everywhere.

Common Mistakes
Not including all required files in the Docker image by missing COPY commands.
The container will not have your ML code or dependencies, so the training will fail.
Make sure to COPY all necessary files like your code and requirements.txt into the image.
Running the container without the --rm flag and leaving stopped containers behind.
This clutters your system with unused containers, wasting space.
Use --rm to automatically clean up containers after they finish.
Not specifying a tag when building the image, causing confusion with multiple images.
It becomes hard to know which image version you are running or updating.
Always use -t with a clear name and version tag like ml-reproducible:1.0.
Summary
Create a Dockerfile to define your ML environment and code setup.
Build a Docker image from the Dockerfile to package your ML project.
Run the Docker container to execute your ML training reproducibly.
Use docker images to verify your image is created and available.