
Why containers make ML deployment portable in MLOps - Why It Works

Introduction
Deploying machine learning models can be tricky because different computers have different software and settings. Containers solve this by packaging the model and everything it needs into one portable unit that runs the same way everywhere. Typical situations where containers help:
When you want to share your ML model with a teammate who uses a different computer setup
When you need to move your ML model from your laptop to a cloud server without breaking it
When you want to run your ML model on different machines without reinstalling all software
When you want to keep your ML model environment consistent during testing and production
When you want to avoid conflicts between different ML projects on the same machine
Config File - Dockerfile
Dockerfile
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . ./
CMD ["python", "serve_model.py"]

This Dockerfile creates a container image for an ML model.

FROM python:3.10-slim sets the base image with Python 3.10.

WORKDIR /app sets the working folder inside the container.

COPY requirements.txt ./ copies the file listing needed Python packages.

RUN pip install --no-cache-dir -r requirements.txt installs those packages.
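For example, a minimal requirements.txt matching the build output shown later in this lesson would list the model's dependencies one per line (the numpy version comes from that output; any other packages serve_model.py imports, such as Flask, would need to be listed too):

```
# Every package the model service imports must be listed here.
numpy==1.23.1
```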

COPY . ./ copies your ML model code into the container.

CMD ["python", "serve_model.py"] runs the model serving script when the container starts.
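The lesson does not show serve_model.py itself. As a rough, dependency-free sketch of what it might contain (the expected output later suggests the real script uses Flask; the prediction logic here is only a placeholder, not a trained model):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(values):
    # Placeholder "model": scale inputs into [0, 1]. A real service
    # would load a trained model (e.g. via pickle or joblib) at startup.
    top = max(values) or 1
    return [v / top for v in values]

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        # Read the JSON request body and run the placeholder model.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(payload["input"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the service is reachable through the
    # container's mapped port, not just from inside the container.
    HTTPServer(("0.0.0.0", 5000), PredictHandler).serve_forever()
```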

Commands
This command builds a container image named 'ml-model' with tag '1.0' using the Dockerfile in the current folder. It packages your ML model and its environment.
Terminal
docker build -t ml-model:1.0 .
Expected Output
Sending build context to Docker daemon  5.12MB
Step 1/6 : FROM python:3.10-slim
 ---> 123abc456def
Step 2/6 : WORKDIR /app
 ---> Using cache
 ---> 789def012abc
Step 3/6 : COPY requirements.txt ./
 ---> Using cache
 ---> 345ghi678jkl
Step 4/6 : RUN pip install --no-cache-dir -r requirements.txt
 ---> Running in abc123def456
Collecting numpy
Installing collected packages: numpy
Successfully installed numpy-1.23.1
Removing intermediate container abc123def456
 ---> 901mno234pqr
Step 5/6 : COPY . ./
 ---> 567stu890vwx
Step 6/6 : CMD ["python", "serve_model.py"]
 ---> Running in def789ghi012
Removing intermediate container def789ghi012
 ---> 345yz012abc34
Successfully built 345yz012abc34
Successfully tagged ml-model:1.0
-t - Assigns a name and tag to the image for easy reference
This command runs the container image 'ml-model:1.0' and maps port 5000 inside the container to port 5000 on your computer, so you can access the ML model service.
Terminal
docker run -p 5000:5000 ml-model:1.0
Expected Output
 * Serving Flask app 'serve_model'
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
-p - Maps container port to host port for network access
This command sends a test request to the ML model running inside the container to get a prediction for input data [1,2,3].
Terminal
curl http://localhost:5000/predict -d '{"input": [1,2,3]}' -H 'Content-Type: application/json'
Expected Output
{"prediction": [0.5, 0.7, 0.2]}
-d - Sends data in the request body
-H - Sets the content type header to JSON
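The same request can be sent from Python using only the standard library. This is a sketch: get_prediction assumes the container started by the previous docker run command is listening on localhost:5000.

```python
import json
import urllib.request

def build_predict_request(values, url="http://localhost:5000/predict"):
    # Build the same request the curl command sends: a JSON body with
    # an "input" field and a Content-Type: application/json header.
    data = json.dumps({"input": values}).encode()
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )

def get_prediction(values):
    # Send the request and decode the JSON reply from the container.
    with urllib.request.urlopen(build_predict_request(values)) as resp:
        return json.loads(resp.read())["prediction"]
```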
Key Concept

Containers bundle your ML model and all its software so it runs the same way on any computer.

Common Mistakes
Not including all required packages in requirements.txt
The container will miss needed software, causing the model to fail when running.
List every Python package your model needs in requirements.txt before building the container.
Forgetting to map ports with -p when running the container
You won't be able to access the ML model service from your computer.
Always use -p host_port:container_port to expose the service port.
Running the container without copying the model code inside
The container won't have your model files and will fail to start the service.
Use COPY commands in the Dockerfile to include your model code.
Summary
Use a Dockerfile to package your ML model and its environment into a container image.
Build the container image with 'docker build' to create a portable package.
Run the container with 'docker run' and map ports to access the model service anywhere.