ML Python · ~20 mins

Docker containerization in ML Python - ML Experiment: Train & Evaluate

Experiment - Docker containerization
Problem: You have a simple machine learning model trained with scikit-learn. You want to share it so others can run it easily without installing Python or libraries.
Current Metrics: Model trains with 90% accuracy on test data locally.
Issue: The model runs only on your machine. Others face setup issues due to different Python versions and missing packages.
Your Task
Create a Docker container that packages the model and its environment so anyone can run the model prediction with one command.
Use Python 3.12 base image.
Include all dependencies in the Dockerfile.
Provide a simple script inside the container to load the model and predict.
Solution
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train and save model
iris = load_iris()
X, y = iris.data, iris.target
model = RandomForestClassifier(random_state=42)
model.fit(X, y)
joblib.dump(model, 'model.joblib')
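Before containerizing, it can help to confirm the saved artifact actually round-trips through joblib. The sketch below (not part of the solution files) retrains the same model, reloads it from disk, and predicts on a known setosa sample:

```python
# Optional sanity check: verify the saved model reloads and predicts correctly.
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Retrain and save, mirroring the training script above
iris = load_iris()
model = RandomForestClassifier(random_state=42)
model.fit(iris.data, iris.target)
joblib.dump(model, 'model.joblib')

# Reload from disk and predict on a known setosa sample (class 0)
reloaded = joblib.load('model.joblib')
print(f'Predicted class: {reloaded.predict([[5.1, 3.5, 1.4, 0.2]])[0]}')
# → Predicted class: 0
```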

# prediction.py
import sys

import joblib

def main():
    # Expect exactly four feature values, e.g.: python prediction.py 5.1 3.5 1.4 0.2
    if len(sys.argv) != 5:
        print('Usage: python prediction.py <sepal_length> <sepal_width> <petal_length> <petal_width>')
        sys.exit(1)
    model = joblib.load('model.joblib')
    input_data = list(map(float, sys.argv[1:5]))
    prediction = model.predict([input_data])
    print(f'Predicted class: {prediction[0]}')

if __name__ == '__main__':
    main()

# Dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY model.joblib ./
COPY prediction.py ./
# Pin these to the versions used at training time so the pickled model loads cleanly
RUN pip install --no-cache-dir scikit-learn joblib
ENTRYPOINT ["python", "prediction.py"]
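With the three files above in one directory, the container can be built and used roughly like this (the image tag iris-predict is just an illustrative name):

```shell
# Build the image from the directory containing the Dockerfile,
# model.joblib, and prediction.py
docker build -t iris-predict .

# Run a prediction; the four arguments are forwarded to prediction.py
docker run --rm iris-predict 5.1 3.5 1.4 0.2
```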
Saved the trained model as 'model.joblib' using joblib.
Created a Python script 'prediction.py' to load the saved model and predict from command-line input.
Wrote a Dockerfile using python:3.12-slim base image.
Installed required packages inside the container.
Copied model and script into the container.
Set the container entrypoint to run the prediction script.
Results Interpretation

Before: The model runs only on your local machine; it reaches 90% accuracy but is hard to share.

After: The model runs inside a Docker container with the same accuracy, and anyone with Docker can run it with a single command.

Docker containerization packages your ML model and environment together, solving setup issues and making sharing and deployment easy and consistent.
Bonus Experiment
Extend the Docker container to serve the model via a simple web API using Flask.
💡 Hint
Add Flask to requirements, write a small Flask app to accept input and return prediction, expose port 5000 in Dockerfile.
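A minimal sketch of such a Flask app, following the hint above. The route name and JSON field names here are illustrative choices, not part of the original exercise; the block also trains the model if model.joblib is missing, so the sketch is self-contained:

```python
import os

import joblib
from flask import Flask, jsonify, request

# For a self-contained sketch: train and save the model if the artifact
# is not already present (mirrors the training script in the solution).
if not os.path.exists('model.joblib'):
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    iris = load_iris()
    clf = RandomForestClassifier(random_state=42)
    clf.fit(iris.data, iris.target)
    joblib.dump(clf, 'model.joblib')

app = Flask(__name__)
model = joblib.load('model.joblib')

@app.route('/predict', methods=['POST'])
def predict():
    # Expect JSON like {"features": [5.1, 3.5, 1.4, 0.2]}
    features = request.get_json()['features']
    prediction = model.predict([features])
    return jsonify({'predicted_class': int(prediction[0])})

if __name__ == '__main__':
    # Bind to 0.0.0.0 so the server is reachable from outside the container
    app.run(host='0.0.0.0', port=5000)
```

In the Dockerfile, you would then add flask to the pip install line, copy this app in place of (or alongside) prediction.py, change the entrypoint to run it, and add EXPOSE 5000.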