
Why MLOps Bridges ML Research and Production

Introduction
Machine learning projects often struggle to move from research experiments to real-world use. MLOps bridges that gap with repeatable steps and tooling that carry models from prototype to software users can rely on.
When you want to turn a machine learning experiment into a reliable app that runs every day.
When multiple people work on the same ML project and need to share code, data, and results.
When you need to update your ML model regularly without breaking the app.
When you want to track how your ML model performs over time and fix problems quickly.
When you want to automate testing and deployment of ML models to save time and avoid mistakes.
Commands
This command runs the ML project in the current folder using MLflow, which helps track experiments and manage model versions.
Terminal
mlflow run .
Expected Output
2024/06/01 12:00:00 INFO mlflow.projects: === Run (ID '123abc') succeeded ===
--experiment-name - Specify the experiment to log results under
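For `mlflow run .` to work, the directory needs an MLproject file describing the entry point. A minimal sketch (the project name and `train.py` script are illustrative, not from this lesson):

```yaml
# MLproject - minimal project definition (illustrative names)
name: iris-example

entry_points:
  main:
    command: "python train.py"
```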
This command starts a local server to serve the trained ML model so other apps can use it for predictions.
Terminal
mlflow models serve -m runs:/123abc/model -p 5000
Expected Output
2024/06/01 12:01:00 INFO mlflow.models: Serving model at http://127.0.0.1:5000
-m - URI of the model to serve (here, the model logged under run 123abc)
-p - Port number to listen on
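Once the server is running, other apps can request predictions over HTTP. A minimal client sketch, assuming the server above is listening on port 5000 and accepts MLflow 2.x-style JSON (an `"inputs"` key posted to `/invocations`):

```python
# Query a model served by `mlflow models serve` over its REST API.
import json
import urllib.request

def build_payload(rows):
    """Wrap feature rows in the JSON shape MLflow's scoring server expects."""
    return json.dumps({"inputs": rows}).encode("utf-8")

def predict(rows, url="http://127.0.0.1:5000/invocations"):
    """POST feature rows to the scoring server and return the parsed response."""
    req = urllib.request.Request(
        url,
        data=build_payload(rows),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Two iris-like feature rows (sepal/petal measurements).
sample = [[5.1, 3.5, 1.4, 0.2], [6.7, 3.1, 4.7, 1.5]]
print(build_payload(sample).decode())
```

Keeping payload construction in its own function makes it easy to test without a live server.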
This command launches a web interface to view all your ML experiments, runs, and model versions in one place. The UI defaults to port 5000, so pass --port to avoid colliding with a model server already listening there.
Terminal
mlflow ui --port 5001
Expected Output
2024/06/01 12:02:00 INFO mlflow.ui: Running UI at http://127.0.0.1:5001
Key Concept

If you remember nothing else, remember: MLOps connects ML experiments to real apps by tracking, packaging, and serving models reliably.

Code Example
MLOps
import mlflow
import mlflow.sklearn  # explicit import keeps this working across MLflow versions
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Load data
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=42)

# Start MLflow run
with mlflow.start_run():
    model = LogisticRegression(max_iter=200)
    model.fit(X_train, y_train)
    mlflow.sklearn.log_model(model, "model")
    accuracy = model.score(X_test, y_test)
    mlflow.log_metric("accuracy", accuracy)
    print(f"Logged model with accuracy: {accuracy:.2f}")
Output
Common Mistakes
Skipping experiment tracking and running models manually.
This causes confusion about which model version is best and makes reproducing results hard.
Always use tools like MLflow to log experiments and model versions automatically.
Serving models without testing the API endpoint.
The app may fail to get predictions if the model server is not running or misconfigured.
After starting the model server, test the endpoint with sample data before integrating.
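The steps above can be sketched as a quick smoke test, assuming the scoring server started by `mlflow models serve` is running locally and exposes a `/ping` health endpoint that returns HTTP 200 when ready:

```python
# Smoke-test a served model before wiring it into an app.
import urllib.error
import urllib.request

def server_is_healthy(base_url="http://127.0.0.1:5000"):
    """Return True if the scoring server answers its health check."""
    try:
        with urllib.request.urlopen(base_url + "/ping", timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print("healthy" if server_is_healthy() else "server not reachable")
```

Running this right after starting the server catches misconfigured ports or model paths before any downstream app depends on the endpoint.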
Summary
Use MLflow commands to run experiments, serve models, and view results in a web UI.
Track every model version and metric to know which model works best.
Serve models as APIs so apps can get predictions easily and reliably.