
Performance metric tracking in MLOps - Commands & Configuration

Introduction
When you train machine learning models, you need to know how well they perform. Performance metric tracking lets you record these results and compare them across runs over time.
When you want to record the accuracy of a model after each training run.
When you need to compare different models to pick the best one.
When you want to monitor if your model's performance improves after tuning.
When you want to keep a history of metrics for auditing or reporting.
When you want to share model results with your team in a clear way.
Commands
This command installs MLflow, a tool that helps track machine learning experiments and metrics.
Terminal
pip install mlflow
Expected Output
Collecting mlflow
  Downloading mlflow-2.4.1-py3-none-any.whl (16.7 MB)
Installing collected packages: mlflow
Successfully installed mlflow-2.4.1
Starts the MLflow tracking server UI locally so you can see your saved metrics in a web browser.
Terminal
mlflow ui
Expected Output
2024/06/01 12:00:00 Starting MLflow UI server...
Listening at: http://127.0.0.1:5000
Key Concept

If you remember nothing else from this pattern, remember: saving metrics during training lets you track and improve your models over time.

Code Example
MLOps
import mlflow
import random

# Start a tracking run; the `with` block ends it automatically.
with mlflow.start_run():
    # A stand-in for a real evaluation score.
    accuracy = random.uniform(0.7, 0.9)
    # Save the value so it appears in the MLflow UI.
    mlflow.log_metric("accuracy", accuracy)
    print(f"Logged accuracy: {accuracy:.3f}")
Common Mistakes
Not calling mlflow.start_run() before logging metrics.
Without starting a run, MLflow does not know where to save the metrics, so they are lost.
Always wrap metric logging between mlflow.start_run() and mlflow.end_run(), or use a with block.
Logging metrics with inconsistent names or types.
This makes it hard to compare runs because the metrics do not align properly.
Use consistent metric names and numeric types for easy comparison.
Summary
Install MLflow to enable metric tracking.
Start the MLflow UI to view saved metrics in a browser.
Use mlflow.start_run() to begin a tracking session.
Log metrics with mlflow.log_metric() during training.
Review metrics in the MLflow UI to compare model performance.