
Comparing experiment runs in MLOps - Commands & Configuration

Introduction
When you run machine learning experiments, you often try different settings to see which works best. Comparing experiment runs side by side helps you identify the best model quickly. Typical situations include:
When you want to see which model version has the highest accuracy after training multiple times.
When you need to compare different hyperparameter settings to choose the best combination.
When you want to track improvements over time by comparing new runs with older ones.
When you want to share results with your team to decide which model to deploy.
When you want to find out if a change in data preprocessing improved the model.
Commands
This command runs an MLflow project in the current directory with a parameter alpha set to 0.1. It starts an experiment run to track results.
Terminal
mlflow run . -P alpha=0.1
Expected Output
2024/06/01 12:00:00 INFO mlflow.projects: === Run (ID 1a2b3c4d) started ===
2024/06/01 12:00:10 INFO mlflow.projects: === Run (ID 1a2b3c4d) succeeded ===
-P - Set a parameter value for the run
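For 'mlflow run .' to accept the -P flag, the project directory needs an MLproject file that declares the parameter. A minimal sketch, assuming the training script is named train.py (the script name, project name, and default value here are illustrative, not from the original):

```yaml
# Hypothetical MLproject file; script name and defaults are assumptions
name: compare-runs-demo

entry_points:
  main:
    parameters:
      alpha: {type: float, default: 0.1}   # overridden via -P alpha=...
    command: "python train.py --alpha {alpha}"
```

With this in place, '-P alpha=0.5' substitutes 0.5 into the command template in place of the default.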
Runs the same MLflow project but with alpha set to 0.5 to compare results with the previous run.
Terminal
mlflow run . -P alpha=0.5
Expected Output
2024/06/01 12:01:00 INFO mlflow.projects: === Run (ID 5e6f7g8h) started ===
2024/06/01 12:01:10 INFO mlflow.projects: === Run (ID 5e6f7g8h) succeeded ===
-P - Set a parameter value for the run
Starts the MLflow tracking UI in your browser so you can visually compare experiment runs side by side.
Terminal
mlflow ui
Expected Output
2024/06/01 12:02:00 INFO mlflow.server: Starting MLflow UI at http://127.0.0.1:5000
Lists all experiments so you can find the experiment ID to compare runs within it. (In MLflow 2.x and later, the equivalent command is 'mlflow experiments search'.)
Terminal
mlflow experiments list
Expected Output
Experiment ID  Name
-------------  ------------
1              Default
2              MyExperiment
Lists all runs under experiment ID 1 so you can see their metrics and parameters for comparison.
Terminal
mlflow runs list --experiment-id 1
Expected Output
Run ID    Status    Start Time           Metrics
--------  --------  -------------------  -------------
1a2b3c4d  FINISHED  2024-06-01 12:00:00  accuracy=0.85
5e6f7g8h  FINISHED  2024-06-01 12:01:00  accuracy=0.90
--experiment-id - Specify which experiment's runs to list
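The comparison this listing supports can also be done in code. A minimal pure-Python sketch; the run records below are hard-coded stand-ins mirroring the listed output, not values fetched from a tracking server (in practice you would retrieve them, e.g. with mlflow.search_runs):

```python
# Hypothetical run records mirroring the 'mlflow runs list' output above;
# in a real project these would come from the MLflow tracking server.
runs = [
    {"run_id": "1a2b3c4d", "params": {"alpha": 0.1}, "metrics": {"accuracy": 0.85}},
    {"run_id": "5e6f7g8h", "params": {"alpha": 0.5}, "metrics": {"accuracy": 0.90}},
]

# Pick the run with the highest accuracy -- the core of comparing runs
best = max(runs, key=lambda r: r["metrics"]["accuracy"])
print(f"Best run: {best['run_id']} "
      f"(alpha={best['params']['alpha']}, "
      f"accuracy={best['metrics']['accuracy']:.2f})")
```

The same max-by-metric idea scales to any number of runs and any logged metric.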
Key Concept

If you remember nothing else from this pattern, remember: comparing experiment runs side by side helps you pick the best model by looking at their results clearly.

Code Example
Python
import mlflow
import random

def train_model(alpha):
    # Simulate training: accuracy depends on alpha plus a little noise
    accuracy = 0.8 + alpha * 0.2 + random.uniform(-0.05, 0.05)
    mlflow.log_param("alpha", alpha)         # record the hyperparameter
    mlflow.log_metric("accuracy", accuracy)  # record the result metric
    print(f"Run finished with accuracy: {accuracy:.3f}")

if __name__ == "__main__":
    # Start one tracked run per alpha value so the runs can be compared later
    for alpha_value in [0.1, 0.5]:
        with mlflow.start_run():
            train_model(alpha_value)
Output
Common Mistakes
Not setting different parameters for each run and then trying to compare them.
Without different parameters, runs look the same and you can't tell which setting is better.
Always set different parameters using -P flag when running experiments to create meaningful comparisons.
Not starting the MLflow UI and trying to compare runs only by reading logs.
Logs are hard to read and compare; the UI shows metrics and parameters side by side clearly.
Run 'mlflow ui' to open the tracking interface and compare runs visually.
Summary
Run experiments with different parameters using 'mlflow run . -P parameter=value' to track variations.
Start the MLflow UI with 'mlflow ui' to visually compare experiment runs side by side.
List experiments and runs using 'mlflow experiments list' and 'mlflow runs list --experiment-id' to review results.