
Reproducible training pipelines in MLOps - Commands & Configuration

Introduction
Training machine learning models often involves many steps and settings. Reproducible training pipelines let you run these steps the same way every time, so you get consistent results and can track exactly what you did. They are useful in situations like these:
When you want to train a model and be sure you can repeat the exact same process later.
When you need to compare different model versions and know exactly what changed.
When you want to share your training process with teammates so they can run it too.
When you want to automate training so it runs without manual steps.
When you want to keep track of data, code, and parameters used in training.
Commands
This command runs the ML training pipeline defined in the current directory using MLflow. MLflow reads the MLproject file in that directory, so the same steps and parameters are used every time.
Terminal
mlflow run .
Expected Output
2024/06/01 12:00:00 INFO mlflow.projects: === Created directory /tmp/mlruns for run data ===
2024/06/01 12:00:01 INFO mlflow.projects: === Running command 'python train.py --alpha 0.5 --l1_ratio 0.1' ===
Training model with alpha=0.5 and l1_ratio=0.1
Model training complete
2024/06/01 12:00:10 INFO mlflow.projects: === Run (ID 1234567890abcdef) succeeded ===
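The pipeline that 'mlflow run .' executes is declared in an MLproject file in the same directory. A minimal sketch, assuming the training script is named train.py and takes the two parameters used throughout this page (the project name and defaults here are illustrative):

```yaml
# MLproject - declares the pipeline's entry point and its parameters,
# so every run uses the same command with tracked parameter values.
name: reproducible-training

entry_points:
  main:
    parameters:
      alpha: {type: float, default: 0.5}
      l1_ratio: {type: float, default: 0.1}
    command: "python train.py --alpha {alpha} --l1_ratio {l1_ratio}"
```

Values passed with -P override the defaults; anything not passed falls back to the defaults recorded here, which is what makes the run repeatable.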
This command starts the MLflow tracking UI, which you can open in your browser. There you can inspect all runs, their parameters, and their results to compare and reproduce training.
Terminal
mlflow ui
Expected Output
2024/06/01 12:01:00 INFO mlflow.server: Starting MLflow UI at http://127.0.0.1:5000
This runs the training pipeline again but with different parameters. It shows how you can reproduce training with changes tracked.
Terminal
mlflow run . -P alpha=0.7 -P l1_ratio=0.2
Expected Output
2024/06/01 12:02:00 INFO mlflow.projects: === Running command 'python train.py --alpha 0.7 --l1_ratio 0.2' ===
Training model with alpha=0.7 and l1_ratio=0.2
Model training complete
2024/06/01 12:02:10 INFO mlflow.projects: === Run (ID abcdef1234567890) succeeded ===
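For the -P flags to reach the model, the training script has to accept matching command-line arguments. A hypothetical sketch of the argument-parsing part of train.py (the argument names must match the entry point definition; the defaults shown are assumptions):

```python
# Hypothetical sketch of how train.py might receive the parameters
# that `mlflow run . -P alpha=... -P l1_ratio=...` passes through.
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Train an ElasticNet model")
    parser.add_argument("--alpha", type=float, default=0.5)
    parser.add_argument("--l1_ratio", type=float, default=0.1)
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print(f"Training model with alpha={args.alpha} and l1_ratio={args.l1_ratio}")
```

Because every parameter arrives through a flag rather than a hard-coded value, the command line recorded by MLflow is a complete description of the run.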
-P - Set a parameter value for the run; repeat the flag to set multiple parameters
Key Concept

If you remember nothing else from this pattern, remember: reproducible pipelines let you run, track, and share model training exactly the same way every time.

Code Example
MLOps
import mlflow
import mlflow.sklearn
from sklearn.linear_model import ElasticNet
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Load data
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Parameters
alpha = 0.5
l1_ratio = 0.1

with mlflow.start_run():
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=42)
    model.fit(X_train, y_train)
    predictions = model.predict(X_test)

    mse = mean_squared_error(y_test, predictions)
    r2 = r2_score(y_test, predictions)

    mlflow.log_param("alpha", alpha)
    mlflow.log_param("l1_ratio", l1_ratio)
    mlflow.log_metric("mse", mse)
    mlflow.log_metric("r2", r2)
    mlflow.sklearn.log_model(model, "model")

    print(f"Run complete with mse={mse:.4f} and r2={r2:.4f}")
Output
Common Mistakes
Not specifying parameters explicitly when running the pipeline
This causes runs to use default or old parameters, making results inconsistent and hard to reproduce.
Always pass parameters explicitly using flags like -P to control training settings.
Not using a tracking tool like MLflow to log runs
Without tracking, you lose history of what was run and cannot compare or reproduce results easily.
Use MLflow or similar tools to log parameters, metrics, and artifacts for every run.
Changing code or data without version control
Changes outside the pipeline cause runs to differ and break reproducibility.
Keep code and data under version control and link runs to specific versions.
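One way to link runs to specific versions is to record a content hash of the data and the current git commit alongside each run. A minimal sketch with assumed helper names (the tag keys are illustrative; mlflow.set_tag accepts any key/value pair):

```python
# Minimal sketch: fingerprint the data and code so a run can be tied
# back to the exact versions that produced it. Helper names are
# illustrative, not part of MLflow.
import hashlib
import subprocess

def dataset_fingerprint(data_bytes: bytes) -> str:
    """Content hash of the training data, so any change is detectable."""
    return hashlib.sha256(data_bytes).hexdigest()

def current_git_commit() -> str:
    """Commit hash of the code that produced the run (requires a git repo)."""
    return subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()

# Inside a run you might record both, e.g.:
# with mlflow.start_run():
#     mlflow.set_tag("data_sha256", dataset_fingerprint(open("data.csv", "rb").read()))
#     mlflow.set_tag("git_commit", current_git_commit())
```

With these tags logged, the MLflow UI shows not just what parameters a run used but which code and data it ran against.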
Summary
Use 'mlflow run .' to execute your training pipeline reproducibly with tracked parameters.
Use 'mlflow ui' to view and compare all training runs and their results.
Pass parameters explicitly with '-P' flags to control training settings and keep runs consistent.