
Pipeline versioning and reproducibility in MLOps - Commands & Configuration

Introduction
When you build machine learning pipelines, you want to make sure you can run the same steps again and get the same results. Pipeline versioning helps track changes, and reproducibility ensures your work can be repeated exactly.
When you want to share your ML pipeline with a teammate and be sure they get the same results.
When you update your pipeline and want to keep the old version for comparison.
When you need to debug why a model changed by rerunning the exact same pipeline version.
When you deploy a model and want to trace back exactly how it was created.
When you automate training and want to keep track of all pipeline runs and their versions.
Commands
This command creates a new experiment in MLflow to organize pipeline runs and their versions.
Terminal
mlflow experiments create --experiment-name ml-pipeline-versioning
Expected Output
Experiment 'ml-pipeline-versioning' created with ID 1
--experiment-name - Sets the name of the experiment to group pipeline runs
Runs the current ML pipeline code and logs the run under the experiment with a specific version name.
Terminal
mlflow run . --experiment-name ml-pipeline-versioning --run-name version_1
Expected Output
2024/06/01 12:00:00 INFO mlflow.projects: === Run (ID 'abc123') succeeded ===
--experiment-name - Specifies which experiment to log the run under
--run-name - Gives a human-readable name to this pipeline version run
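Note that `mlflow run .` expects an `MLproject` file in the current directory describing the entry point and its parameters. A minimal sketch is shown below; the script name `train.py` and the `param1` parameter are illustrative assumptions, not part of the commands above:

```yaml
# MLproject (illustrative sketch)
name: ml-pipeline-versioning

python_env: python_env.yaml   # pin the environment so runs are reproducible

entry_points:
  main:
    parameters:
      param1: {type: string, default: "value1"}
    command: "python train.py --param1 {param1}"
```

Pinning the environment file alongside the entry point is what lets a teammate rerun the exact same pipeline version later.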
Shows details of the pipeline run including parameters, metrics, and artifacts to verify reproducibility.
Terminal
mlflow runs describe --run-id abc123
Expected Output
Run ID: abc123
Status: FINISHED
Parameters: {'param1': 'value1'}
Metrics: {'accuracy': 0.92}
Artifacts: model.pkl
Downloads the model artifact from the specific pipeline run to reproduce or deploy the exact model version.
Terminal
mlflow artifacts download -r abc123 -d ./downloaded_model
Expected Output
Downloaded artifacts to: ./downloaded_model
-r - Specifies the run ID to download artifacts from
-d - Sets the local directory to save the downloaded artifacts
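Once artifacts are downloaded, you can confirm you retrieved the exact version by comparing file checksums across downloads or machines. A minimal stdlib sketch; the file names here are placeholders standing in for two downloaded copies of the same artifact:

```python
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    """Checksum a file so two copies of an artifact can be compared exactly."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Illustrative: write identical bytes twice to stand in for two downloads
# of the same run's model.pkl.
Path("model_a.pkl").write_bytes(b"Model binary data")
Path("model_b.pkl").write_bytes(b"Model binary data")

print(sha256_of("model_a.pkl") == sha256_of("model_b.pkl"))  # True
```

If the checksums differ, you are not looking at the same pipeline version and should check the run ID you downloaded from.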
Key Concept

If you remember nothing else from this pattern, remember: version every pipeline run and log all inputs and outputs to reproduce results exactly.
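One way to make "reproduce results exactly" concrete is to fingerprint each run: hash everything that defines it (parameters, metrics, artifact names) so two runs can be compared byte-for-byte. A stdlib-only sketch; `run_fingerprint` is a hypothetical helper, not an MLflow API:

```python
import hashlib
import json

def run_fingerprint(params: dict, metrics: dict, artifacts: list) -> str:
    """Hash everything that defines a run so runs can be compared exactly."""
    payload = json.dumps(
        {"params": params, "metrics": metrics, "artifacts": sorted(artifacts)},
        sort_keys=True,  # stable key order -> stable hash
    )
    return hashlib.sha256(payload.encode()).hexdigest()

v1 = run_fingerprint({"param1": "value1"}, {"accuracy": 0.92}, ["model.pkl"])
v2 = run_fingerprint({"param1": "value1"}, {"accuracy": 0.92}, ["model.pkl"])
print(v1 == v2)  # identical inputs -> identical fingerprint
```

Anything you forget to log is invisible to the fingerprint, which is exactly why every input and output must be recorded in the run.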

Code Example
Python
import mlflow

mlflow.set_experiment('ml-pipeline-versioning')

# Each run is a versioned record of the pipeline: parameters, metrics, artifacts.
with mlflow.start_run(run_name='version_1') as run:
    param1 = 'value1'
    mlflow.log_param('param1', param1)       # log every input
    accuracy = 0.92
    mlflow.log_metric('accuracy', accuracy)  # log every result
    with open('model.pkl', 'wb') as f:
        f.write(b'Model binary data')        # placeholder for a real model file
    mlflow.log_artifact('model.pkl')         # log every output file

print(f"Run ID: {run.info.run_id} logged with accuracy {accuracy}")
Output
Common Mistakes
Not assigning unique run names or experiment names for different pipeline versions.
It becomes hard to track which run corresponds to which pipeline version, causing confusion.
Always use meaningful experiment and run names to clearly identify pipeline versions.
Not logging all parameters and artifacts during the pipeline run.
Without complete logs, you cannot reproduce the exact pipeline results later.
Ensure all inputs, parameters, metrics, and output files are logged in each run.
Overwriting previous runs or artifacts without version control.
You lose history and cannot compare or revert to earlier pipeline versions.
Keep each run separate and download artifacts by run ID to preserve versions.
Summary
Create an MLflow experiment to group pipeline runs by version.
Run the pipeline with unique run names to log parameters, metrics, and artifacts.
Use MLflow commands to inspect runs and download artifacts for exact reproduction.