
ML lifecycle stages in MLOps - Commands & Configuration

Introduction
Machine learning projects move through a series of stages, from understanding the problem to deploying and monitoring a working model. These stages organize the work and help ensure the model solves the right problem and performs well in production.
When you want to build a model to predict customer behavior based on past data
When you need to clean and prepare data before training a model
When you want to test different models to find the best one
When you want to deploy a model so it can make predictions in a live app
When you need to monitor a model’s performance after deployment to keep it accurate
Commands
This command runs an ML project using MLflow, which helps track experiments and organize the ML lifecycle steps.
Terminal
mlflow run .
Expected Output
2024/06/01 12:00:00 INFO mlflow.projects: === Run (ID='123abc') succeeded ===
--experiment-name - Specify the experiment to track this run under
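mlflow run expects an MLproject file in the project directory that declares the environment and entry points. A minimal sketch of one (the project name, environment file, script, and parameter below are placeholders, not taken from this page):

```
name: customer-churn          # hypothetical project name
conda_env: conda.yaml         # environment spec the run is launched in
entry_points:
  main:
    parameters:
      learning_rate: {type: float, default: 0.01}   # hypothetical hyperparameter
    command: "python train.py --learning-rate {learning_rate}"
```

With a file like this in place, mlflow run . executes the main entry point and records the run under the chosen experiment.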
Starts the MLflow tracking UI so you can see your experiments, runs, and metrics in a web browser.
Terminal
mlflow ui
Expected Output
2024/06/01 12:00:05 INFO mlflow.server: Starting MLflow UI at http://127.0.0.1:5000
--port - Change the port where the UI runs
Deploys the trained model from a specific run so it can serve predictions via a REST API on port 1234.
Terminal
mlflow models serve -m runs:/123abc/model -p 1234
Expected Output
2024/06/01 12:00:10 INFO mlflow.models: Serving model at http://127.0.0.1:1234
-m - Specify the model path to serve
-p - Set the port for the model server
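Once the server is running, you can send it a prediction request over HTTP. A hedged sketch using curl, assuming a model that takes two numeric features (the column names and values here are made up for illustration):

```shell
# POST a JSON payload to the scoring server's /invocations endpoint.
# "dataframe_split" is the tabular input format accepted by recent MLflow versions.
curl -s http://127.0.0.1:1234/invocations \
  -H 'Content-Type: application/json' \
  -d '{"dataframe_split": {"columns": ["age", "monthly_spend"], "data": [[34, 120.5]]}}'
```

The server responds with the model's predictions as JSON.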
Key Concept

If you remember nothing else, remember: the ML lifecycle organizes work into clear steps from data to deployment to keep projects on track and models reliable.

Common Mistakes
Skipping data preparation before training
Poor data quality leads to bad model performance and unreliable predictions
Always clean and prepare your data carefully before training
Not tracking experiments and parameters
You lose track of what settings produced which results, making it hard to improve
Use tools like MLflow to log parameters, metrics, and models
Deploying models without testing
Untested models may fail or give wrong predictions in real use
Test models thoroughly before deployment and monitor them after
Summary
Run ML projects and track experiments with the mlflow run and mlflow ui commands.
Deploy trained models as REST APIs using mlflow models serve.
Organize ML work into stages: data prep, training, testing, deployment, and monitoring.