
What is MLOps - CLI Guide

Introduction
MLOps helps teams build and run machine learning models smoothly. It solves problems like managing versions of code, data, and models so that models keep working in production. It helps most in situations like these:
When you want to track changes in your machine learning code and data automatically.
When you need to test if a new model version works better before using it in production.
When you want to deploy machine learning models so apps can use them without manual steps.
When you want to monitor model performance and update models easily over time.
When multiple people work together on machine learning projects and need to share results.
Commands
This command runs the MLflow project in the current folder. It packages the code and its environment so the run can be reproduced later.
Terminal
mlflow run .
Expected Output
2024/06/01 12:00:00 INFO mlflow.projects: === Run (ID 1234567890abcdef) started ===
2024/06/01 12:00:05 INFO mlflow.projects: === Run (ID 1234567890abcdef) succeeded ===
--experiment-name - Sets the experiment name to organize runs
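For `mlflow run .` to work, the folder needs an MLproject file describing how to run the code. A minimal sketch (the project name, `train.py` script, and `alpha` parameter here are hypothetical placeholders, not part of this guide's example):

```yaml
# MLproject - tells MLflow how to run this project reproducibly.
name: my_project

# Environment file listing the dependencies the run needs.
conda_env: conda.yaml

entry_points:
  main:
    parameters:
      # A tunable parameter with a type and a default value.
      alpha: {type: float, default: 0.1}
    command: "python train.py --alpha {alpha}"
```

With this file in place, MLflow knows which script to launch, which parameters it accepts, and which environment to recreate.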
Starts the MLflow web interface so you can see and compare your model runs in a browser.
Terminal
mlflow ui
Expected Output
2024/06/01 12:01:00 INFO mlflow.server: Starting MLflow UI at http://127.0.0.1:5000
--port - Changes the port where the UI runs
This command serves the model saved by a specific run as a web API, so other apps can send it data and get predictions back.
Terminal
mlflow models serve -m runs:/1234567890abcdef/model -p 1234
Expected Output
2024/06/01 12:02:00 INFO mlflow.models: Serving model at http://127.0.0.1:1234
-m - Specifies the model path to serve
-p - Sets the port for the model server
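Once the server is running, apps send it JSON over HTTP. A minimal sketch of building such a request with only the standard library, assuming MLflow's `dataframe_split` input format (the column names and values are illustrative placeholders):

```python
import json
import urllib.request

# Build a scoring request in MLflow's "dataframe_split" JSON format.
# Column names and values here are illustrative placeholders.
payload = json.dumps({
    "dataframe_split": {
        "columns": ["feature_a", "feature_b"],
        "data": [[1.0, 2.0]],
    }
}).encode("utf-8")

# POST to the local model server started by `mlflow models serve -p 1234`.
req = urllib.request.Request(
    "http://127.0.0.1:1234/invocations",
    data=payload,
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would return the model's predictions;
# it is left out here so the sketch runs without a live server.
```

The request goes to the `/invocations` endpoint of the server started by `mlflow models serve`.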
Key Concept

If you remember nothing else from MLOps, remember: it makes machine learning projects repeatable, trackable, and easy to share.

Common Mistakes
Not tracking data and code versions together
This causes confusion about which data and code produced a model, making results hard to reproduce.
Always log both data and code versions using tools like MLflow to keep everything linked.
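One lightweight way to keep data and code linked is to record a content hash of the dataset next to the git commit for every run. A standard-library-only sketch of the idea:

```python
import hashlib

def dataset_fingerprint(data: bytes) -> str:
    """Content hash that identifies the exact dataset used for a run."""
    return hashlib.sha256(data).hexdigest()

# Log this fingerprint (for example, as a run tag) alongside the git
# commit hash, so every model maps back to the exact code + data pair.
fp = dataset_fingerprint(b"example,rows\n1,2\n")
print(fp[:12])  # short prefix for display
```

The same input always produces the same fingerprint, so if either the data or the code changes, the recorded pair no longer matches and the mismatch is visible.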
Skipping model testing before deployment
Deploying untested models can cause apps to behave badly or give wrong predictions.
Use MLOps tools to test and compare models before deploying them to production.
Not monitoring model performance after deployment
Models can become less accurate over time if data changes, leading to poor decisions.
Set up monitoring to track model accuracy and update models when needed.
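The monitoring idea above can be sketched in plain Python: keep a baseline accuracy and flag the model for retraining when recent accuracy drops too far below it (the numbers and the 0.05 tolerance are illustrative, not a recommendation):

```python
# Minimal drift check: compare recent accuracy to a baseline and
# flag retraining when the drop exceeds a tolerance. Values are illustrative.
def needs_retraining(baseline_acc: float, recent_acc: float,
                     tolerance: float = 0.05) -> bool:
    """Return True when accuracy has degraded beyond the tolerance."""
    return (baseline_acc - recent_acc) > tolerance

# Accuracy fell from 0.92 to 0.84, a 0.08 drop past the 0.05 tolerance.
print(needs_retraining(0.92, 0.84))  # -> True
```

Real monitoring systems add scheduling and alerting around this check, but the core comparison is this simple.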
Summary
Use 'mlflow run .' to run and track machine learning projects reproducibly.
Use 'mlflow ui' to open a web interface for comparing model runs visually.
Use 'mlflow models serve' to deploy models as web services for apps to use.