MLOps · DevOps · ~10 mins

MLflow setup and basics in MLOps - Step-by-Step Execution

Process Flow - MLflow setup and basics
Install MLflow
Start MLflow Tracking Server
Run MLflow Experiment
Log Parameters, Metrics, Artifacts
View Results in MLflow UI
Stop MLflow Server
This flow shows the basic steps to set up MLflow: install, start server, run experiments with logging, view results, and stop server.
Execution Sample
# Shell: install MLflow and launch the tracking UI (default port 5000)
pip install mlflow
mlflow ui

# Python: log a parameter and a metric inside a tracking run
import mlflow

with mlflow.start_run():
    mlflow.log_param("alpha", 0.5)
    mlflow.log_metric("rmse", 0.75)
This sample installs MLflow and starts the UI server in a terminal, then logs one parameter and one metric from a Python script. By default, the logs are written to a local ./mlruns directory, which the UI reads.
Process Table
Step | Action | Command/Code | Result/Output
1 | Install MLflow | pip install mlflow | MLflow installed successfully
2 | Start MLflow UI server | mlflow ui | MLflow UI running at http://localhost:5000
3 | Import MLflow in script | import mlflow | MLflow module loaded
4 | Start MLflow run | mlflow.start_run() | MLflow tracking run started
5 | Log parameter | mlflow.log_param("alpha", 0.5) | Parameter 'alpha' logged with value 0.5
6 | Log metric | mlflow.log_metric("rmse", 0.75) | Metric 'rmse' logged with value 0.75
7 | View experiment in UI | Open http://localhost:5000 | Logged params and metrics visible in UI
8 | Stop MLflow UI server | Ctrl+C in terminal | MLflow UI server stopped
💡 MLflow UI server stopped by user, ending session
Status Tracker
Variable | Start | After Step 5 | After Step 6 | Final
alpha | undefined | 0.5 | 0.5 | 0.5
rmse | undefined | undefined | 0.75 | 0.75
Key Moments - 3 Insights
Why do we need to start the MLflow UI server separately?
The MLflow UI server (step 2) runs as a separate process to show experiment results in a web browser. Without starting it, you cannot view logged data visually.
What happens if you log a parameter or metric without starting the MLflow server?
Logging still works: by default, MLflow writes parameters and metrics (steps 5 and 6) to a local ./mlruns directory even when no server is running. You just won't see them in the UI until the server is started and reading from that same tracking location.
Can you log multiple parameters and metrics in one run?
Yes, you can log many parameters and metrics in one experiment run. Each call adds data to the current run, as shown in steps 5 and 6.
Visual Quiz - 3 Questions
Test your understanding
Looking at the execution table, what is the output after running mlflow.log_param("alpha", 0.5)?
A. MLflow module loaded
B. MLflow UI running at http://localhost:5000
C. Parameter 'alpha' logged with value 0.5
D. Metric 'rmse' logged with value 0.75
💡 Hint
Check Step 5 in the execution table for the result of logging a parameter.
At which step does the MLflow UI server start running?
A. Step 1
B. Step 2
C. Step 3
D. Step 6
💡 Hint
Look for the step where the command 'mlflow ui' is executed.
If you skip step 2 (starting MLflow UI), what will you miss?
A. Viewing experiment results in a web browser
B. Installing MLflow
C. Logging parameters
D. Importing MLflow module
💡 Hint
Refer to the key moment about the purpose of the MLflow UI server.
Concept Snapshot
MLflow setup basics:
1. Install MLflow with 'pip install mlflow'.
2. Start UI server using 'mlflow ui' to view experiments.
3. In code, import mlflow, start run with 'mlflow.start_run()', and log parameters/metrics.
4. View results at http://localhost:5000.
5. Stop server with Ctrl+C when done.
Full Transcript
This visual execution guide shows how to set up MLflow for tracking machine learning experiments. First, install MLflow using pip. Then start the MLflow UI server with 'mlflow ui' to view experiment results in a browser. In your Python script, import mlflow, start a run, and log parameters and metrics using mlflow.log_param and mlflow.log_metric. These logs are saved and can be viewed in the UI. Finally, stop the MLflow server when finished. The execution table traces each step with commands and outputs, while the variable tracker shows how logged values change. Key moments clarify common confusions like why the UI server is needed. The quiz tests understanding of the setup and logging process.