
Compute resource management in MLOps - Commands & Configuration

Introduction
When running machine learning tasks, your machine needs enough compute capacity (CPU, GPU, and memory) to finish jobs quickly and without errors. Compute resource management lets you control and allocate these resources deliberately, so your tasks run smoothly without wasting capacity or crashing.
When training a machine learning model that needs a GPU to speed up calculations.
When running multiple experiments on the same server and you want to share resources fairly.
When you want to limit how much CPU or memory a training job can use to avoid slowing down other tasks.
When you need to track which resources your model training used to optimize future runs.
When deploying models and you want to assign specific hardware to each model for better performance.
Commands
This command starts running your ML project with MLflow and assigns it to an experiment named 'compute_resource_test' to organize runs.
Terminal
mlflow run . --experiment-name compute_resource_test
Expected Output
2024/06/01 12:00:00 INFO mlflow.projects: === Run (ID '123abc') launched ===
2024/06/01 12:00:01 INFO mlflow.projects: === Run (ID '123abc') succeeded ===
--experiment-name - Assigns the run to a named experiment for tracking.
This command runs the ML project and requests GPU usage by passing the parameter 'use_gpu=true'. Note that 'use_gpu' is not a built-in MLflow option: it must be declared as a parameter in the project's MLproject file, and the training code itself must act on it.
Terminal
mlflow run . -P use_gpu=true
Expected Output
2024/06/01 12:05:00 INFO mlflow.projects: === Run (ID '456def') launched ===
2024/06/01 12:05:01 INFO mlflow.projects: Using GPU for training
2024/06/01 12:10:00 INFO mlflow.projects: === Run (ID '456def') succeeded ===
-P - Passes parameters to the ML project to control resource usage.
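Because MLflow only forwards '-P' values as strings, the entry-point script has to interpret them itself. A minimal sketch of how a training script might handle the 'use_gpu' parameter (the script name, argument wiring, and `pick_device` helper are assumptions for illustration, not part of MLflow):

```python
import argparse

def pick_device(use_gpu: str) -> str:
    # MLflow forwards -P values as plain strings, so the script
    # must interpret 'true'/'false' itself (hypothetical helper).
    return "cuda" if use_gpu.lower() == "true" else "cpu"

# Simulate `mlflow run . -P use_gpu=true`, which would invoke the
# entry point roughly like: python train.py --use_gpu true
parser = argparse.ArgumentParser()
parser.add_argument("--use_gpu", default="false")
args = parser.parse_args(["--use_gpu", "true"])
device = pick_device(args.use_gpu)
print(f"training on: {device}")  # training on: cuda
```

The training code would then place its model and tensors on `device`; without this plumbing, passing '-P use_gpu=true' has no effect.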
This command shows the current GPU usage and processes using the GPU to help you monitor compute resources.
Terminal
nvidia-smi
Expected Output
Tue Jun  1 12:10:00 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.13    Driver Version: 525.60.13    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:1E.0 Off |                    0 |
| N/A   55C    P0    30W /  70W |    500MiB / 15079MiB |      20%     Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1234      C   python                           480MiB  |
+-----------------------------------------------------------------------------+
This command runs the ML project, limiting it to at most 4 CPU cores and 8192 MB of memory to avoid overloading the system. As with 'use_gpu', 'max_cpu' and 'max_memory' are parameters your MLproject file must define; MLflow forwards them, and the training code is responsible for actually enforcing the limits.
Terminal
mlflow run . -P max_cpu=4 -P max_memory=8192
Expected Output
2024/06/01 12:15:00 INFO mlflow.projects: === Run (ID '789ghi') launched ===
2024/06/01 12:15:01 INFO mlflow.projects: Limiting CPU to 4 cores and memory to 8192 MB
2024/06/01 12:20:00 INFO mlflow.projects: === Run (ID '789ghi') succeeded ===
-P - Passes resource limits as parameters to the ML project.
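Since MLflow does not enforce such limits itself, the training script (or a wrapper around it) has to apply them. One way to do that on Linux with only the standard library, using `os.sched_setaffinity` to pin CPU cores and `resource.setrlimit` to cap address space; the parameter names mirror the 'max_cpu'/'max_memory' parameters above, and this is a sketch of one enforcement strategy, not the only one (cgroups or container limits are common alternatives):

```python
import os
import resource

def apply_limits(max_cpu: int, max_memory_mb: int) -> set:
    """Restrict this process to max_cpu cores and max_memory_mb MiB
    of address space (Linux-only sketch; hypothetical helper)."""
    # Pin the process to the first max_cpu cores it may currently use.
    available = sorted(os.sched_getaffinity(0))
    chosen = set(available[:max_cpu])
    os.sched_setaffinity(0, chosen)
    # Cap the virtual address space; allocations beyond it will fail.
    limit_bytes = max_memory_mb * 1024 * 1024
    resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))
    return chosen

cores = apply_limits(max_cpu=4, max_memory_mb=8192)
print(f"running on cores {sorted(cores)}")
```

The training script would call `apply_limits` once at startup, before loading data or building the model, so the limits cover the whole run.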
Key Concept

If you remember nothing else from this pattern, remember: controlling compute resources ensures your ML tasks run efficiently without crashing or slowing down other work.

Common Mistakes
Not specifying resource limits when running heavy ML jobs.
This can cause your computer to freeze or slow down because the job uses too much CPU or memory.
Always pass parameters to limit CPU and memory usage when running ML projects on shared machines.
Assuming GPU will be used without enabling it explicitly.
Your ML job might run on CPU only, making it slower if GPU is available but not requested.
Use parameters like 'use_gpu=true' to tell your ML project to use GPU resources.
Ignoring GPU usage monitoring.
You might overload the GPU or not notice if your job is actually using it.
Run 'nvidia-smi' regularly to check GPU usage and adjust your jobs accordingly.
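To avoid the second mistake programmatically, you can check for a GPU before launching the run. The sketch below only verifies that the nvidia-smi binary exists on PATH, which is a cheap proxy for an installed NVIDIA driver; frameworks have their own, more reliable checks (e.g. PyTorch's `torch.cuda.is_available()`):

```python
import shutil

def gpu_driver_present() -> bool:
    # True if nvidia-smi is on PATH -- a cheap proxy for
    # "an NVIDIA driver is installed on this machine".
    return shutil.which("nvidia-smi") is not None

# Decide the -P flag to pass before launching `mlflow run`.
use_gpu = "true" if gpu_driver_present() else "false"
print(f"would run: mlflow run . -P use_gpu={use_gpu}")
```

This keeps the same job definition working on both GPU and CPU-only machines instead of silently falling back to CPU.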
Summary
Use 'mlflow run' with -P parameters to control CPU, memory, and GPU usage for ML tasks (the project must define and enforce those parameters).
Check GPU usage with 'nvidia-smi' to monitor resource consumption.
Setting resource limits prevents system overload and ensures fair sharing on shared machines.