How to Use Azure ML for MLOps: Setup, Deployment, and Monitoring
Use Azure ML to manage the full MLOps lifecycle by creating workspaces, training models, deploying them as web services, and monitoring performance. Azure ML provides tools such as Azure ML Pipelines for automation and the Model Registry for version control to streamline continuous integration and delivery of ML models.
Syntax
Azure ML uses Python SDK commands to manage MLOps workflows. Key parts include:
- Workspace: Connects to your Azure ML environment.
- Experiment: Tracks model training runs.
- Environment: Defines dependencies for training and deployment.
- Pipeline: Automates workflows like data prep, training, and deployment.
- Model: Registers and versions trained models.
- ManagedOnlineEndpoint and ManagedOnlineDeployment: Configuration for deploying models as web services.
```python
from azure.ai.ml import MLClient, command
from azure.ai.ml.entities import (
    Environment,
    Model,
    ManagedOnlineEndpoint,
    ManagedOnlineDeployment,
)
from azure.identity import DefaultAzureCredential

# Connect to workspace
ml_client = MLClient(
    DefaultAzureCredential(), subscription_id, resource_group, workspace_name
)

# Register model
model = ml_client.models.create_or_update(
    Model(name="my-model", path="./model.pkl")
)

# Define environment (a conda file plus a base image)
env = Environment(
    name="my-env",
    conda_file="env.yml",
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04",
)

# Create a training step; the experiment is tracked via experiment_name
train_step = command(
    code="./src",
    command="python train.py",
    environment=env,
    experiment_name="my-experiment",
)

# Deploy model
endpoint = ManagedOnlineEndpoint(name="my-endpoint")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

deployment = ManagedOnlineDeployment(
    name="blue", endpoint_name="my-endpoint", model=model, environment=env
)
ml_client.online_deployments.begin_create_or_update(deployment).result()

# Invoke endpoint (request data is passed as a JSON file)
response = ml_client.online_endpoints.invoke(
    endpoint_name="my-endpoint", request_file="request.json"
)
```
Example
This example shows how to train a simple model, register it, deploy it as a web service, and test the deployment using Azure ML SDK.
```python
import json

from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    Environment,
    Model,
    ManagedOnlineEndpoint,
    ManagedOnlineDeployment,
)
from azure.identity import DefaultAzureCredential

# Connect to Azure ML workspace
ml_client = MLClient(
    DefaultAzureCredential(),
    "your-subscription-id",
    "your-resource-group",
    "your-workspace",
)

# Register a model
model = Model(path="./model.pkl", name="sample-model")
ml_client.models.create_or_update(model)

# Define environment (conda file plus base image)
env = Environment(
    name="sample-env",
    conda_file="./conda.yml",
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04",
)
ml_client.environments.create_or_update(env)

# Create or update endpoint
endpoint = ManagedOnlineEndpoint(name="sample-endpoint")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deploy model to endpoint
deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="sample-endpoint",
    model=model,
    environment=env,
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()

# Route all traffic to the "blue" deployment
endpoint.traffic = {"blue": 100}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Invoke the endpoint; input data is supplied as a JSON request file
with open("sample-request.json", "w") as f:
    json.dump({"data": [[1.0, 2.0, 3.0, 4.0]]}, f)

response = ml_client.online_endpoints.invoke(
    endpoint_name="sample-endpoint", request_file="sample-request.json"
)
print("Prediction response:", response)
```
Output
Prediction response: {"result": [0.75]}
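The response comes back as a JSON string, so downstream code typically parses it before use. A minimal sketch, where the raw string simply mirrors the example output above:

```python
import json

# Hypothetical raw response string, matching the example output above
raw_response = '{"result": [0.75]}'

parsed = json.loads(raw_response)
predictions = parsed["result"]
print("First prediction:", predictions[0])  # → First prediction: 0.75
```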
Common Pitfalls
- Not setting up authentication correctly: Always use DefaultAzureCredential or configure service principals properly.
- Forgetting to register models: Without registration, deployment and version control are difficult.
- Ignoring environment dependencies: Define all packages in conda.yml to avoid runtime errors.
- Not managing endpoint traffic: Deployments need traffic routing to serve requests correctly.
- Skipping monitoring: Use Azure ML's monitoring tools to track model performance and data drift.
```python
# Wrong: deploying without an environment
from azure.ai.ml.entities import ManagedOnlineDeployment

deployment = ManagedOnlineDeployment(
    name="bad-deploy",
    endpoint_name="my-endpoint",
    model=model,  # Missing environment causes runtime errors
)

# Right: include the environment
deployment = ManagedOnlineDeployment(
    name="good-deploy",
    endpoint_name="my-endpoint",
    model=model,
    environment=env,
)
```
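Azure ML ships built-in monitors for data drift, but the underlying idea can be illustrated with a toy check that needs no Azure API at all: compare a statistic of live traffic against the training data and flag large deviations. The function names and threshold below are illustrative, not part of Azure ML.

```python
# Toy data-drift check: compare feature means between training and live data.
# A simplified stand-in for Azure ML's built-in drift monitors.
def mean(values):
    return sum(values) / len(values)

def drift_detected(train_feature, live_feature, threshold=0.5):
    """Flag drift when the live mean deviates from the training mean
    by more than `threshold` in absolute terms."""
    return abs(mean(live_feature) - mean(train_feature)) > threshold

train = [1.0, 2.0, 3.0, 4.0]
live_ok = [1.1, 2.2, 2.9, 4.1]
live_drifted = [3.0, 4.5, 5.0, 6.0]

print(drift_detected(train, live_ok))       # False
print(drift_detected(train, live_drifted))  # True
```

Real drift monitors use distribution-level tests rather than a single mean, but the trigger-on-deviation pattern is the same.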
Quick Reference
Key Azure ML components for MLOps:
- Workspace: Central place for all ML assets.
- Experiment: Tracks training runs.
- Model Registry: Stores and versions models.
- Pipeline: Automates workflows.
- Endpoint: Deploys models as REST services.
- Monitoring: Tracks model health and data drift.
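To make the Pipeline idea concrete without requiring a workspace, here is a toy sketch of steps chained in order, each passing its output to the next; Azure ML Pipelines wire real compute steps together in the same spirit. All step names and functions here are illustrative.

```python
# Conceptual sketch of a pipeline: an ordered list of named steps,
# mimicking how Azure ML Pipelines chain data prep, training, and deployment.
def run_pipeline(steps, payload):
    for name, step in steps:
        print(f"Running step: {name}")
        payload = step(payload)
    return payload

steps = [
    ("prep",   lambda data: [x * 2 for x in data]),          # stand-in for data prep
    ("train",  lambda data: {"model": sum(data)}),           # stand-in for training
    ("deploy", lambda m: f"deployed:{m['model']}"),          # stand-in for deployment
]

result = run_pipeline(steps, [1, 2, 3])
print(result)  # deployed:12
```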
Key Takeaways
Azure ML manages the full MLOps lifecycle from training to deployment and monitoring.
Always register models and define environments to ensure reproducibility and smooth deployment.
Use Azure ML Pipelines to automate workflows and improve collaboration.
Deploy models as managed endpoints to serve predictions via REST APIs.
Monitor deployed models regularly to detect performance degradation or data drift.
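Since managed endpoints are ultimately plain REST services, any HTTP client can call them once you have the scoring URI and a key. A hedged sketch follows: the URI and key are placeholders, and the `requests` call is shown commented out so the snippet stays self-contained.

```python
import json

# Placeholder values; substitute your endpoint's real scoring URI and key
scoring_uri = "https://sample-endpoint.eastus.inference.ml.azure.com/score"
api_key = "<your-endpoint-key>"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
}
body = json.dumps({"data": [[1.0, 2.0, 3.0, 4.0]]})

# The actual call would look like:
#   import requests
#   response = requests.post(scoring_uri, data=body, headers=headers)
#   print(response.json())
print("Payload:", body)
```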