
Responsible AI practices in MLOps - Commands & Configuration

Introduction
Responsible AI practices help ensure that AI models are fair, safe, and trustworthy. They guide how to build and deploy AI systems that respect privacy, avoid bias, and provide clear explanations.
When you want to check if your AI model treats all groups fairly before deployment
When you need to track and explain AI decisions to users or regulators
When you want to monitor AI model behavior continuously to catch errors or bias
When you must protect sensitive data used in AI training and predictions
When you want to document AI model development steps for transparency
Commands
This command runs an MLflow example project. MLflow itself does not perform fairness or explainability checks; those are added to the project's training script so their metrics and reports are logged alongside the run.
Terminal
mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=0.5
Expected Output
2024/06/01 12:00:00 INFO mlflow.projects: === Running command 'python train.py --alpha 0.5' in run with ID '123abc' ===
Training model with alpha=0.5
Logging metrics and artifacts
Run completed successfully
-P - Passes a key=value parameter to the MLflow project; here it sets the model hyperparameter alpha to 0.5
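Inside the project's training script, a fairness check can be as simple as comparing positive-prediction rates across groups and logging the gap as a metric. A minimal sketch (the sensitive-attribute data here is illustrative; in a real pipeline you would call `mlflow.log_metric("demographic_parity_difference", dpd)` so the value appears in the tracking UI):

```python
# Hypothetical fairness check: demographic parity difference between groups.
# In train.py you would log the result with mlflow.log_metric (omitted here).

def demographic_parity_difference(preds, groups):
    """Absolute gap between the highest and lowest positive-prediction rates."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    vals = list(rates.values())
    return max(vals) - min(vals)

# Toy predictions for two groups "A" and "B" (illustrative data only)
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
dpd = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {dpd:.2f}")  # 0.75 vs 0.25 -> prints 0.50
```

A value near 0 means the two groups receive positive predictions at similar rates; a large gap is a signal to investigate before deployment.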
Starts the MLflow tracking UI to visualize metrics, parameters, and artifacts collected during runs, including any fairness reports and explanations your training script logged.
Terminal
mlflow ui
Expected Output
2024/06/01 12:01:00 INFO mlflow.server: Starting MLflow UI at http://127.0.0.1:5000
Serves the trained model via a REST API. If the model was logged with a signature, inputs are validated at serving time; explanations are returned only if the model's wrapper is built to produce them.
Terminal
mlflow models serve -m runs:/123abc/model -p 1234
Expected Output
2024/06/01 12:02:00 INFO mlflow.models: Serving model from run 123abc on port 1234
-m - Specifies the model path to serve
-p - Sets the port number for the model server
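Plain `mlflow models serve` returns predictions only; the explanation field shown in the sample response below assumes the model was logged as a custom wrapper that computes per-feature importances. A hedged, self-contained sketch of the wrapper's core logic (the weights and the importance heuristic are hypothetical):

```python
# Hypothetical serving-side logic: score a row with linear weights and attach
# a simple explanation (normalized absolute contribution of each feature).
# In MLflow this would live inside a custom pyfunc model's predict() method.

def predict_with_explanation(weights, features):
    contributions = [abs(w * x) for w, x in zip(weights, features)]
    total = sum(contributions) or 1.0
    importance = [round(c / total, 2) for c in contributions]
    prediction = int(sum(w * x for w, x in zip(weights, features)) > 0)
    return {
        "predictions": [prediction],
        "explanations": {"feature_importance": importance},
    }

# Toy weights for the four iris features (illustrative only)
result = predict_with_explanation([0.5, -0.2, 0.1, 0.3], [5.1, 3.5, 1.4, 0.2])
print(result)
```

The returned dictionary mirrors the JSON shape a client would receive, so stakeholders can see which features drove each prediction.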
Sends a prediction request to the served model. The sample response below assumes the model's wrapper returns an explainability report alongside the prediction.
Terminal
curl -X POST http://127.0.0.1:1234/invocations -H 'Content-Type: application/json' -d '{"inputs": [[5.1, 3.5, 1.4, 0.2]]}'
Expected Output
{"predictions": [0], "explanations": {"feature_importance": [0.8, 0.1, 0.05, 0.05]}}
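On the client side, a response of this shape can be parsed with the standard library. A minimal sketch, using a hardcoded body that matches the sample response above (a real client would read it from the HTTP response):

```python
import json

# Hypothetical response body, matching the sample above
body = '{"predictions": [0], "explanations": {"feature_importance": [0.8, 0.1, 0.05, 0.05]}}'
resp = json.loads(body)

prediction = resp["predictions"][0]
importances = resp["explanations"]["feature_importance"]
# Index of the feature that contributed most to the prediction
top_feature = max(range(len(importances)), key=importances.__getitem__)
print(f"Prediction: {prediction}, most influential feature index: {top_feature}")
# prints: Prediction: 0, most influential feature index: 0
```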
Key Concept

If you remember nothing else from responsible AI, remember: always track, explain, and monitor your AI models to ensure fairness and trust.

Code Example
MLOps
import mlflow
from mlflow.models.signature import infer_signature
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
import pandas as pd

# Load data
iris = load_iris()
X = pd.DataFrame(iris.data, columns=iris.feature_names)
y = iris.target

# Train model
model = LogisticRegression(max_iter=200)
model.fit(X, y)

# Infer a signature so served inputs are validated against the training schema
signature = infer_signature(X, model.predict(X))
with mlflow.start_run() as run:
    mlflow.sklearn.log_model(model, "model", signature=signature)
    mlflow.log_param("max_iter", 200)
    # Training-set accuracy, for illustration; evaluate on held-out data in practice
    mlflow.log_metric("accuracy", model.score(X, y))
print(f"Model logged in run {run.info.run_id}")
Output
Common Mistakes
Ignoring fairness checks during model training
This can lead to biased models that harm certain groups or users.
Include fairness metrics and tests as part of your model training and evaluation pipeline.
Not enabling explainability features when serving models
Users and stakeholders cannot understand or trust AI decisions without explanations.
Serve models with explainability tools that provide feature importance or decision reasons.
Failing to monitor models after deployment
Models can degrade or become biased over time without detection.
Set up continuous monitoring and alerting for model performance and fairness.
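A monitoring check does not need to be elaborate to be useful. A minimal sketch of the alerting idea (the threshold logic and data are illustrative, not an MLflow feature): compare recent prediction outcomes against the training baseline and flag degradation.

```python
# Hypothetical post-deployment check: alert when rolling accuracy drops
# more than `tolerance` below the training baseline.

def needs_alert(baseline_acc, recent_outcomes, tolerance=0.05):
    """recent_outcomes: list of 1 (correct) / 0 (incorrect) predictions."""
    recent_acc = sum(recent_outcomes) / len(recent_outcomes)
    return recent_acc < baseline_acc - tolerance

# Recent accuracy 0.625 vs. baseline 0.95 -> prints True
print(needs_alert(0.95, [1, 1, 0, 1, 0, 1, 1, 0]))
```

In production this check would run on a schedule over a sliding window of labeled predictions, with the same pattern applied to fairness metrics so drift in either dimension triggers an alert.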
Summary
Run MLflow projects whose training scripts include responsible AI checks such as fairness and explainability.
Use MLflow UI to visualize model fairness, metrics, and explanations.
Serve models with explainability enabled for trustworthy predictions.
Send prediction requests and receive explanations to understand AI decisions.