
Audit trails for model decisions in MLOps - Commands & Configuration

Introduction
When you deploy machine learning models, it's important to keep a record of what decisions each model made and why. Audit trails capture these decisions so you can review, explain, and improve your model over time. Typical situations where an audit trail helps:
When you want to understand why a model gave a certain prediction to a customer.
When you need to comply with rules that require explaining automated decisions.
When you want to track model performance changes over time by recording inputs and outputs.
When debugging model errors by reviewing past decisions and their context.
When sharing model results with team members who need to verify decisions.
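In every one of these situations, the audit record needs the same core fields: the inputs, the prediction, an explanation, and a timestamp. Independent of any particular tool, here is a minimal sketch using only the Python standard library (the function name and field names are illustrative, not a required schema):

```python
import json
from datetime import datetime, timezone

def build_audit_record(input_data, prediction, explanation):
    """Bundle one model decision into a JSON-serializable audit record."""
    return {
        # UTC timestamp so records from different machines stay comparable
        'timestamp': datetime.now(timezone.utc).isoformat(),
        'input': input_data,
        'prediction': prediction,
        'explanation': explanation,
    }

record = build_audit_record(
    {'age': 30, 'income': 50000},
    'approved',
    'High income and age meet criteria',
)
print(json.dumps(record, indent=2))
```

Tools like MLflow, covered below, store the same information for you and add a browsable UI on top.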
Commands
This command installs MLflow, a tool that helps track machine learning experiments and decisions.
Terminal
pip install mlflow
Expected Output
Collecting mlflow
  Downloading mlflow-2.6.1-py3-none-any.whl (18.7 MB)
Installing collected packages: mlflow
Successfully installed mlflow-2.6.1
Starts the MLflow tracking server UI locally so you can view audit trails of model decisions in your browser.
Terminal
mlflow ui
Expected Output
2024/06/01 12:00:00 INFO mlflow.server: Starting MLflow server...
2024/06/01 12:00:00 INFO mlflow.server: Listening at http://127.0.0.1:5000
--port - Change the port number where the UI runs (the default is 5000)
Runs a Python script that logs model inputs, outputs, and explanations to MLflow for audit trail purposes.
Terminal
python log_model_decision.py
Expected Output
2024/06/01 12:01:00 INFO mlflow.tracking.fluent: Experiment with name 'ModelAudit' does not exist. Creating a new experiment.
2024/06/01 12:01:00 INFO mlflow.tracking.fluent: Run started with ID '1234567890abcdef'
Decision logged: input={'age': 30, 'income': 50000}, prediction='approved', explanation='High income and age meet criteria'
Run ended with status 'FINISHED'
Key Concept

If you remember nothing else from this pattern, remember: logging inputs, outputs, and explanations together creates a clear audit trail for every model decision.

Code Example
MLOps
import mlflow

mlflow.set_experiment('ModelAudit')

with mlflow.start_run():
    input_data = {'age': 30, 'income': 50000}
    prediction = 'approved'
    explanation = 'High income and age meet criteria'

    # Log the exact input features used for this decision
    mlflow.log_params(input_data)
    # Log the prediction itself, plus a confidence score as a metric
    mlflow.log_param('prediction', prediction)
    mlflow.log_metric('prediction_score', 0.85)
    # Attach a human-readable explanation as a text artifact
    mlflow.log_text(explanation, 'explanation.txt')

    print(f"Decision logged: input={input_data}, prediction='{prediction}', explanation='{explanation}'")
Output
Decision logged: input={'age': 30, 'income': 50000}, prediction='approved', explanation='High income and age meet criteria'
Common Mistakes
Mistake: Not logging the input data along with the prediction.
Why it matters: Without inputs, you cannot understand what caused the model's decision.
Fix: Always log the exact input features used for each prediction.

Mistake: Not starting the MLflow UI before trying to view audit trails.
Why it matters: You won't be able to see the logged data without the UI running.
Fix: Run 'mlflow ui' in a terminal to start the server before accessing the audit trail.

Mistake: Logging only the prediction without any explanation or context.
Why it matters: This makes it hard to explain or debug decisions later.
Fix: Include a human-readable explanation or model reasoning with each logged decision.
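One way to guard against all three logging mistakes at once is a small validation helper that refuses an incomplete decision before anything is written. This is a hypothetical function, not part of MLflow; a real version would forward the validated record to calls like mlflow.log_params and mlflow.log_text:

```python
def validate_decision(input_data, prediction, explanation):
    """Raise ValueError unless a decision has inputs, a prediction, and an explanation."""
    if not input_data:
        raise ValueError('Audit record is missing input data')
    if prediction is None:
        raise ValueError('Audit record is missing the prediction')
    if not explanation:
        raise ValueError('Audit record is missing an explanation')
    # Return the complete record, ready to be handed to a logging backend
    return {'input': input_data, 'prediction': prediction, 'explanation': explanation}

# A complete record passes through unchanged
validate_decision({'age': 30, 'income': 50000}, 'approved',
                  'High income and age meet criteria')
```

Calling this at the top of your logging code turns a silent gap in the audit trail into an immediate, visible error.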
Summary
Install MLflow to track and log model decisions.
Run the MLflow UI to view audit trails in a web browser.
Log inputs, predictions, and explanations together for clear audit trails.