Why Governance Builds Trust in ML Systems (MLOps): Performance Analysis
We want to understand how the time needed for governance checks grows as an ML system scales.
How does the effort to maintain trust through governance change as models and audit data accumulate?
Analyze the time complexity of the following code snippet.
```python
for model in ml_models:                  # outer loop: every model under governance
    for record in model.audit_logs:      # inner loop: every audit record of that model
        check_compliance(record)         # one compliance check per record
    update_trust_score(model)            # one score update per model
```
This code checks compliance for each audit record in every ML model, then updates a trust score for that model.
Identify the repeated work: loops, recursion, and traversals over collections.
- Primary operation: Checking each audit log record for every model.
- How many times: Once for each record inside each model, so the total number of checks grows with the number of models times the number of records per model.
As the number of models or audit records grows, the total checks increase by multiplying these two numbers.
| Input Size (models x records) | Approx. Operations |
|---|---|
| 10 models x 10 records | 100 checks |
| 100 models x 100 records | 10,000 checks |
| 1000 models x 1000 records | 1,000,000 checks |
Pattern observation: The total work grows quickly as both models and records increase, multiplying together.
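The growth pattern in the table can be verified with a small counter. This is a minimal sketch: the increment stands in for the real `check_compliance` call, and the function name is hypothetical.

```python
def count_checks(num_models: int, records_per_model: int) -> int:
    """Count how many compliance checks the nested governance loop performs."""
    checks = 0
    for _ in range(num_models):              # outer loop: one pass per model
        for _ in range(records_per_model):   # inner loop: one pass per audit record
            checks += 1                      # stands in for check_compliance(record)
    return checks

print(count_checks(10, 10))      # 100
print(count_checks(100, 100))    # 10000
print(count_checks(1000, 1000))  # 1000000
```

The printed values match the table row by row: doubling either dimension doubles the work, and growing both together multiplies it.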
Time Complexity: O(m * r), where m is the number of models and r is the number of audit records per model.
This means the time needed grows proportionally to the number of models times the number of audit records per model. The `update_trust_score` call runs only once per model (O(m)), so it is dominated by the O(m * r) record checks.
[X] Wrong: "Checking governance logs takes the same time no matter how many models or records there are."
[OK] Correct: More models and records mean more checks, so the time grows with their product rather than staying fixed.
Understanding how governance checks scale helps you explain how to keep ML systems trustworthy as they grow, a key skill in real projects.
"What if we only checked a sample of audit records instead of all? How would the time complexity change?"