Performance: Custom evaluation metrics
MEDIUM IMPACT
How a custom evaluation metric is implemented directly affects how quickly model outputs can be scored: an inefficient metric can become the bottleneck of the entire evaluation loop.
```python
def fast_metric(output, reference):
    # Set intersection: roughly O(len(output) + len(reference))
    output_set = set(output)
    reference_set = set(reference)
    score = len(output_set & reference_set) / len(reference_set)
    return score
```
```python
def slow_metric(output, reference):
    # Nested loops: O(len(output) * len(reference)) comparisons
    score = 0
    for o in output:
        for r in reference:
            if o == r:
                score += 1
    return score / len(reference)
```
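A quick sanity check (a sketch; the sample token lists are illustrative) confirms that both metrics produce the same score when the inputs contain no duplicates. The definitions are repeated so the snippet runs on its own:

```python
def fast_metric(output, reference):
    # Set intersection: roughly O(len(output) + len(reference))
    reference_set = set(reference)
    return len(set(output) & reference_set) / len(reference_set)

def slow_metric(output, reference):
    # Nested loops: O(len(output) * len(reference)) comparisons
    score = 0
    for o in output:
        for r in reference:
            if o == r:
                score += 1
    return score / len(reference)

output = ["cat", "dog", "bird"]
reference = ["cat", "dog", "fish", "owl"]
print(fast_metric(output, reference))  # 0.5 (2 of 4 reference items matched)
print(slow_metric(output, reference))  # 0.5
```

Note that the two versions can diverge when `output` or `reference` contains duplicates, since sets deduplicate while the nested loops count every matching pair.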
| Pattern | Time Complexity | Scaling Behavior | Verdict |
|---|---|---|---|
| Nested loops metric | O(n × m) | Quadratic blow-up on large inputs | [X] Bad |
| Set operations metric | O(n + m) | Near-linear, scales to large inputs | [OK] Good |
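The verdicts above can be checked with a minimal timing sketch using `timeit` (list sizes and the number of repetitions are arbitrary; definitions are repeated so the snippet is self-contained):

```python
import timeit

def fast_metric(output, reference):
    # Set intersection: roughly O(len(output) + len(reference))
    reference_set = set(reference)
    return len(set(output) & reference_set) / len(reference_set)

def slow_metric(output, reference):
    # Nested loops: O(len(output) * len(reference)) comparisons
    score = 0
    for o in output:
        for r in reference:
            if o == r:
                score += 1
    return score / len(reference)

# Overlapping synthetic vocabularies: 500 of 1000 reference tokens match.
output = [f"tok{i}" for i in range(1000)]
reference = [f"tok{i}" for i in range(500, 1500)]

fast_t = timeit.timeit(lambda: fast_metric(output, reference), number=100)
slow_t = timeit.timeit(lambda: slow_metric(output, reference), number=100)
print(f"fast: {fast_t:.4f}s  slow: {slow_t:.4f}s")
```

On inputs of this size the nested-loop version performs a million comparisons per call, so the gap widens rapidly as the lists grow.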