
Evaluation of fine-tuned models in Prompt Engineering / GenAI - Model Metrics & Evaluation

Which metric matters for Evaluation of fine-tuned models and WHY

When we fine-tune a model, we want to see whether it performs better than before. Accuracy works for simple, balanced tasks, but precision, recall, and F1 score often matter more: they show how well the model finds the right answers and which kinds of mistakes it avoids. For classification or text generation tasks, we also track loss to confirm the model is improving during training. Choosing the right metric depends on the task and on which kinds of mistakes cost more.

Confusion matrix example for fine-tuned classification model
      |                 | Predicted Positive       | Predicted Negative       |
      |-----------------|--------------------------|--------------------------|
      | Actual Positive | True Positive (TP): 85   | False Negative (FN): 15  |
      | Actual Negative | False Positive (FP): 10  | True Negative (TN): 90   |

      Total samples = 85 + 15 + 10 + 90 = 200

      Precision = TP / (TP + FP) = 85 / (85 + 10) = 0.8947
      Recall = TP / (TP + FN) = 85 / (85 + 15) = 0.85
      F1 Score = 2 * (Precision * Recall) / (Precision + Recall) = 0.8718
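The calculations above can be verified with a short script (plain Python, no libraries assumed):

```python
# Metrics from the confusion matrix above, computed by hand.
tp, fn, fp, tn = 85, 15, 10, 90

precision = tp / (tp + fp)   # 85 / 95
recall = tp / (tp + fn)      # 85 / 100
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 4))   # 0.8947
print(round(recall, 2))      # 0.85
print(round(f1, 4))          # 0.8718
```

In practice, a library such as scikit-learn computes the same values from raw predictions, but the formulas stay identical.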
    
Precision vs Recall tradeoff with examples

Precision means when the model says "yes," it is usually right. This is important when false alarms are costly. For example, in spam detection, high precision means fewer good emails marked as spam.

Recall means the model finds most of the true positives. This is important when missing a positive is bad. For example, in medical diagnosis, high recall means fewer sick patients are missed.

Fine-tuning can improve one metric but may reduce the other. We must balance them based on the task.
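One common way to see this tradeoff is to move the decision threshold on the model's scores. The sketch below uses made-up scores and labels (not real model output) to show that raising the threshold lifts precision while lowering recall:

```python
# Illustrative sketch (made-up scores): moving the decision threshold
# trades recall for precision on the same model outputs.
scores = [0.95, 0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.2, 0.1, 0.05]
labels = [1,    1,   1,   0,   1,   1,   0,    0,   0,   0]  # 1 = positive

def precision_recall(threshold):
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    return tp / (tp + fp), tp / (tp + fn)

for t in (0.3, 0.5, 0.9):
    p, r = precision_recall(t)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f}")
```

Here the strict threshold (0.9) gives perfect precision but catches only 40% of the positives, while the loose threshold (0.3) catches everything at the cost of more false alarms.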

What "good" vs "bad" metric values look like for fine-tuned models

Good: precision and recall above roughly 0.85, an F1 score near 0.9 or higher, and steadily decreasing loss during training. These thresholds are rules of thumb and vary by task, but together they indicate the model predicts well and keeps learning from the data.

Bad: High accuracy but very low recall (e.g., recall 0.2) means the model misses many true cases. Or if loss stops improving or increases, the model may not be learning well.

Common pitfalls in evaluating fine-tuned models
  • Accuracy paradox: High accuracy can be misleading if data is unbalanced.
  • Data leakage: Using test data during training inflates metrics falsely.
  • Overfitting: Model performs well on training but poorly on new data.
  • Ignoring task needs: Using wrong metrics for the problem can hide issues.
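The accuracy paradox from the first bullet is easy to demonstrate. This sketch uses a synthetic imbalanced dataset where a degenerate model that always predicts "negative" still scores 99% accuracy:

```python
# Accuracy paradox sketch: on imbalanced data, a model that always
# predicts "negative" scores high accuracy but has zero recall.
labels = [1] * 10 + [0] * 990   # only 1% of samples are positive
preds = [0] * 1000              # degenerate "always negative" model

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
recall = tp / (tp + fn)

print(accuracy)  # 0.99
print(recall)    # 0.0
```

A model like this is useless despite the headline accuracy number, which is why recall (or F1) must be checked on imbalanced tasks.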
Self-check question

Your fine-tuned model has 98% accuracy but only 12% recall on fraud detection. Is it good for production? Why or why not?

Answer: No, it is not good. Even though accuracy is high, the model misses 88% of fraud cases (low recall). This means many frauds go undetected, which is risky. For fraud detection, high recall is critical to catch as many frauds as possible.
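The "misses 88%" figure follows directly from the recall definition, since the missed fraction is one minus recall:

```python
# Reading the self-check numbers: with recall = 0.12, the fraction
# of fraud cases the model misses is 1 - recall.
recall = 0.12
missed = 1 - recall
print(f"{missed:.0%} of fraud cases go undetected")  # 88%
```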

Key Result
For fine-tuned models, balance precision and recall to ensure meaningful improvements beyond accuracy.