
When to fine-tune vs prompt engineer in Prompt Engineering / GenAI - Metrics Comparison

Metrics & Evaluation - When to fine-tune vs prompt engineer
Which metric matters and WHY

When deciding between fine-tuning a model and prompt engineering, the key metrics to watch are task accuracy, response relevance, and latency. Fine-tuning aims to improve accuracy and relevance by updating the model's weights on task-specific data, while prompt engineering tries to get better answers without changing the model at all. Measuring answer accuracy and quality under both approaches tells you which one works best for your task.

Confusion matrix or equivalent visualization
Task: Classify user intent from text

Confusion Matrix Example:
          Predicted
          Yes   No
Actual Yes  80   20
       No   15   85

- Fine-tuning can improve these numbers by learning from many labeled examples.
- Prompt engineering tries to reduce errors through clearer instructions and better question phrasing.
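As a sanity check, the matrix above can be turned into the standard metrics directly (a minimal sketch; the counts are the example numbers, not real data):

```python
# Counts from the confusion matrix above: TP=80, FN=20, FP=15, TN=85.
tp, fn, fp, tn = 80, 20, 15, 85

accuracy = (tp + tn) / (tp + fn + fp + tn)   # 165/200 = 0.825
precision = tp / (tp + fp)                   # 80/95  ~= 0.842
recall = tp / (tp + fn)                      # 80/100 = 0.800

print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f}")
```

Re-running this after a fine-tuning round or a prompt change shows which cells of the matrix actually moved.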
Precision vs Recall tradeoff with examples

Fine-tuning can improve both precision and recall by teaching the model new patterns. It is a good fit when you have many labeled examples and want consistent, high-quality results.

Prompt engineering is faster and cheaper but may improve precision or recall only slightly. It is useful when you need quick fixes or have limited data.

Example: For a customer support bot, fine-tuning can reduce missed questions (higher recall). Prompt engineering can help avoid wrong answers (higher precision) by clearer prompts.

What "good" vs "bad" metric values look like

Good: Accuracy above 85%, balanced precision and recall, fast response time.

Bad: Accuracy below 60%, very low recall (missing many correct answers), or very low precision (many wrong answers).

If prompt engineering cannot reach good metrics after reasonable iteration, fine-tuning is likely needed.
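The rules of thumb above can be encoded as a small helper. This is a hypothetical sketch: the `assess` function and its thresholds (85% "good" accuracy, 60% "bad" accuracy, balance within 10 points) are illustrative, not a standard:

```python
def assess(accuracy: float, precision: float, recall: float) -> str:
    """Label a metric triple as 'good', 'bad', or 'borderline' (illustrative thresholds)."""
    if accuracy >= 0.85 and abs(precision - recall) <= 0.10:
        return "good"          # high accuracy with balanced precision/recall
    if accuracy < 0.60 or precision < 0.30 or recall < 0.30:
        return "bad"           # low accuracy or a collapsed metric
    return "borderline"        # worth more prompt iteration before fine-tuning
```

For example, `assess(0.90, 0.88, 0.86)` returns `"good"`, while `assess(0.55, 0.50, 0.50)` returns `"bad"`.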

Common pitfalls in metrics
  • Accuracy paradox: High accuracy can be misleading if data is imbalanced.
  • Overfitting: Fine-tuned models may perform well on training data but poorly on new data.
  • Data leakage: Using test data during fine-tuning inflates metrics falsely.
  • Ignoring latency: Fine-tuning can increase response time, hurting user experience.
  • Prompt bias: Poor prompt design can hide model weaknesses.
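The accuracy paradox from the first bullet is easy to demonstrate with made-up imbalanced data: a model that always predicts the majority class looks accurate while catching zero positives.

```python
# Synthetic example: only 2% of requests are urgent.
labels = ["urgent"] * 2 + ["routine"] * 98
preds = ["routine"] * 100   # always predict the majority class

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
tp = sum(p == y == "urgent" for p, y in zip(preds, labels))
recall = tp / labels.count("urgent")

print(accuracy, recall)   # 0.98 0.0
```

98% accuracy, 0% recall: on imbalanced data, always report precision and recall alongside accuracy.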
Self-check question

Your chatbot has 98% accuracy but only 12% recall on urgent requests. Is it good for production? Why or why not?

Answer: No, because it misses most urgent requests (low recall). This can cause serious problems. You should improve recall, possibly by fine-tuning or better prompt engineering.
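One set of illustrative counts that reproduces the self-check numbers (assumed for the example, not from a real system): 10,000 requests, 200 of them urgent, with the model catching only 24.

```python
# Hypothetical counts: TP=24, FN=176 (urgent requests), FP=24 false alarms.
tp, fn, fp = 24, 176, 24
tn = 10_000 - tp - fn - fp

accuracy = (tp + tn) / 10_000   # 0.98 -> looks great on paper
recall = tp / (tp + fn)         # 0.12 -> misses 88% of urgent requests
```

The headline accuracy is driven almost entirely by the 9,776 easy negatives; the metric that matters here is recall on the urgent class.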

Key Result
Fine-tuning improves accuracy and recall by changing the model, while prompt engineering tweaks inputs for quick gains; choose based on data, cost, and desired quality.