
Code generation in Prompt Engineering / GenAI - Model Metrics & Evaluation

Metrics & Evaluation - Code generation
Which metric matters for Code Generation and WHY

For code generation models, the main goal is to produce correct, useful code. Similarity metrics like BLEU and CodeBLEU measure how closely the generated code matches a reference solution, but they only check textual overlap, not whether the code works.

Therefore, functional correctness is key: the generated code must run without errors and produce the expected results. The standard metric is pass@k, which measures whether at least one of k generated samples passes all unit tests.

In summary, functional correctness metrics matter most because they show if the code actually works, not just if it looks similar.
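A tiny illustration of why similarity is not correctness. The two hypothetical snippets below are nearly identical token-for-token, so a similarity metric would score the second one highly, yet only a functional test reveals that it is wrong:

```python
def add_up_reference(nums):
    """Reference solution: sum of a list."""
    total = 0
    for n in nums:
        total += n
    return total

def add_up_generated(nums):
    """Generated code with a subtle bug: it skips the last element."""
    total = 0
    for n in nums[:-1]:  # almost identical text, different behavior
        total += n
    return total

# A functional test catches the difference immediately.
assert add_up_reference([1, 2, 3]) == 6
assert add_up_generated([1, 2, 3]) != 6  # looks similar, fails the test
```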

Confusion Matrix or Equivalent Visualization

Code generation is not a classification task, so confusion matrices don't apply directly. Instead, we use pass@k metrics.

pass@1 = (problems solved by the first generated sample) / (total problems)
pass@5 = (problems solved by any of 5 generated samples) / (total problems)

Example:

Total problems: 100
pass@1: 60 problems solved (60% success rate)
pass@5: 85 problems solved (85% success rate)

This shows how often the model generates at least one correct solution among multiple tries.
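In practice, pass@k is usually computed per problem with the unbiased estimator from the HumanEval paper (Chen et al., 2021): generate n samples, count the c that pass all tests, and estimate the probability that a random draw of k samples contains at least one correct one. A minimal sketch:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples generated for the problem
    c: number of those samples that pass all tests
    k: attempt budget being evaluated
    """
    if n - c < k:
        # Fewer incorrect samples than k, so any k-subset must contain
        # at least one correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples generated, 3 of them correct.
print(round(pass_at_k(10, 3, 1), 3))  # 0.3
print(round(pass_at_k(10, 3, 5), 3))  # 0.917
```

The per-problem estimates are then averaged over the whole benchmark to get the reported pass@k.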

Precision vs Recall Tradeoff (or Equivalent) with Examples

In code generation, precision and recall don't apply like in classification. Instead, there is a tradeoff between generating many code options (diversity) and generating correct code (accuracy).

If the model generates only one snippet per problem (low diversity), it may miss the correct solution, so pass@1 is the ceiling. If it samples many snippets (high diversity), at least one is more likely to be correct, but many of the candidates will be wrong and must be filtered by tests.

Example:

  • Generating 1 snippet: 60% pass rate
  • Generating 10 snippets: 95% pass rate

This shows generating more options increases chances of correctness but costs more computation.
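Under the simplifying (and optimistic) assumption that samples are independent, the benefit of extra samples follows directly from basic probability; real samples are correlated, which is why observed pass rates like the 95% above sit below this idealized bound:

```python
def chance_at_least_one_correct(p, k):
    """If each sample independently passes with probability p, the chance
    that at least one of k samples passes is 1 - (1 - p) ** k."""
    return 1 - (1 - p) ** k

print(round(chance_at_least_one_correct(0.6, 1), 2))   # 0.6
print(round(chance_at_least_one_correct(0.6, 10), 4))  # 0.9999
```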

What "Good" vs "Bad" Metric Values Look Like for Code Generation

Good:

  • High pass@1 (e.g., > 70%) means the first generated code is often correct.
  • High pass@5 or pass@10 (e.g., > 90%) means the model reliably produces a correct solution within a few tries.
  • Low syntax errors and runtime errors in generated code.

Bad:

  • Low pass@1 (e.g., < 30%) means the model rarely gets it right on the first try.
  • Low pass@k even for large k means the model struggles to generate any correct code.
  • High rate of code that does not compile or crashes.

Common Metrics Pitfalls in Code Generation
  • Relying only on similarity metrics: BLEU or CodeBLEU can be high even if code is incorrect or does not run.
  • Ignoring functional correctness: Code that looks good but fails tests is useless.
  • Overfitting to test cases: Models might memorize solutions instead of generalizing.
  • Data leakage: If test problems appear in training data, metrics will be misleadingly high.
  • Ignoring diversity: Generating only one code snippet can hide the model's ability to find correct solutions among multiple tries.
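Several of these pitfalls disappear once evaluation actually executes the generated code against tests. Below is a minimal sketch of such a check; the names (`passes_tests`, `solution`) are illustrative, and a real harness must run untrusted model output in a sandbox, not via bare `exec`:

```python
def passes_tests(code_str, test_cases, func_name="solution"):
    """Execute a generated snippet in a fresh namespace and check it
    against (args, expected_output) pairs.

    CAUTION: exec on untrusted model output is unsafe; production
    harnesses run this in an isolated sandbox with timeouts.
    """
    namespace = {}
    try:
        exec(code_str, namespace)
        fn = namespace[func_name]
        return all(fn(*args) == expected for args, expected in test_cases)
    except Exception:
        # Syntax errors, missing function, crashes: all count as failures.
        return False

snippet = "def solution(a, b):\n    return a + b\n"
print(passes_tests(snippet, [((1, 2), 3), ((0, 0), 0)]))  # True
```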

Self Check

Your code generation model scores 98% on BLEU but only 12% on pass@1. Is it good for production? Why or why not?

Answer: No. The high BLEU score only means the generated code is textually similar to the reference; the very low pass@1 means it rarely runs correctly on the first attempt. For production, functional correctness (pass@1) matters far more than surface similarity.

Key Result
Functional correctness metrics like pass@k are key to evaluating code generation quality, as similarity scores alone can be misleading.