Predict Output · Q4 of 15 · Medium
LangChain - Evaluation and Testing
Given this custom metric code snippet, what will metric.evaluate(['yes', 'no'], ['yes', 'yes']) return?

class SimpleMatchMetric(BaseEvalMetric):
    def evaluate(self, predictions, references):
        matches = sum(p == r for p, r in zip(predictions, references))
        return matches / len(references)
A. 2.0
B. 0.5
C. 0.0
D. 1.0
Step-by-Step Solution
  1. Count matches between predictions and references:
     Comparing pairs: 'yes' == 'yes' (match), 'no' == 'yes' (no match), so total matches = 1.
  2. Calculate the ratio of matches to total references:
     Matches = 1, total references = 2, so score = 1/2 = 0.5.
  3. Final Answer: 0.5 -> Option B
  4. Quick Check: match ratio = 1/2 = 0.5 [OK]
Quick Trick: count the matches, then divide by the total number of references.
Common Mistakes:
  • Counting total predictions instead of references
  • Returning count instead of ratio
  • Mixing up prediction and reference order
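The snippet can be verified end to end. Since BaseEvalMetric is not defined in the question, the sketch below stands in a minimal abstract base class of that name (an assumption, not the framework's actual class):

```python
from abc import ABC, abstractmethod

class BaseEvalMetric(ABC):
    # Hypothetical stand-in for the base class referenced by the snippet;
    # the real BaseEvalMetric is not shown in the question.
    @abstractmethod
    def evaluate(self, predictions, references):
        ...

class SimpleMatchMetric(BaseEvalMetric):
    def evaluate(self, predictions, references):
        # Count pairwise exact matches, then divide by the number of references.
        matches = sum(p == r for p, r in zip(predictions, references))
        return matches / len(references)

metric = SimpleMatchMetric()
print(metric.evaluate(['yes', 'no'], ['yes', 'yes']))  # → 0.5
```

Running this confirms the answer: 1 match out of 2 references gives 0.5 (Option B).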
