Predict Output, Q5 of 15 (medium)
LangChain - Evaluation and Testing
What will this custom metric return when evaluating metric.evaluate(['a', 'b', 'c'], ['a', 'b', 'd'])?

class CountErrorsMetric(BaseEvalMetric):
    def evaluate(self, predictions, references):
        errors = sum(p != r for p, r in zip(predictions, references))
        return errors
A. 1
B. 2
C. 3
D. 0
Step-by-Step Solution
  1. Compare each prediction with its reference: 'a'=='a' (no error), 'b'=='b' (no error), 'c'!='d' (error), so total errors = 1.
  2. Return the total error count: the method returns the sum of mismatches, which is 1.
  3. Final Answer: 1 -> Option A
  4. Quick Check: error count = 1 [OK]
Quick Trick: Count mismatches, not matches, to get the error count [OK]
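The walkthrough above can be verified by running the snippet directly. `BaseEvalMetric` here is a minimal hypothetical stub standing in for the quiz's base class:

```python
class BaseEvalMetric:
    """Hypothetical stub for the quiz's base class."""
    def evaluate(self, predictions, references):
        raise NotImplementedError

class CountErrorsMetric(BaseEvalMetric):
    def evaluate(self, predictions, references):
        # Count positions where the prediction differs from the reference.
        errors = sum(p != r for p, r in zip(predictions, references))
        return errors

metric = CountErrorsMetric()
print(metric.evaluate(['a', 'b', 'c'], ['a', 'b', 'd']))  # -> 1
```

The generator expression yields one boolean per aligned pair, and `sum` treats `True` as 1, so the result is the number of mismatched positions.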
Common Mistakes:
  • Counting matches instead of errors
  • Returning ratio instead of count
  • Mixing prediction and reference order
