Consider this LangChain evaluation pipeline code that runs a simple evaluation on a list of inputs.
from langchain.evaluation import EvaluationChain
from langchain.schema import Document

def simple_eval_fn(doc: Document) -> bool:
    return "good" in doc.page_content

eval_chain = EvaluationChain.from_llm(llm=None, evaluation_fn=simple_eval_fn)
inputs = [Document(page_content="This is a good example."), Document(page_content="This is bad.")]
results = [eval_chain.evaluate(input_doc) for input_doc in inputs]
print(results)

What will be printed?
Check what the evaluation function returns for each document.
The evaluation function returns True when the substring "good" appears in the document's page_content. The first document contains "good", so it evaluates to True; the second does not, so False. The printed output is [True, False].
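The check can be reproduced without the chain wrapper. A minimal sketch, using a stand-in Document class in place of langchain.schema.Document (only the page_content attribute matters here):

```python
from dataclasses import dataclass

# Stand-in for langchain.schema.Document; only page_content is needed.
@dataclass
class Document:
    page_content: str

def simple_eval_fn(doc: Document) -> bool:
    # True if the substring "good" appears anywhere in the content
    return "good" in doc.page_content

inputs = [Document(page_content="This is a good example."),
          Document(page_content="This is bad.")]
results = [simple_eval_fn(doc) for doc in inputs]
print(results)  # [True, False]
```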
You want to create an evaluation pipeline that uses a custom metric function returning a float score. Which code snippet is syntactically correct?
Remember how lambda functions return values in Python.
Option A correctly uses a lambda that returns 0.9. Option B has a syntax error because 'return' cannot be used inside a lambda. Options C and D return a set and a list, respectively, not a float.
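A quick illustration of the lambda rule (plain Python, independent of any LangChain API): a lambda's body is a single expression whose value is returned implicitly, so the return keyword is a SyntaxError inside one.

```python
# A lambda implicitly returns its single expression; no 'return' keyword allowed.
metric = lambda doc: 0.9
print(metric(None))  # 0.9

# The equivalent def form makes the return explicit:
def metric_fn(doc):
    return 0.9

# 'lambda doc: return 0.9' fails at compile time; demonstrate via ast.parse:
import ast
try:
    ast.parse("f = lambda doc: return 0.9")
except SyntaxError:
    print("SyntaxError: 'return' is not allowed in a lambda body")
```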
Given this code snippet:
from langchain.evaluation import EvaluationChain

class MyEval:
    def __call__(self, doc):
        return len(doc.page_content)

eval_chain = EvaluationChain.from_llm(llm=None, evaluation_fn=MyEval())
result = eval_chain.evaluate("Test input")
print(result)

Why does it raise an AttributeError?
Check the type of the argument passed to the evaluation function.
The evaluate method expects a Document object exposing a page_content attribute. A plain string has no such attribute, so the call to len(doc.page_content) raises an AttributeError.
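The failure mode can be seen without LangChain at all. A sketch with the same stand-in Document class (an assumption; the real langchain.schema.Document also exposes page_content):

```python
from dataclasses import dataclass

@dataclass
class Document:
    page_content: str

class MyEval:
    def __call__(self, doc):
        # Fails with AttributeError if doc has no page_content attribute
        return len(doc.page_content)

evaluator = MyEval()
print(evaluator(Document(page_content="Test input")))  # 10

try:
    evaluator("Test input")  # a plain str has no .page_content
except AttributeError as e:
    print(f"AttributeError: {e}")
```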
Consider this code:
from langchain.evaluation import EvaluationChain
from langchain.schema import Document

scores = []

def eval_fn(doc: Document) -> float:
    score = len(doc.page_content) / 10
    scores.append(score)
    return score

eval_chain = EvaluationChain.from_llm(llm=None, evaluation_fn=eval_fn)
inputs = [Document(page_content="Hello world!"), Document(page_content="LangChain")]
results = [eval_chain.evaluate(doc) for doc in inputs]

What is the final value of the list scores?
Count the characters in each string and divide by 10.
"Hello world!" has 12 characters, so 12/10 = 1.2. "LangChain" has 9 characters, so 9/10 = 0.9.
In LangChain's EvaluationChain, what is the primary purpose of the evaluation_fn parameter?
Think about what evaluation means in this context.
The evaluation_fn is a user-supplied callable that receives a Document and returns a score or boolean indicating how well the document meets the evaluation criteria.
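The pattern can be sketched in a few lines. This SimpleEvalChain is a hypothetical illustration of the idea, not LangChain's actual implementation: the chain stores the user-supplied evaluation_fn and applies it to each Document it is asked to evaluate.

```python
from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class Document:
    page_content: str

class SimpleEvalChain:
    """Hypothetical sketch: holds an evaluation_fn and applies it per document."""

    def __init__(self, evaluation_fn: Callable[[Document], Union[bool, float]]):
        self.evaluation_fn = evaluation_fn

    def evaluate(self, doc: Document) -> Union[bool, float]:
        # Delegate scoring entirely to the user-supplied callable
        return self.evaluation_fn(doc)

chain = SimpleEvalChain(lambda doc: "good" in doc.page_content)
print(chain.evaluate(Document(page_content="a good one")))  # True
```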