LangChain framework · ~10 mins

Automated evaluation pipelines in LangChain - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)

Complete the code to import the LangChain evaluation module.

from langchain.evaluation import [1]
A. load_evaluator
B. load_evaluator_chain
C. load_evaluation
D. load_chain
Common Mistakes
Using 'load_chain' which loads general chains, not evaluators.
Using 'load_evaluation' which is not a valid function.
2. Fill in the blank (medium)

Complete the code to create an evaluator with a language model. (Note: per question 1, the loader function is load_evaluator; it takes an evaluator type such as "qa" plus the LLM.)

evaluator = load_evaluator("qa", llm=[1])
A. ChatOpenAI()
B. GPT4All()
C. OpenAI()
D. TextLLM()
Common Mistakes
Using OpenAI() which is not chat-based.
Using undefined models like TextLLM().
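Putting questions 1 and 2 together, the completed setup would look like the sketch below. The "qa" evaluator type and the langchain_openai import path are assumptions about the intended configuration, and ChatOpenAI needs an OPENAI_API_KEY, so the sketch guards the setup rather than assuming LangChain is installed:

```python
# Completed setup from questions 1-2. The "qa" evaluator type and the
# langchain_openai import path are assumptions; ChatOpenAI also needs
# an API key, so the sketch degrades gracefully when either the
# packages or the key are missing.
try:
    from langchain.evaluation import load_evaluator
    from langchain_openai import ChatOpenAI

    evaluator = load_evaluator("qa", llm=ChatOpenAI())
    ready = True
except Exception:  # ImportError, missing API key, etc.
    evaluator = None
    ready = False

print("evaluator ready:", ready)
```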
3. Fill in the blank (hard)

Fix the error in the code to run the evaluation chain on predictions and references.

result = evaluator.evaluate(predictions=[1], references=references)
A. outputs
B. preds
C. predictions
D. answers
Common Mistakes
Using variable names that don't match the parameter name.
Passing undefined variables.
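The corrected call in question 3 passes a variable whose name matches what the code actually defined. The sketch below substitutes a hypothetical stub evaluator (exact-match scoring) for a real LangChain chain, purely to show the call shape and the per-input result dicts:

```python
class StubEvaluator:
    """Hypothetical stand-in for a LangChain evaluator chain: scores a
    prediction 1 if it exactly matches its reference, else 0."""

    def evaluate(self, predictions, references):
        return {
            pred: {"score": int(pred == ref)}
            for pred, ref in zip(predictions, references)
        }

evaluator = StubEvaluator()
predictions = ["Paris", "Berlin"]
references = ["Paris", "Munich"]

# Question 3's corrected line: the `predictions` variable fills the blank.
result = evaluator.evaluate(predictions=predictions, references=references)
print(result)
```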
4. Fill in the blank (hard)

Fill both blanks to create a dictionary comprehension that maps inputs to their evaluation scores.

scores = {input_text: result['[1]'] for input_text, result in [2].items()}
A. score
B. accuracy
C. evaluations
D. results
Common Mistakes
Using wrong keys like 'accuracy' which may not exist.
Using incorrect variable names for the dictionary.
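The comprehension in question 4 reads the 'score' key out of each per-input result dict. A pure-Python sketch with made-up results data, shaped the way the quiz assumes:

```python
# Made-up evaluation results, one {'score': ...} entry per input text,
# mirroring the `results` dict the quiz assumes exists.
results = {
    "What is 2 + 2?": {"score": 1},
    "Capital of France?": {"score": 0},
}

# Question 4 completed: the key 'score' fills blank [1], `results` fills [2].
scores = {input_text: result["score"] for input_text, result in results.items()}
print(scores)
```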
5. Fill in the blank (hard)

Fill all three blanks to define a function that runs evaluation and returns the score for each input.

def run_evaluation(data):
    evaluator = load_evaluator("qa", llm=[1])
    results = evaluator.evaluate(predictions=data['[2]'], references=data['[3]'])
    return {k: v['score'] for k, v in results.items()}
A. ChatOpenAI()
B. predictions
C. references
D. OpenAI()
Common Mistakes
Using wrong model instances.
Mixing up the keys for predictions and references.
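Question 5's function, completed and exercised with the same kind of hypothetical stub evaluator as above (the stub replaces load_evaluator and ChatOpenAI so the sketch runs without LangChain or an API key):

```python
class StubEvaluator:
    """Hypothetical stand-in for an LLM-backed LangChain evaluator;
    scores each prediction by exact match against its reference."""

    def evaluate(self, predictions, references):
        return {
            pred: {"score": int(pred == ref)}
            for pred, ref in zip(predictions, references)
        }

def run_evaluation(data, evaluator=None):
    # Question 5 completed: blank [1] is the LLM-backed evaluator
    # (stubbed here), blank [2] is 'predictions', blank [3] is 'references'.
    evaluator = evaluator or StubEvaluator()
    results = evaluator.evaluate(
        predictions=data["predictions"], references=data["references"]
    )
    return {k: v["score"] for k, v in results.items()}

data = {"predictions": ["Paris", "Berlin"], "references": ["Paris", "Munich"]}
print(run_evaluation(data))
```

Swapping the keys for predictions and references (the second common mistake) would silently flip what gets scored, which is why the function indexes `data` by name rather than by position.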