LangChain framework · ~10 mins

Creating evaluation datasets in LangChain - Visual Walkthrough

Concept Flow - Creating evaluation datasets
Define dataset structure
Load raw data source
Process and clean data
Split data into train/test/eval
Format data for evaluation
Save or return evaluation dataset
This flow shows how to create an evaluation dataset by defining, loading, processing, splitting, formatting, and saving data.
Execution Sample
LangChain
# Illustrative pseudocode: Dataset and the method names below sketch the
# workflow rather than a literal LangChain API.
from langchain.evaluation import Dataset

raw_data = load_data()                                        # load raw records
dataset = Dataset.from_list(raw_data)                         # wrap them in a Dataset
train_set, test_set, eval_set = dataset.split(0.8, 0.1, 0.1)  # 80/10/10 split
formatted_eval = eval_set.format_for_evaluation()             # format for evaluation
This code loads raw data, creates a Dataset, splits it, and formats it for evaluation.
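The split(0.8, 0.1, 0.1) call above can be sketched in plain Python. split_dataset below is a hypothetical helper written for illustration, not a real LangChain function:

```python
import random

def split_dataset(items, train=0.8, test=0.1, evaluation=0.1, seed=42):
    """Shuffle items and cut them into train/test/eval subsets by ratio.

    Hypothetical helper mirroring dataset.split(0.8, 0.1, 0.1) in the
    sample above; not part of the LangChain API.
    """
    assert abs(train + test + evaluation - 1.0) < 1e-9, "ratios must sum to 1"
    shuffled = items[:]
    random.Random(seed).shuffle(shuffled)   # fixed seed for reproducibility
    n = len(shuffled)
    n_train = int(n * train)
    n_test = int(n * test)
    return (
        shuffled[:n_train],                    # training subset
        shuffled[n_train:n_train + n_test],    # testing subset
        shuffled[n_train + n_test:],           # evaluation subset
    )

items = [{"input": f"q{i}", "output": f"a{i}"} for i in range(100)]
train_set, test_set, eval_set = split_dataset(items)
print(len(train_set), len(test_set), len(eval_set))  # 80 10 10
```

Shuffling before cutting matters: if the source data is ordered (say, by topic), unshuffled slices would give the eval subset a skewed distribution.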
Execution Table
Step | Action | Input | Output | Notes
1 | Call load_data() | None | List of raw data items | Raw data loaded from source
2 | Create Dataset from list | Raw data list | Dataset object with all data | Dataset initialized
3 | Split dataset | Dataset object | Train, Test, Eval subsets | Split ratios 80% / 10% / 10%
4 | Format eval subset | Eval subset | Formatted evaluation data | Ready for evaluation use
5 | Return formatted eval data | Formatted data | Evaluation dataset output | Process complete
💡 All steps complete, evaluation dataset ready for use
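Step 4's formatting can be sketched as a plain-Python transform. format_for_evaluation and the inputs/reference schema here are assumptions for illustration; real evaluators define their own expected shape:

```python
def format_for_evaluation(eval_subset):
    """Map raw records into an {inputs, reference} shape an evaluator
    might expect (hypothetical schema, for illustration only)."""
    formatted = []
    for record in eval_subset:
        formatted.append({
            "inputs": {"question": record["input"]},   # what the model is asked
            "reference": record["output"],             # the expected answer
        })
    return formatted

eval_set = [{"input": "What is 2+2?", "output": "4"}]
formatted_eval = format_for_evaluation(eval_set)
print(formatted_eval[0]["reference"])  # 4
```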
Variable Tracker
Variable | Start | After Step 1 | After Step 2 | After Step 3 | After Step 4 | Final
raw_data | None | List of raw items | List of raw items | List of raw items | List of raw items | List of raw items
dataset | None | None | Dataset object | Dataset object | Dataset object | Dataset object
train_set | None | None | None | Train subset | Train subset | Train subset
test_set | None | None | None | Test subset | Test subset | Test subset
eval_set | None | None | None | Eval subset | Eval subset | Eval subset
formatted_eval | None | None | None | None | Formatted eval data | Formatted eval data
Key Moments - 3 Insights
Why do we split the dataset into train, test, and eval parts?
Splitting ensures we train on one part, test on another, and evaluate on a separate set, so performance is measured fairly, as shown in step 3 of the Execution Table.
What does formatting the evaluation data do?
Formatting prepares the data in a way the evaluation tools expect, making it usable for scoring or comparison, as seen in step 4.
Can we create an evaluation dataset without cleaning or processing raw data?
Skipping processing may cause errors or poor evaluation quality. Processing ensures data is consistent and clean before splitting and formatting.
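A minimal cleaning pass of the kind the last insight recommends might look like this. The field names and rules are assumptions; real cleaning depends on the data source:

```python
def clean_records(raw_records):
    """Drop incomplete or duplicate records and normalize whitespace.

    Illustrative only; the "input"/"output" keys are assumed field names.
    """
    seen = set()
    cleaned = []
    for record in raw_records:
        text = (record.get("input") or "").strip()
        answer = (record.get("output") or "").strip()
        if not text or not answer:       # skip incomplete rows
            continue
        key = (text, answer)
        if key in seen:                  # skip exact duplicates
            continue
        seen.add(key)
        cleaned.append({"input": text, "output": answer})
    return cleaned

raw = [{"input": " hi ", "output": "hello"},
       {"input": "hi", "output": "hello"},   # duplicate after stripping
       {"input": "", "output": "x"}]         # incomplete row
print(len(clean_records(raw)))  # 1
```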
Visual Quiz - 3 Questions
Test your understanding
Looking at the Execution Table, what is the output after step 3?
A. A single Dataset object with all data
B. Train, Test, Eval subsets
C. Formatted evaluation data
D. Raw data list
💡 Hint
Check the Output column for step 3 in the Execution Table
According to the Variable Tracker, when does 'formatted_eval' get its value?
A. After Step 2
B. After Step 3
C. After Step 4
D. At Start
💡 Hint
Look at the 'formatted_eval' row and see when it changes from None
If we skip splitting the dataset, how would the Execution Table change?
A. Step 3 would be missing or would output the full dataset
B. Step 4 would not exist
C. Step 3 would output formatted evaluation data
D. Step 1 would fail
💡 Hint
Consider what splitting does at step 3 and what happens if it's skipped
Concept Snapshot
Creating evaluation datasets in LangChain:
1. Load raw data
2. Create Dataset object
3. Split into train/test/eval
4. Format eval subset
5. Use formatted data for evaluation
Splitting ensures fair testing and evaluation.
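The five snapshot steps can be strung together in one self-contained sketch. All helpers here are hypothetical stand-ins for whatever loading, splitting, and formatting your project actually uses:

```python
import random

def load_data():
    # Stand-in for a real data source (file, database, or API).
    return [{"input": f"question {i}", "output": f"answer {i}"} for i in range(50)]

def build_eval_dataset(train=0.8, test=0.1, seed=0):
    records = load_data()                          # 1. load raw data
    dataset = [dict(r) for r in records]           # 2. copy into a working dataset
    random.Random(seed).shuffle(dataset)
    n_train = int(len(dataset) * train)
    n_test = int(len(dataset) * test)
    train_set = dataset[:n_train]                  # 3. split train/test/eval
    test_set = dataset[n_train:n_train + n_test]
    eval_set = dataset[n_train + n_test:]
    formatted_eval = [                             # 4. format the eval subset
        {"inputs": {"question": r["input"]}, "reference": r["output"]}
        for r in eval_set
    ]
    return train_set, test_set, formatted_eval     # 5. ready for evaluation

train_set, test_set, formatted_eval = build_eval_dataset()
print(len(train_set), len(test_set), len(formatted_eval))  # 40 5 5
```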
Full Transcript
Creating evaluation datasets involves loading raw data, wrapping it in a Dataset object, splitting it into training, testing, and evaluation parts, then formatting the evaluation subset for use. This process helps measure model performance fairly by separating data for training and evaluation. The key steps include loading data, splitting with defined ratios, and formatting for evaluation tools. Variables like raw_data, dataset, and formatted_eval change state as the process moves forward. Understanding why splitting and formatting happen helps avoid confusion and ensures good evaluation results.