LangChain framework · ~10 mins

Context formatting and injection in LangChain - Step-by-Step Execution

Concept Flow - Context formatting and injection
Start with raw context data
Format context into template
Inject formatted context into prompt
Send prompt to language model
Receive and process model output
This flow shows how raw context is first formatted into a template, then injected into a prompt before sending to the language model.
Execution Sample
LangChain
from langchain.prompts import PromptTemplate  # langchain_core.prompts in newer releases

context = "The sky is blue."  # raw context data
template = "Context: {context}\nQuestion: What color is the sky?"  # template with a placeholder
prompt = PromptTemplate(input_variables=["context"], template=template)
formatted_prompt = prompt.format(context=context)  # inject the context into the placeholder
This code formats a context string into a prompt template by injecting the context into the placeholder.
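Under the hood, PromptTemplate's format() behaves like Python's built-in str.format, so the injection step can be sketched without any LangChain dependency:

```python
# A template string with a {context} placeholder, as in the sample above
template = "Context: {context}\nQuestion: What color is the sky?"

# Injection: substitute the raw context into the placeholder
formatted_prompt = template.format(context="The sky is blue.")

print(formatted_prompt)
# Context: The sky is blue.
# Question: What color is the sky?
```

The resulting string is exactly what step 4 of the execution table shows as its output.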
Execution Table
Step | Action | Input | Output | Notes
1 | Receive raw context | "The sky is blue." | "The sky is blue." | Raw context string stored
2 | Define template | "Context: {context}\nQuestion: What color is the sky?" | Template with placeholder | Template ready for formatting
3 | Create PromptTemplate | input_variables=["context"], template=template | PromptTemplate object | Setup for formatting
4 | Format prompt | context="The sky is blue." | "Context: The sky is blue.\nQuestion: What color is the sky?" | Context injected into template
5 | Send prompt to model | Formatted prompt string | Model output (e.g. "The sky is blue.") | Model processes prompt
6 | Process output | Model output | Final answer | Use output as needed
7 | End | | | Execution complete
💡 All steps complete; prompt formatted and sent to model, output processed.
Variable Tracker
Variable | Start | After Step 1 | After Step 4 | Final
context | None | "The sky is blue." | "The sky is blue." | "The sky is blue."
template | None | None | "Context: {context}\nQuestion: What color is the sky?" | "Context: {context}\nQuestion: What color is the sky?"
formatted_prompt | None | None | "Context: The sky is blue.\nQuestion: What color is the sky?" | "Context: The sky is blue.\nQuestion: What color is the sky?"
Key Moments - 2 Insights
Why do we need to format the context before sending it to the model?
Because the model expects a complete prompt string. Formatting injects the context into the template to create that prompt, as shown in step 4 of the execution table.
What happens if the context variable is missing when formatting?
The formatting fails with a KeyError because the placeholder {context} has no value to fill it, as seen in step 4, where the context is required.
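This failure mode is easy to demonstrate with plain str.format, which PromptTemplate delegates to; calling format() without supplying the placeholder raises a KeyError:

```python
template = "Context: {context}\nQuestion: What color is the sky?"

try:
    template.format()  # no value supplied for {context}
    error = None
except KeyError as exc:
    error = exc  # format() refuses to leave the placeholder unfilled

print(f"Missing placeholder: {error}")
```

PromptTemplate catches this class of mistake even earlier: it validates at construction time that input_variables matches the placeholders in the template.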
Visual Quiz - 3 Questions
Test your understanding
Looking at the execution table, what is the output after step 4?
A. "Context: The sky is blue.\nQuestion: What color is the sky?"
B. "The sky is blue."
C. PromptTemplate object
D. Model output
💡 Hint
Check the 'Output' column for step 4 in the execution table.
At which step is the context injected into the template?
A. Step 2
B. Step 4
C. Step 3
D. Step 5
💡 Hint
Look for the step where formatting happens in the execution table.
If the context changes to "Grass is green.", which variables change in the variable tracker?
A. template
B. formatted_prompt
C. Both B and D
D. context
💡 Hint
See how context and formatted_prompt depend on the context value in the variable tracker.
Concept Snapshot
Context formatting and injection:
- Define a template with placeholders like {context}
- Use PromptTemplate to prepare formatting
- Inject actual context into template with format()
- Send formatted prompt to language model
- Receive and use model output
Key: context must be provided to fill placeholders
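The five snapshot steps can be sketched end to end as a single function. This is a minimal, dependency-free sketch: run_pipeline and fake_model are illustrative names, and the model call is stubbed out where a real LLM invocation (e.g. llm.invoke(prompt)) would go.

```python
def run_pipeline(context: str, question: str) -> str:
    # Steps 1-2: raw context plus a template with placeholders
    template = "Context: {context}\nQuestion: {question}"

    # Step 3-4: inject the actual values into the template
    formatted_prompt = template.format(context=context, question=question)

    # Step 5: send the formatted prompt to a model (stubbed here);
    # this fake model simply echoes the context line back as its answer
    def fake_model(prompt: str) -> str:
        return prompt.splitlines()[0].removeprefix("Context: ")

    # Step 6: receive and use the model output
    return fake_model(formatted_prompt)

answer = run_pipeline("The sky is blue.", "What color is the sky?")
print(answer)  # The sky is blue.
```

Swapping fake_model for a real LangChain model object leaves the formatting-and-injection steps unchanged, which is the point of the pattern.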
Full Transcript
This visual execution shows how context formatting and injection works in LangChain. First, raw context data is received. Then a template string with placeholders is defined. A PromptTemplate object is created from the template and its input variables. Next, the context is injected into the template using the format method, producing a complete prompt string. This prompt is sent to the language model, which returns an output. Finally, the output is processed for use. Variables like context, template, and formatted_prompt change values as the steps progress. Key points are the necessity of formatting before sending to the model and ensuring every placeholder has a value to avoid errors.