LangChain framework · ~10 mins

ChatPromptTemplate for conversations in LangChain - Step-by-Step Execution

Concept Flow - ChatPromptTemplate for conversations
Define Template Variables
Create ChatPromptTemplate
Input User Data
Format Prompt with Variables
Send Prompt to Language Model
Receive and Use Response
This flow shows how you define variables, create a ChatPromptTemplate, input data, format the prompt, send it to the model, and get a response.
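The flow above can be sketched in plain Python, without LangChain, to show what template formatting does under the hood. The helper name and the dict shapes here are illustrative stand-ins, not part of any library:

def format_chat_prompt(name):
    """Fill the {name} placeholder and return a list of role-tagged messages."""
    system_template = "You are a helpful assistant."   # fixed, no variables
    human_template = "Hello, {name}!"                  # contains a placeholder
    return [
        {"role": "system", "content": system_template},
        {"role": "human", "content": human_template.format(name=name)},
    ]

# Input user data, format the prompt, and inspect the result.
messages = format_chat_prompt("Alice")
print(messages[1]["content"])  # Hello, Alice!

The real ChatPromptTemplate adds validation and typed message objects on top of this basic substitute-and-collect pattern.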
Execution Sample
LangChain
# Recent LangChain versions expose prompt classes via langchain_core;
# `from langchain.prompts import ...` still works in older releases.
from langchain_core.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

# Combine a fixed system message with a human message containing a placeholder.
chat_prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("You are a helpful assistant."),
    HumanMessagePromptTemplate.from_template("Hello, {name}!"),
])

# Fill the {name} placeholder, then convert to a list of message objects.
formatted = chat_prompt.format_prompt(name="Alice")
print(formatted.to_messages())
This code creates a chat prompt template with a system and human message, fills in the name variable, and prints the formatted messages.
Execution Table
Step | Action | Input/Variable | Result/Output
1 | Define system message template | Template: 'You are a helpful assistant.' | SystemMessagePromptTemplate created
2 | Define human message template | Template: 'Hello, {name}!' | HumanMessagePromptTemplate created
3 | Create ChatPromptTemplate | Messages: [system, human] | ChatPromptTemplate instance created
4 | Format prompt with name='Alice' | name='Alice' | Prompt filled with 'Hello, Alice!'
5 | Convert formatted prompt to messages | Formatted prompt | List of messages ready to send to model
6 | Print messages | Messages list | [SystemMessage('You are a helpful assistant.'), HumanMessage('Hello, Alice!')]
💡 All steps complete; the prompt is formatted and ready for model input
Variable Tracker
Variable | Start | After Step 4 | Final
chat_prompt | None | ChatPromptTemplate instance | ChatPromptTemplate instance
name | None | "Alice" | "Alice"
formatted | None | ChatPromptValue with messages | ChatPromptValue with messages
messages | None | None | [SystemMessage, HumanMessage with 'Alice']
Key Moments - 3 Insights
Why do we use {name} inside the human message template?
The {name} is a placeholder variable that gets replaced with actual data ('Alice') during formatting, as shown in step 4 of the execution table.
What does format_prompt() do exactly?
format_prompt() fills every template variable with the provided values and prepares the messages for the language model, as seen in steps 4 and 5.
Why do we convert the formatted prompt to messages before sending?
The language model expects a list of message objects, so formatted.to_messages() converts the filled template into that format, shown in step 5.
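To see why a list of typed messages matters, here is a simplified sketch using stand-in dataclasses; they model the shape of LangChain's message objects but are not the real classes:

from dataclasses import dataclass

# Simplified stand-ins for LangChain's message classes, to show why a chat
# model needs a list of typed messages rather than one concatenated string.

@dataclass
class SystemMessage:
    content: str

@dataclass
class HumanMessage:
    content: str

# The roles stay separate: instructions go in the system message,
# user input goes in the human message.
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Hello, Alice!"),
]

for msg in messages:
    print(type(msg).__name__, "->", msg.content)

Keeping roles distinct lets the model weigh instructions differently from user input, which is why to_messages() produces objects rather than a single string.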
Visual Quiz - 3 Questions
Test your understanding
Looking at the execution table, what is the output after step 4?
A) ChatPromptTemplate instance created
B) Prompt filled with 'Hello, Alice!'
C) List of messages ready to send
D) SystemMessagePromptTemplate created
💡 Hint
Check the 'Result/Output' column for step 4 in the execution table
At which step do we create the ChatPromptTemplate instance?
A) Step 3
B) Step 2
C) Step 4
D) Step 5
💡 Hint
Look for 'ChatPromptTemplate instance created' in the execution table
If we change the name variable to 'Bob', what changes in the execution table?
A) Step 5 no longer converts to messages
B) Step 3 creates a different template
C) Step 4 output changes to 'Hello, Bob!'
D) Step 1 system message changes
💡 Hint
The 'name' variable affects the formatted prompt output at step 4
Concept Snapshot
ChatPromptTemplate lets you create chat message templates with placeholders.
Use from_messages() to combine system and human templates.
Fill variables with format_prompt() to get ready-to-send messages.
Convert with to_messages() before sending to the language model.
This helps keep conversations dynamic and reusable.
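The reusability point can be shown in miniature: one template, many conversations. Plain str.format stands in for ChatPromptTemplate.format_prompt in this sketch:

# One human-message template, reused across several users.
human_template = "Hello, {name}!"

greetings = [human_template.format(name=n) for n in ("Alice", "Bob", "Carol")]
print(greetings)  # ['Hello, Alice!', 'Hello, Bob!', 'Hello, Carol!']

With the real ChatPromptTemplate, the same reuse works by calling format_prompt() repeatedly with different variable values.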
Full Transcript
This visual execution trace shows how to use ChatPromptTemplate in LangChain for conversations. First, you define message templates with placeholders like {name}. Then you create a ChatPromptTemplate combining these messages. When you have actual data, you call format_prompt() with variables to fill the placeholders. This produces a formatted prompt object. You convert it to a list of messages with to_messages(), which you can send to a language model. The execution table walks through each step, showing how templates become ready-to-send messages. The variable tracker shows how variables like 'name' change from None to 'Alice'. Key moments clarify why placeholders are used and how formatting works. The quiz tests understanding of each step and of variable effects. This approach keeps chat prompts flexible and easy to reuse.