# Concept Flow: What is LangChain?
1. User inputs text
2. LangChain processes the input
3. LangChain calls the language model
4. LangChain processes the model output
5. LangChain returns the final answer
LangChain takes user text, sends it to a language model, processes the response, and returns an answer.
```python
from langchain import LLMChain, PromptTemplate

# `llm` must be an initialized language model instance (e.g. OpenAI()).
prompt = PromptTemplate(input_variables=["name"], template="Hello {name}!")
chain = LLMChain(llm=llm, prompt=prompt)
result = chain.run("Alice")
```
| Step | Action | Input | Output | Notes |
|---|---|---|---|---|
| 1 | Create PromptTemplate | Template string 'Hello {name}!' | PromptTemplate object | Prepare prompt with placeholder |
| 2 | Create LLMChain | llm, prompt | Chain object ready | Chain links prompt and model |
| 3 | Run chain | Input: 'Alice' | Prompt: 'Hello Alice!' | Input fills prompt variable |
| 4 | Call LLM | Prompt: 'Hello Alice!' | Model generates response | Model processes prompt |
| 5 | Process output | Model response | Final result string | Chain returns answer |
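The variable substitution in steps 1 and 3 can be sketched in plain Python, with no LangChain dependency. Here `template` stands in for the PromptTemplate's template string, and `str.format` plays the role of the chain filling the `{name}` placeholder:

```python
# Step 1: a template string with a {name} placeholder.
template = "Hello {name}!"

# Step 3: the run-time input fills the placeholder,
# producing the prompt that will be sent to the model.
filled_prompt = template.format(name="Alice")
print(filled_prompt)  # → Hello Alice!
```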
| Variable | Start | After Step 1 | After Step 2 | After Step 3 | After Step 4 | Final |
|---|---|---|---|---|---|---|
| prompt | None | PromptTemplate object | PromptTemplate object | PromptTemplate object | PromptTemplate object | PromptTemplate object |
| chain | None | None | LLMChain object | LLMChain object | LLMChain object | LLMChain object |
| result | None | None | None | None | Model response string | Final result string |
LangChain connects your text input to a language model through prompts: you create a PromptTemplate with named variables, an LLMChain links that prompt to the model, and running the chain with an input returns the model's output. This pattern makes it straightforward to build applications on top of language models.
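The whole five-step flow above can be mimicked in a few lines of plain Python. `FakeLLM` and `SimpleChain` are hypothetical stand-ins (not real LangChain classes): the fake model just echoes its prompt, and the chain links a template to the model the way LLMChain does:

```python
class FakeLLM:
    """Hypothetical pretend model: echoes the prompt it receives."""
    def __call__(self, prompt: str) -> str:
        return f"(model saw: {prompt})"

class SimpleChain:
    """Minimal stand-in for LLMChain: links a template to a model."""
    def __init__(self, llm, template: str):
        self.llm = llm
        self.template = template

    def run(self, **variables) -> str:
        prompt = self.template.format(**variables)  # fill the placeholder
        return self.llm(prompt)                     # call the model

chain = SimpleChain(FakeLLM(), "Hello {name}!")
print(chain.run(name="Alice"))  # → (model saw: Hello Alice!)
```

Swapping `FakeLLM` for a real model is the only change needed to turn this conceptual sketch into the actual LangChain flow shown earlier.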