LangChain · How-To · Beginner · 3 min read

How to Use LLMChain in LangChain: Simple Guide

To use LLMChain in LangChain, create an instance with a language model and a prompt template, then call it with input data to get the generated output. The chain connects your prompt and model so you can produce text responses with minimal code.
📝

Syntax

An LLMChain requires two main parts: a language model (llm) and a prompt template (prompt). Create it by passing both to LLMChain, then call the run() method with your input variables to get the output.

  • llm: The language model instance (e.g., OpenAI).
  • prompt: A prompt template with placeholders for inputs.
  • run(): Method to execute the chain with input data.
```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["name"],
    template="Hello, {name}!"
)

llm = OpenAI()

chain = LLMChain(llm=llm, prompt=prompt)

result = chain.run(name="Alice")
```
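Conceptually, the chain just fills the template's placeholders and forwards the finished prompt to the model. Here is a minimal plain-Python sketch of that flow (illustrative only, not LangChain's actual implementation; `fake_llm` is a stand-in for a real model):

```python
# Illustrative sketch only (not LangChain internals): an LLMChain formats
# the prompt with your inputs, then passes the finished text to the model.

def fake_llm(prompt_text):
    # Stand-in for a real model such as OpenAI(); returns a canned reply.
    return f"[model reply to: {prompt_text}]"

TEMPLATE = "Hello, {name}!"

def run_chain(**inputs):
    # Step 1: fill the template's placeholders with the input variables.
    prompt_text = TEMPLATE.format(**inputs)
    # Step 2: send the completed prompt to the language model.
    return fake_llm(prompt_text)

print(run_chain(name="Alice"))  # [model reply to: Hello, Alice!]
```

The real LLMChain adds validation, callbacks, and output parsing on top, but the prompt-then-model pipeline is the core idea.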
💻

Example

This example shows how to create an LLMChain that greets a user by name. It uses OpenAI's model and a prompt template with a placeholder. When run with a name, it returns a greeting message.

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Define the prompt template with a placeholder for 'name'
prompt = PromptTemplate(
    input_variables=["name"],
    template="Hello, {name}! How can I help you today?"
)

# Initialize the OpenAI language model
llm = OpenAI()

# Create the LLMChain with the model and prompt
chain = LLMChain(llm=llm, prompt=prompt)

# Run the chain with input data
output = chain.run(name="Bob")
print(output)
```
Output
Hello, Bob! How can I help you today?
⚠️

Common Pitfalls

  • Mismatched input variable names between the prompt template and the run() call cause errors.
  • Forgetting to initialize the language model before passing it to LLMChain.
  • Using an incomplete prompt template without placeholders for all inputs.
  • Calling run() without required input arguments.

Always ensure your prompt variables and input keys match exactly.

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Wrong: input variable 'username' does not match 'name' in the template
prompt_wrong = PromptTemplate(
    input_variables=["name"],
    template="Hello, {name}!"
)

llm = OpenAI()
chain = LLMChain(llm=llm, prompt=prompt_wrong)

# This will raise an error because 'username' is not defined in the prompt
# output = chain.run(username="Alice")  # Wrong usage

# Correct usage:
output = chain.run(name="Alice")  # Matches the prompt variable
print(output)
```
Output
Hello, Alice!
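The mechanics behind this pitfall are easy to see in plain Python: placeholder filling is keyword-based, so a key that does not match the template's placeholder fails immediately. This sketch uses str.format directly rather than LangChain, but the matching rule is the same:

```python
# Why mismatched names fail: the template can only be filled by keys that
# match its placeholders, so a wrong key raises an error right away.
template = "Hello, {name}!"

try:
    template.format(username="Alice")   # wrong key, like run(username=...)
except KeyError as e:
    print(f"Missing variable: {e}")     # Missing variable: 'name'

print(template.format(name="Alice"))    # Hello, Alice!
```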
📊

Quick Reference

Remember these key points when using LLMChain:

  • Initialize your language model (e.g., OpenAI()).
  • Create a PromptTemplate with named input variables.
  • Pass both to LLMChain.
  • Call run() with matching input arguments.
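To catch mismatches before calling the chain, you can list a template's placeholders with the standard library's string.Formatter and compare them against your inputs. This helper is hypothetical (not part of LangChain), shown here with a two-variable template:

```python
from string import Formatter

# Hypothetical helper (not part of LangChain): collect a template's
# placeholder names so they can be checked against your run() arguments.
def placeholder_names(template):
    return {field for _, field, _, _ in Formatter().parse(template) if field}

template = "Hello, {name}! Tell me about {topic}."
inputs = {"name": "Bob", "topic": "chains"}

missing = placeholder_names(template) - inputs.keys()
print(missing or "all placeholders supplied")  # all placeholders supplied
```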
✅

Key Takeaways

  • LLMChain connects a language model and a prompt template to generate text.
  • Input variable names in the prompt and the run() call must match exactly.
  • Always initialize the language model before creating the chain.
  • Use run() with the required inputs to get the generated output.
  • PromptTemplate defines how inputs are inserted into the prompt text.