
How to Use an LLM in LangChain: Simple Guide with Examples

To use an LLM in LangChain, import the LLM class you want (like OpenAI), create an instance with your API key, and call it with a prompt string. LangChain handles sending your prompt to the model and returning the generated text.
📝

Syntax

Using an LLM in LangChain involves these steps:

  • Import the LLM class you want (e.g., OpenAI).
  • Create an instance with your API key and optional settings.
  • Call the instance with a prompt string to get the generated text.

This pattern lets you swap between different language model providers with minimal code changes.

python
from langchain.llms import OpenAI  # in LangChain 0.1+: from langchain_openai import OpenAI

# Prefer loading the key from an environment variable rather than hardcoding it
llm = OpenAI(openai_api_key="your_api_key")

response = llm("Hello, how are you?")  # in LangChain 0.1+: llm.invoke("Hello, how are you?")
print(response)
💻

Example

This example shows how to use the OpenAI LLM in LangChain to generate a simple greeting response.

python
from langchain.llms import OpenAI

# Create an LLM instance with your OpenAI API key
llm = OpenAI(openai_api_key="your_api_key")

# Call the LLM with a prompt
response = llm("Write a friendly greeting message.")

# Print the generated text
print(response)
Output
Hello! I hope you're having a wonderful day. How can I assist you today?
⚠️

Common Pitfalls

Common mistakes when using an LLM in LangChain include:

  • Not setting the API key correctly, causing authentication errors.
  • Passing non-string inputs as prompts, which raises errors.
  • Forgetting to install required packages or set environment variables.
  • Expecting instant responses; calls go over the network and can be slow or fail.

Always check your API key and input types before calling the LLM.

python
from langchain.llms import OpenAI

# Wrong: missing API key
# llm = OpenAI()

# Right: provide API key
llm = OpenAI(openai_api_key="your_api_key")

# Wrong: passing non-string prompt
# response = llm(12345)  # This will cause an error

# Right: pass string prompt
response = llm("Tell me a joke.")
print(response)
📊

Quick Reference

Remember these tips when using an LLM in LangChain:

  • Always import the correct LLM class for your provider.
  • Set your API key securely before creating the LLM instance.
  • Pass prompts as strings to get valid responses.
  • Use print() to see the output text.
  • Handle exceptions for network or API errors gracefully.
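The last two tips, secure key handling and graceful error handling, can be sketched together. Here `call_llm` is a hypothetical stand-in for the real `llm(...)` call so the snippet runs without network access; in real code, the same try/except would wrap the actual LLM call:

```python
import os


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for llm(prompt) so this sketch runs offline."""
    if not isinstance(prompt, str):
        raise TypeError("prompt must be a string")
    return "(model reply to: " + prompt + ")"


# Read the key from the environment instead of hardcoding it in source files.
api_key = os.environ.get("OPENAI_API_KEY", "")

try:
    response = call_llm("Tell me a joke.")
    print(response)
except (TypeError, ConnectionError) as err:
    # Real LLM calls can also fail on auth or network problems.
    print(f"LLM call failed: {err}")
```

Wrapping the call keeps one bad prompt or a dropped connection from crashing the whole program.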
✅

Key Takeaways

  • Import and instantiate the LLM class with your API key before use.
  • Always pass prompt text as a string to the LLM instance.
  • Check your API key and environment setup to avoid authentication errors.
  • Use print() to display the generated text from the LLM call.
  • Handle errors and network delays when calling the LLM.