Langchain · How-To · Beginner · 3 min read

How to Use Ollama with Langchain: Simple Integration Guide

To use Ollama with Langchain, import the Ollama class from langchain.llms and create an instance with the name of a model you have pulled locally. Then call the instance directly with your prompt string to get the model's response.
📝

Syntax

The basic syntax to use Ollama with Langchain involves importing the Ollama class, initializing it with the model name, and then calling it with a prompt string.

  • Ollama(model='model_name'): Creates an Ollama language model instance using the specified model.
  • llm(prompt): Sends the prompt to the model and returns the generated text.
python
from langchain.llms import Ollama

llm = Ollama(model="llama2")
response = llm("Hello, how are you?")
print(response)
💻

Example

This example shows how to create an Ollama instance with the "llama2" model and generate a response for a simple greeting prompt.

python
from langchain.llms import Ollama

# Initialize Ollama with the llama2 model
llm = Ollama(model="llama2")

# Generate a response from the model
response = llm("Hello, how are you?")

print(response)
Output (exact wording will vary between runs and models)
I'm doing great, thank you! How can I assist you today?
⚠️

Common Pitfalls

Common mistakes when using Ollama with Langchain include:

  • Not specifying the correct model name in the Ollama constructor.
  • Forgetting to install or run the Ollama local server or CLI, which is required for the model to respond.
  • Passing non-string types as prompts, which causes errors.
  • Expecting asynchronous behavior when Ollama calls are synchronous by default.

Always ensure the Ollama environment is set up and the model name matches a model you have already pulled (e.g. with ollama pull llama2).

python
from langchain.llms import Ollama

# Wrong: missing model name
# llm = Ollama()

# Right: specify model
llm = Ollama(model="llama2")

# Wrong: passing non-string prompt
# response = llm(12345)  # This will raise an error

# Right: pass string prompt
response = llm("Tell me a joke.")
print(response)
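The second pitfall above, a server that is not running, surfaces as a connection error at call time. One way to handle both bad prompts and a down server is a small defensive wrapper. This is a minimal sketch, not a Langchain API: the safe_generate helper and fake_llm stub are illustrative, and the exact exception raised by a real Ollama call depends on the underlying HTTP client, so the built-in ConnectionError stands in here.

```python
def safe_generate(llm, prompt):
    """Validate the prompt and surface a clear error if the server is down.

    `llm` is any callable mapping a string prompt to a string response,
    such as an Ollama instance. Illustrative helper, not part of Langchain.
    """
    if not isinstance(prompt, str):
        raise TypeError(f"Prompt must be a string, got {type(prompt).__name__}")
    try:
        return llm(prompt)
    except ConnectionError:
        # Real Langchain/Ollama calls may raise a client-specific exception;
        # ConnectionError is used here to keep the sketch self-contained.
        return "Error: could not reach the Ollama server. Is it running?"

# Stub standing in for Ollama(model="llama2") so the sketch runs anywhere
def fake_llm(prompt):
    return f"echo: {prompt}"

print(safe_generate(fake_llm, "Tell me a joke."))  # → echo: Tell me a joke.
```

Wrapping the call this way turns a crash into an actionable message, which is especially useful in notebooks and demos.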
📊

Quick Reference

Tips for using Ollama with Langchain:

  • Always specify the model parameter when creating the Ollama instance.
  • Ensure Ollama CLI or server is installed and running locally.
  • Use string prompts only.
  • Use llm(prompt) to get the model's text response.
  • Combine Ollama with Langchain chains for complex workflows.
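The last tip, combining the model with prompt templates into a chain, can be sketched in plain Python to show the idea behind it. The MiniChain class and fake_llm stub below are illustrative stand-ins, not Langchain APIs; a real workflow would use Langchain's own template and chain classes with an Ollama instance as the llm.

```python
class MiniChain:
    """Toy version of a prompt-template-plus-LLM chain."""

    def __init__(self, template, llm):
        self.template = template  # e.g. "Summarize in one line: {text}"
        self.llm = llm            # any callable: str -> str

    def run(self, **kwargs):
        prompt = self.template.format(**kwargs)  # fill the template slots
        return self.llm(prompt)                  # send the prompt to the model

def fake_llm(prompt):
    # Stand-in for Ollama(model="llama2"); echoes the prompt it received
    return f"[model saw] {prompt}"

chain = MiniChain("Summarize in one line: {text}", fake_llm)
print(chain.run(text="Langchain wraps local Ollama models."))
```

The value of the chain abstraction is that the template, the model, and the invocation are wired once and reused with different inputs.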
✅

Key Takeaways

  • Initialize Ollama with the correct model name before calling it.
  • Ensure the Ollama environment is installed and running locally.
  • Pass only string prompts to the Ollama instance.
  • Use the Ollama instance like a function to get model responses.
  • Combine Ollama with Langchain chains for advanced language workflows.