LangChain · How-To · Beginner · 4 min read

How to Use LangChain with FastAPI: Simple Integration Guide

To use LangChain with FastAPI, create a FastAPI app and define endpoints that call LangChain's chains asynchronously. Import the LangChain components, initialize your chain (such as an LLMChain), then use FastAPI routes to handle requests and return AI-generated responses.
📝

Syntax

This pattern shows how to set up a FastAPI app that uses LangChain's LLMChain to process input and return AI-generated text.

  • FastAPI(): Creates the web app.
  • LLMChain: LangChain's chain for running a prompt through a language model.
  • @app.post: Defines a POST API endpoint.
  • async def: Declares an async request handler.
  • await chain.arun(): Runs the chain asynchronously.
python
from fastapi import FastAPI
from pydantic import BaseModel
from langchain.llms import OpenAI  # legacy import path; newer releases: from langchain_openai import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

app = FastAPI()

class Query(BaseModel):
    question: str

# Define prompt template
prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer this question: {question}"
)

# Initialize LLM
llm = OpenAI(temperature=0)

# Create chain
chain = LLMChain(llm=llm, prompt=prompt)

@app.post("/ask")
async def ask_question(query: Query):
    answer = await chain.arun(query.question)
    return {"answer": answer}
💻

Example

This example shows a complete FastAPI app that uses LangChain's LLMChain with the OpenAI LLM to answer questions sent via POST requests to the /ask endpoint.

Run the app and send JSON like {"question": "What is LangChain?"} to get AI-generated answers.

python
from fastapi import FastAPI
from pydantic import BaseModel
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
import uvicorn

app = FastAPI()

class Query(BaseModel):
    question: str

prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer this question: {question}"
)

llm = OpenAI(temperature=0)
chain = LLMChain(llm=llm, prompt=prompt)

@app.post("/ask")
async def ask_question(query: Query):
    answer = await chain.arun(query.question)
    return {"answer": answer}

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)
Output
INFO:     Started server process [12345]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
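Once the server is running, the /ask endpoint can be exercised from any HTTP client. Here is a minimal sketch using only the standard library (the ask helper and its default URL are our own names, matching the app above):

```python
import json
import urllib.request

def ask(question: str, url: str = "http://127.0.0.1:8000/ask") -> dict:
    # Encode the JSON body expected by the /ask endpoint.
    payload = json.dumps({"question": question}).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    # POST the request and decode the JSON response.
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With the server running, ask("What is LangChain?") returns a dict like {"answer": "..."}.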
⚠️

Common Pitfalls

1. Forgetting to use async/await: LangChain's arun method is a coroutine, so your FastAPI route must be async and use await. Otherwise you get an unawaited coroutine object instead of the answer, and the endpoint fails when FastAPI tries to serialize it.

2. Not initializing the LLM properly: Make sure your OpenAI API key is set (typically via the OPENAI_API_KEY environment variable) before the app starts, or every request will fail with an authentication error.
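To catch pitfall 2 early, verify the key at startup rather than on the first request. A small sketch (check_api_key is a helper name of our own; the OpenAI LLM reads OPENAI_API_KEY itself):

```python
import os

def check_api_key() -> bool:
    # The OpenAI client reads OPENAI_API_KEY from the environment;
    # returning False here means every LLM call would fail.
    return bool(os.environ.get("OPENAI_API_KEY"))
```

Call it before creating the chain and raise a clear error if it returns False.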

3. Missing input validation: Use Pydantic models to validate incoming JSON requests to avoid errors.

python
## Wrong way (sync function, no await):
# @app.post("/ask")
# def ask_question(query: Query):
#     answer = chain.arun(query.question)  # Missing await: returns a coroutine
#     return {"answer": answer}

## Right way (async with await):
@app.post("/ask")
async def ask_question(query: Query):
    answer = await chain.arun(query.question)
    return {"answer": answer}
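Pitfall 3 is what the Query model handles automatically; rolling the same check by hand shows what Pydantic does for you (validate_query is an illustrative name, not part of either library):

```python
def validate_query(data: dict) -> str:
    # Same contract as the Query model: "question" must be a non-empty string.
    question = data.get("question")
    if not isinstance(question, str) or not question.strip():
        raise ValueError("'question' must be a non-empty string")
    return question
```

With the Pydantic model in place, FastAPI performs this check for you and returns a 422 response on invalid input.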
📊

Quick Reference

Remember these key points when using LangChain with FastAPI:

  • Use async def and await for LangChain calls.
  • Define input models with Pydantic for request validation.
  • Initialize LangChain chains before starting the FastAPI app.
  • Run FastAPI with uvicorn for async support.
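The third point above matters for latency: build the chain once at import time, not inside the route handler. A stand-in sketch (build_chain fakes an expensive LLMChain construction so the example runs without LangChain installed):

```python
import time

def build_chain():
    # Stand-in for LLMChain(llm=OpenAI(...), prompt=...); pretend it is costly.
    time.sleep(0.01)
    return lambda question: f"answer to: {question}"

# Good: constructed once when the module loads, shared by every request.
chain = build_chain()

def handle_request(question: str) -> dict:
    # Bad alternative: calling build_chain() here would pay the setup
    # cost on every single request.
    return {"answer": chain(question)}
```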
✅

Key Takeaways

Use async FastAPI routes with await when calling LangChain's async methods.
Initialize LangChain chains and prompts before handling requests.
Validate input data with Pydantic models for safer API endpoints.
Set your OpenAI API key in the OPENAI_API_KEY environment variable for LangChain to work.
Run your FastAPI app with uvicorn to support async operations.