How to Use LangChain with FastAPI: Simple Integration Guide
To use LangChain with FastAPI, create a FastAPI app and define endpoints that call LangChain's language model chains asynchronously. Import the LangChain components, initialize your chain (such as an LLMChain), then use FastAPI routes to handle requests and return AI-generated responses.

Syntax
This pattern shows how to set up a FastAPI app that uses LangChain's LLMChain to process input and return AI-generated text.

- `FastAPI()`: creates the web app.
- `LLMChain`: LangChain's chain for running language models.
- `@app.post`: defines an API endpoint.
- `async def`: declares an async function to handle requests.
- `await chain.arun()`: runs the chain asynchronously.
```python
from fastapi import FastAPI
from pydantic import BaseModel
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

app = FastAPI()

class Query(BaseModel):
    question: str

# Define the prompt template
prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer this question: {question}",
)

# Initialize the LLM (reads OPENAI_API_KEY from the environment)
llm = OpenAI(temperature=0)

# Create the chain
chain = LLMChain(llm=llm, prompt=prompt)

@app.post("/ask")
async def ask_question(query: Query):
    answer = await chain.arun(query.question)
    return {"answer": answer}
```
Example
This example shows a complete FastAPI app using LangChain's OpenAI-backed LLMChain to answer questions sent via POST requests to the /ask endpoint.

Run the app and send JSON like `{"question": "What is LangChain?"}` to get AI-generated answers.
```python
from fastapi import FastAPI
from pydantic import BaseModel
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
import uvicorn

app = FastAPI()

class Query(BaseModel):
    question: str

prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer this question: {question}",
)

llm = OpenAI(temperature=0)
chain = LLMChain(llm=llm, prompt=prompt)

@app.post("/ask")
async def ask_question(query: Query):
    answer = await chain.arun(query.question)
    return {"answer": answer}

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)
```
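The endpoint can be exercised from any HTTP client. Here is a minimal stdlib sketch that builds the expected POST request; the URL assumes the app above is running locally on port 8000:

```python
import json
import urllib.request

# JSON payload matching the Query model ({"question": ...})
payload = json.dumps({"question": "What is LangChain?"}).encode("utf-8")

# Build the POST request for the /ask endpoint
req = urllib.request.Request(
    "http://127.0.0.1:8000/ask",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# To actually send it (requires the server to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["answer"])
```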
Common Pitfalls
1. Forgetting to use async/await: LangChain's `arun` method is a coroutine, so your FastAPI route must be declared `async def` and must `await` the call. Calling `chain.arun()` without `await` returns an unawaited coroutine object instead of the answer.
2. Not initializing the LLM properly: make sure your OpenAI API key is set (for example in the `OPENAI_API_KEY` environment variable) before creating the `OpenAI` LLM, or requests to the endpoint will fail.
3. Missing input validation: use Pydantic models to validate incoming JSON requests, so malformed payloads are rejected with a clear 422 error instead of causing failures inside the chain.
Wrong way (sync function, no await):

```python
@app.post("/ask")
def ask_question(query: Query):
    answer = chain.arun(query.question)  # missing await
    return {"answer": answer}
```

Right way (async with await):

```python
@app.post("/ask")
async def ask_question(query: Query):
    answer = await chain.arun(query.question)
    return {"answer": answer}
```
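To see concretely what goes wrong without `await`, here is a self-contained sketch using a hypothetical `arun` coroutine as a stand-in for the real chain call:

```python
import asyncio

# Hypothetical stand-in for chain.arun(): an async function that
# simulates a non-blocking model call.
async def arun(question: str) -> str:
    await asyncio.sleep(0)  # yield control, like real async I/O
    return f"Answer to: {question}"

async def right_way() -> str:
    # Awaited inside an async function: you get the string result.
    return await arun("What is LangChain?")

def wrong_way():
    # No await: this returns a coroutine object, not the answer,
    # which FastAPI would then fail to serialize as a response.
    return arun("What is LangChain?")

result = asyncio.run(right_way())
print(type(result).__name__)   # str
coro = wrong_way()
print(type(coro).__name__)     # coroutine
coro.close()  # suppress the "coroutine was never awaited" warning
```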
Quick Reference
Remember these key points when using LangChain with FastAPI:

- Use `async def` and `await` for LangChain calls.
- Define input models with Pydantic for request validation.
- Initialize LangChain chains before starting the FastAPI app.
- Run FastAPI with `uvicorn` for async support.