
Custom error response models in FastAPI - Performance & Optimization

Performance: Custom error response models
MEDIUM IMPACT
This affects the server response time and payload size, impacting how quickly error information is delivered and rendered in the client.
Returning error responses in an API
FastAPI
from fastapi import FastAPI
from fastapi.responses import JSONResponse
from pydantic import BaseModel

app = FastAPI()

class ErrorResponse(BaseModel):
    error_code: int
    message: str

@app.get("/items/{item_id}")
async def read_item(item_id: int):
    if item_id == 0:
        return JSONResponse(
            status_code=404,
            # model_dump() replaces the deprecated .dict() in Pydantic v2
            content=ErrorResponse(error_code=404, message="Item not found").model_dump(),
        )
    return {"item_id": item_id}
Defines a structured error model that serializes consistently, enabling clients to handle errors efficiently and predictably.
📈 Performance Gain: Saves client parsing time and improves UX; server serialization cost is slightly higher but negligible for typical use.
Returning error responses in an API
FastAPI
from fastapi import FastAPI, HTTPException

app = FastAPI()

@app.get("/items/{item_id}")
async def read_item(item_id: int):
    if item_id == 0:
        raise HTTPException(status_code=404, detail="Item not found")
    return {"item_id": item_id}
Using the default HTTPException with a plain detail string sends minimal error information but no structured data, so clients must parse unstructured text and may need extra processing.
📉 Performance Cost: Minimal payload size, but may cause extra client-side parsing; no significant server serialization cost.
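The extra client-side work is easy to see with the two body shapes side by side. A stdlib-only sketch (the `classify` helper is an assumption for illustration): a structured body dispatches on a machine-readable code, while the plain `{"detail": ...}` shape forces fragile string matching:

```python
import json

def classify(body_text: str) -> str:
    """Client-side error handling for the two response shapes above."""
    body = json.loads(body_text)
    if "error_code" in body:
        # Structured ErrorResponse shape: dispatch on a numeric code
        return {404: "not_found"}.get(body["error_code"], "unknown")
    # Plain HTTPException shape {"detail": "..."}: brittle string match
    return "not_found" if body.get("detail") == "Item not found" else "unknown"

print(classify('{"error_code": 404, "message": "Item not found"}'))  # not_found
print(classify('{"detail": "Item not found"}'))                      # not_found
```

Both bodies classify correctly here, but the string match breaks as soon as the server rewords the message, while the code-based branch does not.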
Performance Comparison
Pattern | DOM Operations | Reflows | Paint Cost | Verdict
Default HTTPException with string detail | N/A (API response only) | N/A | Minimal paint cost on client | [!] OK
Custom error response model with Pydantic | N/A (API response only) | N/A | Slightly higher paint cost due to larger payload | [OK] Good
Rendering Pipeline
When a custom error response model is used, the server serializes the model to JSON, which adds CPU time before sending the response. The client then parses this structured JSON, which can be faster than parsing unstructured text. This affects the time until the error message is displayed (LCP).
Server Serialization → Network Transfer → Client Parsing → Render
⚠️ Bottleneck: Server serialization and network transfer, due to increased payload size
Core Web Vital Affected
LCP
Optimization Tips
1. Keep custom error response models minimal to reduce serialization and payload size.
2. Use structured error models to improve client parsing and user experience.
3. Monitor network payload size and response time to track error response performance.
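The payload-size tip can be sanity-checked without DevTools. A stdlib-only sketch comparing the wire size of the two error body shapes from this article, serialized as compact JSON:

```python
import json

# Plain HTTPException body shape vs. the structured ErrorResponse shape
plain = {"detail": "Item not found"}
structured = {"error_code": 404, "message": "Item not found"}

# Compact separators approximate what the server puts on the wire
plain_bytes = len(json.dumps(plain, separators=(",", ":")).encode())
structured_bytes = len(json.dumps(structured, separators=(",", ":")).encode())

print(plain_bytes, structured_bytes)  # the structured payload is larger
```

The structured body costs a few extra bytes per error, which is the trade-off the comparison table above describes: slightly more to transfer in exchange for predictable client-side handling.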
Performance Quiz - 3 Questions
Test your performance knowledge
How does using a custom error response model affect the Largest Contentful Paint (LCP)?
A. It can slightly increase LCP due to larger payload and serialization time.
B. It drastically reduces LCP by skipping serialization.
C. It has no effect on LCP because errors are not rendered.
D. It causes layout shifts affecting CLS, not LCP.
DevTools: Network
How to check: Open DevTools, go to Network tab, trigger the error response, and inspect the response payload size and timing.
What to look for: Look for response size and time to first byte; smaller payloads and faster responses improve LCP.