How to Measure Request Time in FastAPI: Simple Middleware Example
To measure request time in FastAPI, create a middleware that records the start time before processing the request and calculates the elapsed time once the response is ready. Use Python's time.perf_counter() for precise timing inside the middleware function.

Syntax

Use FastAPI's @app.middleware("http") decorator to define a middleware function. Inside it, capture the start time, await call_next(request) to process the request, then calculate the elapsed time after the response is ready. This pattern lets you measure how long each HTTP request takes.
```python
from fastapi import FastAPI, Request
import time

app = FastAPI()

@app.middleware("http")
async def measure_request_time(request: Request, call_next):
    start_time = time.perf_counter()
    response = await call_next(request)
    process_time = time.perf_counter() - start_time
    response.headers["X-Process-Time"] = str(process_time)
    return response
```
Example
This example shows a complete FastAPI app that measures request time for each call and adds it as a custom header X-Process-Time in the response.
You can test it by running the app and sending requests; the response headers will show the time taken.
```python
from fastapi import FastAPI, Request
import time

app = FastAPI()

@app.middleware("http")
async def measure_request_time(request: Request, call_next):
    start_time = time.perf_counter()
    response = await call_next(request)
    process_time = time.perf_counter() - start_time
    response.headers["X-Process-Time"] = str(process_time)
    return response

@app.get("/")
async def root():
    return {"message": "Hello, FastAPI!"}
```
Output
```
HTTP/1.1 200 OK
content-length: 29
content-type: application/json
x-process-time: 0.000123456

{"message":"Hello, FastAPI!"}
```
Common Pitfalls
- Using time.time() instead of time.perf_counter() gives less reliable timing: time.time() follows the wall clock, which the OS can adjust, while time.perf_counter() is monotonic and high-resolution.
- Not awaiting call_next(request) breaks the async flow: call_next returns a coroutine, not a Response, so accessing response.headers raises an AttributeError.
- For heavy or blocking operations, the measured time includes all processing, so prefer async-friendly code to keep the event loop responsive.
- Adding timing info only in server logs misses the chance to expose it to clients via response headers.
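The first pitfall can be demonstrated standalone: time.perf_counter() is guaranteed monotonic, so elapsed-time deltas never go negative, whereas time.time() deltas can jump if the system clock is adjusted. A minimal sketch:

```python
import time

# perf_counter() is monotonic: successive reads never decrease,
# so (later - earlier) is always >= 0.
a = time.perf_counter()
b = time.perf_counter()
assert b >= a

# Timing a known delay: sleep(0.01) sleeps at least ~0.01s,
# so the measured elapsed time should be close to that.
start = time.perf_counter()
time.sleep(0.01)
elapsed = time.perf_counter() - start
print(f"slept for ~{elapsed:.4f}s")
```

time.time() would usually give a similar number here, but it offers no monotonicity guarantee, which is why it is the wrong tool for measuring durations.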
```python
from fastapi import FastAPI, Request
import time

app = FastAPI()

# Wrong: missing await
@app.middleware("http")
async def wrong_middleware(request: Request, call_next):
    start = time.perf_counter()
    response = call_next(request)  # Missing await: this is a coroutine, not a Response
    duration = time.perf_counter() - start
    response.headers["X-Time"] = str(duration)  # AttributeError at runtime
    return response

# Correct version
@app.middleware("http")
async def correct_middleware(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    duration = time.perf_counter() - start
    response.headers["X-Time"] = str(duration)
    return response
```
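For the blocking-operations pitfall, one async-friendly option (assuming Python 3.9+) is asyncio.to_thread, which moves a blocking call into a worker thread so the event loop stays free to serve other requests. The blocking_work function below is a hypothetical stand-in; in a real FastAPI handler you would await asyncio.to_thread(...) the same way:

```python
import asyncio
import time

def blocking_work(delay: float) -> str:
    time.sleep(delay)  # stand-in for blocking CPU- or IO-bound code
    return "done"

async def main() -> str:
    # await asyncio.to_thread(...) runs the blocking call in a worker
    # thread; the event loop remains responsive while it runs.
    start = time.perf_counter()
    result = await asyncio.to_thread(blocking_work, 0.05)
    print(f"{result} in {time.perf_counter() - start:.3f}s")
    return result

outcome = asyncio.run(main())
```

The middleware still measures the full duration, including the offloaded work, but concurrent requests are no longer stalled behind it.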
Quick Reference
- Middleware decorator: @app.middleware("http")
- Start time: start_time = time.perf_counter()
- Process request: response = await call_next(request)
- Calculate duration: duration = time.perf_counter() - start_time
- Add header: response.headers["X-Process-Time"] = str(duration)
Key Takeaways
- Use FastAPI middleware with @app.middleware("http") to measure request time.
- Use time.perf_counter() for precise timing before and after request processing.
- Always await call_next(request) to properly handle async requests.
- Add the measured time to response headers to expose it to clients.
- Avoid blocking code inside request handlers to keep the service responsive; the middleware measures total processing time, including any blocking work.