FastAPI lets you quickly serve your machine learning model as a web API so others can use it over the internet.
FastAPI for model serving in Python ML
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class InputData(BaseModel):
    feature1: float
    feature2: float

@app.post('/predict')
async def predict(data: InputData):
    # Use your model here
    prediction = data.feature1 + data.feature2
    return {'prediction': prediction}
Use @app.post to create a POST endpoint that accepts input data in the request body.
Define the input schema with a Pydantic BaseModel so incoming data types are validated automatically.
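As a minimal sketch of what that automatic validation does, here is the InputData model used on its own, outside any endpoint (assuming pydantic is installed). A value that cannot be coerced to float is rejected before your endpoint code ever runs:

```python
from pydantic import BaseModel, ValidationError

class InputData(BaseModel):
    feature1: float
    feature2: float

# Valid payload: numeric strings are coerced to float
ok = InputData(feature1="3.5", feature2=2.5)
print(ok.feature1)  # 3.5

# Invalid payload: raises ValidationError instead of reaching your code
try:
    InputData(feature1="not a number", feature2=2.5)
except ValidationError:
    print("validation failed")
```

In a running FastAPI app, this same rejection is returned to the client as a 422 response with details about which field failed.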
from fastapi import FastAPI

app = FastAPI()

@app.get('/')
async def root():
    return {'message': 'Hello World'}
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class InputData(BaseModel):
    age: int
    income: float

@app.post('/predict')
async def predict(data: InputData):
    score = data.age * 0.1 + data.income * 0.01
    return {'score': score}
This program creates a FastAPI app with one POST endpoint, '/predict'. It accepts two numbers, adds them, and returns the sum as the prediction.
Run this code, then send a POST request with a JSON body like {"feature1": 3.5, "feature2": 2.5} to receive {"prediction": 6.0}.
from fastapi import FastAPI
from pydantic import BaseModel
import uvicorn

app = FastAPI()

class InputData(BaseModel):
    feature1: float
    feature2: float

@app.post('/predict')
async def predict(data: InputData):
    # Simple model: sum of features
    prediction = data.feature1 + data.feature2
    return {'prediction': prediction}

if __name__ == '__main__':
    uvicorn.run(app, host='127.0.0.1', port=8000)
Use tools like curl or Postman to send POST requests to your FastAPI server.
FastAPI automatically creates interactive docs at /docs to test your API.
Keep your model loading at module level, outside the endpoint function, so the model is loaded once at startup instead of on every request.
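The idea can be sketched without FastAPI at all: load once at import time, then reuse the loaded object inside the handler. load_model below is a hypothetical stand-in for your real loading code (e.g. joblib.load), with a counter added just to show it runs only once:

```python
LOAD_COUNT = 0

def load_model():
    # Hypothetical stand-in for e.g. joblib.load('model.pkl')
    global LOAD_COUNT
    LOAD_COUNT += 1
    return lambda f1, f2: f1 + f2

# Loaded ONCE, when the module is imported -- not per request
model = load_model()

def predict(feature1: float, feature2: float) -> dict:
    # The endpoint body only *uses* the already-loaded model
    return {'prediction': model(feature1, feature2)}

print(predict(3.5, 2.5))  # {'prediction': 6.0}
print(predict(1.0, 1.0))  # {'prediction': 2.0}
print(LOAD_COUNT)         # 1
```

If the load_model() call were moved inside predict, every request would pay the full deserialization cost, which for large models can dominate the response time.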
FastAPI makes it easy to share your ML model as a web service.
Define input data clearly with Pydantic models for safety and clarity.
Use POST endpoints to send data and get predictions back in JSON format.