ML Python · ~20 mins

FastAPI for model serving in ML Python - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️
FastAPI Model Serving Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
Predict Output
intermediate
What is the output of this FastAPI endpoint code?

Consider this FastAPI code snippet that serves a simple model prediction:

from fastapi import FastAPI
app = FastAPI()

@app.get('/predict')
async def predict(x: int):
    return {'prediction': x * 2}

What will be the JSON response when a client sends GET /predict?x=3?

A) {"prediction": 6}
B) {"prediction": "6"}
C) {"prediction": null}
D) {"prediction": 3}
💡 Hint

Remember that the endpoint multiplies the input x by 2 and returns it as an integer in JSON.
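The behavior can be checked without a running server. In the sketch below, the `int(raw)` call is a stand-in for FastAPI's own query-parameter validation, which converts the string `"3"` to an integer because of the `x: int` annotation:

```python
# FastAPI sees the `x: int` annotation and coerces the query string "3"
# to the integer 3 before calling the handler; the handler then doubles it.
def predict(x: int):
    return {'prediction': x * 2}

raw = "3"                    # query parameters always arrive as strings
result = predict(int(raw))   # stands in for FastAPI's type coercion
print(result)                # {'prediction': 6}
```

Because the handler returns a Python `int`, the JSON value is an unquoted number.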

Model Choice
intermediate
Which model loading method is best for FastAPI startup?

You want to load a machine learning model once when the FastAPI app starts, so it can be reused for all requests efficiently.

Which method below correctly loads the model at startup?

A) Load the model inside the endpoint function so it reloads on every request.
B) Load the model globally outside any function, so it loads once when the app starts.
C) Load the model inside a dependency function with Depends that runs per request.
D) Load the model inside a background task that runs after each request.
💡 Hint

Think about how to avoid reloading the model multiple times to save time.
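To see why loading once matters, here is a minimal sketch of the load-once idea using `functools.lru_cache` as a stand-in for a module-level global; the `get_model` name and the placeholder load are hypothetical:

```python
from functools import lru_cache

load_count = 0

@lru_cache(maxsize=1)
def get_model():
    # In a real app this body would be something slow, e.g.
    # pickle.load(open('model.pkl', 'rb')) — so it must run only once.
    global load_count
    load_count += 1
    return object()  # placeholder for the loaded model

first = get_model()
second = get_model()
assert first is second and load_count == 1  # loaded exactly once
```

Every request that calls `get_model()` gets the same already-loaded object, instead of paying the load cost per request.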

Hyperparameter
advanced
Which FastAPI setting improves concurrency for model serving?

You want to serve a machine learning model with FastAPI and handle many requests at the same time efficiently.

Which Uvicorn server option helps improve concurrency?

A) Use --workers 4 to run multiple worker processes.
B) Use --log-level critical to reduce logging overhead.
C) Use --reload to automatically reload on code changes.
D) Use --workers 1 to keep a single process for simplicity.
💡 Hint

More workers mean more processes to handle requests concurrently.
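As a concrete example (the module path `main:app` is an assumption; substitute your own app's import path), launching Uvicorn with multiple worker processes looks like:

```shell
# Spawn 4 independent worker processes, each with its own event loop,
# so heavy inference in one request does not block all the others.
uvicorn main:app --workers 4
```

Note that each worker loads its own copy of the model, so memory use scales with the worker count.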

🔧 Debug
advanced
Why does this FastAPI model serving code raise an error?

Examine this FastAPI code snippet:

from fastapi import FastAPI
import pickle

app = FastAPI()

model = pickle.load(open('model.pkl', 'rb'))

@app.post('/predict')
async def predict(data: dict):
    features = data['features']
    prediction = model.predict([features])
    return {'prediction': prediction}

When sending a POST request with JSON {"features": [1, 2, 3]}, it raises a TypeError. Why?

A) FastAPI cannot accept POST requests with JSON bodies.
B) The 'features' key is missing in the input JSON.
C) pickle.load cannot load the model outside a function.
D) model.predict returns a numpy array, which is not JSON serializable.
💡 Hint

Think about what type model.predict returns and how FastAPI converts it to JSON.
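The failure mode can be reproduced with the standard library alone. `FakeArray` below is a hypothetical stand-in for the numpy array that `model.predict` returns, and converting with `.tolist()` before returning is the usual fix:

```python
import json

class FakeArray:
    """Minimal stand-in for numpy.ndarray (hypothetical)."""
    def __init__(self, values):
        self.values = values
    def tolist(self):
        return list(self.values)

prediction = FakeArray([6])

try:
    json.dumps({'prediction': prediction})   # an array-like object...
except TypeError:
    pass                                     # ...is not JSON serializable

# Converting to a plain Python list first avoids the TypeError:
payload = json.dumps({'prediction': prediction.tolist()})
print(payload)  # {"prediction": [6]}
```

In the endpoint, that means returning `{'prediction': prediction.tolist()}` instead of the raw array.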

🧠 Conceptual
expert
What is the main advantage of using FastAPI for model serving over Flask?

Choose the best reason why FastAPI is often preferred for serving machine learning models compared to Flask.

A) FastAPI requires less code to write HTML templates for web pages.
B) Flask does not support JSON responses by default.
C) FastAPI automatically generates interactive API docs and supports async code for better performance.
D) Flask cannot be deployed on cloud platforms.
💡 Hint

Think about developer experience and performance features.
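The performance side of the async advantage can be illustrated with the standard library alone; the 0.1-second sleeps below are hypothetical stand-ins for I/O-bound work such as awaiting a database or a remote model service:

```python
import asyncio
import time

async def fake_io(delay):
    await asyncio.sleep(delay)  # stands in for awaiting I/O
    return delay

async def main():
    start = time.perf_counter()
    # Three waits run concurrently on a single event loop...
    results = await asyncio.gather(fake_io(0.1), fake_io(0.1), fake_io(0.1))
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
# ...so the total time is close to 0.1 s rather than 0.3 s.
```

This is the same mechanism that lets one async FastAPI worker keep serving other requests while a slow I/O call is in flight.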