Prompt Engineering / GenAI · ~20 mins

API-based deployment in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️
API Deployment Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual
intermediate
2:00 remaining
Understanding API-based deployment benefits

Which of the following is the primary advantage of deploying a machine learning model via an API?

A. It eliminates the need for any server or cloud infrastructure.
B. It automatically improves the model's accuracy over time without retraining.
C. It allows real-time access to the model from different applications without sharing the model code.
D. It stores the entire training dataset within the API for faster predictions.
Attempts:
2 left
💡 Hint

Think about how APIs let different programs talk to each other.

Predict Output
intermediate
2:00 remaining
Output of a simple API call to a deployed model

Given the following Python code snippet calling a deployed ML model API, what is the printed output?

import requests
response = requests.post('https://api.example.com/predict', json={'input': [1, 2, 3]})
print(response.json())
A. {"prediction": [0.1, 0.5, 0.4]}
B. {'error': 'Invalid input format'}
C. {'prediction': [1, 0, 1]}
D. requests.exceptions.ConnectionError
Attempts:
2 left
💡 Hint

Assume the API is working correctly and returns prediction probabilities.

Hyperparameter
advanced
2:00 remaining
Choosing timeout settings for API deployment

When deploying a machine learning model via an API, which timeout setting is most important to ensure good user experience without overloading the server?

A. Set timeout to zero to prioritize requests in the order they arrive.
B. Set a very high timeout so the server never cancels requests, even if slow.
C. Disable timeout settings to allow unlimited request processing time.
D. Set a low timeout to quickly reject requests that take too long, preventing server overload.
Attempts:
2 left
💡 Hint

Think about balancing responsiveness and server health.
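In the requests library used in the earlier snippet, this balance is expressed through the `timeout` argument. A minimal sketch (the endpoint URL is hypothetical, reused from the question above):

```python
import requests

# `timeout` (in seconds) caps how long the client waits for the server.
# A low value fails fast instead of hanging on a slow or overloaded server.
try:
    response = requests.post(
        'https://api.example.com/predict',  # hypothetical endpoint
        json={'input': [1, 2, 3]},
        timeout=2,  # give up if the server takes longer than 2 s
    )
    print(response.json())
except requests.exceptions.Timeout:
    print('Request exceeded 2 s - retry or surface an error to the user')
except requests.exceptions.RequestException as exc:
    print(f'Request failed: {exc}')
```

Servers typically enforce their own timeout as well, so one slow request cannot tie up a worker indefinitely.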

Metrics
advanced
2:00 remaining
Evaluating API latency metrics

You deployed a model via API and collected these latency times (in milliseconds) for 5 requests: [120, 150, 130, 200, 170]. What is the average latency?

A. 154 ms
B. 170 ms
C. 150 ms
D. 1540 ms
Attempts:
2 left
💡 Hint

Calculate the sum of all latencies and divide by the number of requests.
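The arithmetic in the hint can be checked in a couple of lines of Python, using the latency samples from the question:

```python
latencies = [120, 150, 130, 200, 170]  # latency samples in ms, from the question
average = sum(latencies) / len(latencies)  # 770 / 5
print(f'{average} ms')  # 154.0 ms
```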

🔧 Debug
expert
3:00 remaining
Identifying cause of API deployment failure

After deploying your ML model as an API, clients report a 500 Internal Server Error when sending valid requests. Which of the following is the most likely cause?

A. The client is sending requests with wrong input data types.
B. The model file path is incorrect or missing on the server.
C. The API endpoint URL is misspelled in the client code.
D. The server has no internet connection.
Attempts:
2 left
💡 Hint

500 errors usually mean server-side problems, not client mistakes.
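As a rough illustration (not any particular framework's behavior; the file path and handler name are hypothetical), a server-side handler sketch shows why a missing model file produces a 500 for a perfectly valid request, while client-side mistakes map to 4xx codes instead:

```python
import os

MODEL_PATH = 'models/classifier.pkl'  # hypothetical server-side path

def handle_predict(payload, model_path=MODEL_PATH):
    """Return (status_code, body) the way a web framework handler would."""
    if not isinstance(payload.get('input'), list):
        # Client-side mistakes (bad input types) surface as 4xx errors.
        return 400, {'error': 'Invalid input format'}
    if not os.path.exists(model_path):
        # The request was valid, but a server-side problem (missing or
        # mis-pathed model file) surfaces to every client as a 500 error.
        return 500, {'error': 'Internal Server Error'}
    return 200, {'prediction': [0.1, 0.5, 0.4]}  # placeholder prediction

print(handle_predict({'input': [1, 2, 3]}, model_path='/missing/model.pkl'))
```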