MLOps · DevOps · ~20 mins

Batch prediction vs real-time serving in MLOps - Practice Questions

Challenge - 5 Problems
🧠 Conceptual · intermediate
Key difference between batch prediction and real-time serving

Which statement best describes the main difference between batch prediction and real-time serving in machine learning?

A. Batch prediction processes many data points at once periodically, while real-time serving predicts instantly for individual requests.
B. Real-time serving only works with small datasets; batch prediction only works with large datasets.
C. Batch prediction is always more accurate than real-time serving because it uses more data.
D. Batch prediction requires manual input for each prediction; real-time serving automates all predictions.
💡 Hint

Think about how often predictions are made and how many data points are handled at once.
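
For intuition, the two modes can be sketched as two cadences. This is a minimal illustration, assuming a generic callable model; the function names here are hypothetical, not part of any specific framework:

```python
def nightly_batch_job(model, pending_rows):
    # Batch: runs on a schedule (e.g., nightly) over every queued record.
    return {row_id: model(features) for row_id, features in pending_rows}

def handle_request(model, features):
    # Real-time: scores a single record the moment the request arrives.
    return model(features)
```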

Model Choice · intermediate
Choosing prediction method for fraud detection

You are building a fraud detection system that must flag suspicious transactions immediately as they happen. Which prediction method is best?

A. Batch prediction, running once a day on all transactions.
B. Real-time serving, predicting only after collecting 100 transactions.
C. Batch prediction, running once a week on all transactions.
D. Real-time serving, predicting on each transaction as it occurs.
💡 Hint

Consider how fast the system needs to respond to new data.
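
As a rough illustration of the per-transaction path (the threshold, model, and function name are hypothetical):

```python
FRAUD_THRESHOLD = 0.9  # hypothetical risk cutoff

def score_transaction(model, txn):
    # Score each transaction as it occurs and flag suspicious ones
    # immediately, rather than waiting for a periodic batch run.
    risk = model(txn)
    return "FLAG" if risk >= FRAUD_THRESHOLD else "OK"
```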

Metrics · advanced
Evaluating latency in batch vs real-time prediction

You measure the average latency (time to get prediction) for batch prediction and real-time serving. Which is true?

A. Batch prediction has higher latency per data point than real-time serving.
B. Both have the same latency per data point.
C. Real-time serving has higher latency per data point than batch prediction.
D. Latency cannot be measured for batch prediction.
💡 Hint

Think about how predictions are processed individually or in groups.
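
One way to make the per-point latency difference concrete. In this sketch, the `sleep` stands in for waiting until the next scheduled batch run; the model and interval are illustrative:

```python
import time

model = lambda x: x * 2  # stand-in model

# Real-time: a data point's latency is roughly the model call itself.
start = time.perf_counter()
model(4)
real_time_latency = time.perf_counter() - start

# Batch: a data point's effective latency also includes waiting for the
# next scheduled run before anything is scored.
BATCH_INTERVAL_S = 0.01  # stand-in for an hourly or nightly schedule
start = time.perf_counter()
time.sleep(BATCH_INTERVAL_S)       # wait for the scheduled run
[model(x) for x in [1, 2, 3]]      # then score everything queued
batch_latency_per_point = time.perf_counter() - start
```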

🔧 Debug · advanced
Identifying issue in real-time serving system

A real-time serving system is suddenly slow and sometimes fails to respond. Which is the most likely cause?

A. The model used is too simple and cannot handle the data volume.
B. The real-time serving system is overloaded with too many simultaneous requests.
C. The batch prediction job is running too frequently and blocking resources.
D. The data used for batch prediction is outdated.
💡 Hint

Consider what happens when many users request predictions at once.
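
A sketch of why overload degrades a serving endpoint, and one common mitigation, load shedding. The capacity limit, names, and responses here are illustrative, not a real serving framework:

```python
from collections import deque

MAX_IN_FLIGHT = 100  # hypothetical capacity of the serving process

in_flight = deque()

def accept_request(req):
    # When simultaneous requests exceed capacity, an unbounded backlog makes
    # every response slow and some requests time out; shedding excess load
    # keeps latency bounded for the requests that are accepted.
    if len(in_flight) >= MAX_IN_FLIGHT:
        return "503 Service Unavailable"
    in_flight.append(req)
    return "accepted"
```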

Predict Output · expert
Output of batch vs real-time prediction code snippet

Given the code below, what is the output?

Python
def batch_predict(model, data):
    return [model(x) for x in data]

def real_time_predict(model, x):
    return model(x)

model = lambda x: x * 2
batch_data = [1, 2, 3]

batch_result = batch_predict(model, batch_data)
real_time_result = real_time_predict(model, 4)

print(batch_result, real_time_result)
A. [2, 4, 6] 8
B. [1, 2, 3] 4
C. [2, 4, 6] [8]
D. Error: model is not callable
💡 Hint

Look at how the model function is applied to data in batch and real-time.