Complete the code to perform batch prediction on a dataset using a trained model.
    predictions = model.[1](X_batch)

Common mistakes: fit, which would train the model rather than run inference; transform, which is for feature transformations.
Explanation: The predict method is used to generate predictions from the model on a batch of input data.
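A minimal runnable sketch of the batch-prediction pattern. The ThresholdModel class below is a hypothetical stand-in for a trained model; a real estimator (for example, a scikit-learn model) exposes the same predict(X) interface.

```python
# Hypothetical stand-in for a trained model. A real trained estimator
# would expose the same predict(X) method on a batch of feature rows.
class ThresholdModel:
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, X):
        # Label each row 1 if its first feature exceeds the threshold, else 0.
        return [1 if row[0] > self.threshold else 0 for row in X]

model = ThresholdModel(threshold=0.5)

# Batch prediction: pass the whole feature matrix in one call.
X_batch = [[0.2, 1.0], [0.7, 0.3], [0.9, 0.1]]
predictions = model.predict(X_batch)
print(predictions)  # [0, 1, 1]
```

A single predict call on the full batch is generally preferable to looping over rows, since real libraries vectorize the computation internally.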
Complete the code to serve a real-time prediction request using a deployed model.
    def serve_request(input_data):
        prediction = model.[1](input_data)
        return prediction

Common mistakes: fit or train, which would train the model rather than serve predictions; evaluate, which is for testing model performance.
Explanation: For real-time serving, the model uses predict to generate a prediction for the incoming single input.
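A runnable sketch of the real-time serving pattern, again using a hypothetical ThresholdModel stub in place of a deployed model. One practical detail worth noting: predict typically expects a batch, so a single request is wrapped in a one-element list and the first result is unwrapped.

```python
# Hypothetical stand-in for a deployed model with a batch predict(X) method.
class ThresholdModel:
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, X):
        return [1 if row[0] > self.threshold else 0 for row in X]

model = ThresholdModel(threshold=0.5)

def serve_request(input_data):
    # Wrap the single input in a list because predict expects a batch,
    # then unwrap the single prediction from the returned list.
    prediction = model.predict([input_data])[0]
    return prediction

print(serve_request([0.8, 0.1]))  # 1
print(serve_request([0.1, 0.9]))  # 0
```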
Fix the error in the batch prediction code by completing the blank.
    batch_predictions = model.predict([1])

Common mistakes: y_batch, which passes labels rather than features to predict; replacing predict with fit.
Explanation: The input to predict should be the batch features X_batch, not labels or other variables.
Fill both blanks to create a dictionary comprehension that maps input IDs to their batch predictions.
    id_to_prediction = {id: [1] for id, [2] in zip(ids, batch_data)}

Common mistake: model.predict([batch_data]), which passes a list containing the whole batch instead of a single example's features.
Explanation: The dictionary comprehension maps each id to the prediction from the model on the corresponding features.
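A runnable sketch of the id-to-prediction mapping, with a hypothetical ThresholdModel stub and assumed variable names (ids, batch_data, features). The loop variable is spelled id_ here to avoid shadowing Python's built-in id.

```python
# Hypothetical stand-in for a trained model with a batch predict(X) method.
class ThresholdModel:
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, X):
        return [1 if row[0] > self.threshold else 0 for row in X]

model = ThresholdModel(threshold=0.5)

ids = ["a", "b", "c"]
batch_data = [[0.2], [0.7], [0.9]]

# Map each id to the prediction for its corresponding feature row.
# Each row is wrapped in a list because predict expects a batch.
id_to_prediction = {
    id_: model.predict([features])[0]
    for id_, features in zip(ids, batch_data)
}
print(id_to_prediction)  # {'a': 0, 'b': 1, 'c': 1}
```

In practice it is usually more efficient to call model.predict(batch_data) once and zip the ids with the resulting list, rather than predicting one row at a time inside the comprehension.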
Fill all three blanks to complete the real-time serving function that preprocesses input, predicts, and returns the result.
    def serve(input_raw):
        processed = [1](input_raw)
        prediction = model.[2](processed)
        return [3]

Common mistake: transform instead of predict for the model output.
Explanation: The function preprocesses the raw input, predicts using the model, and returns the prediction.
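A runnable sketch of the full preprocess-predict-return serving flow. Both the preprocessing step (parsing a comma-separated string into floats) and the ThresholdModel stub are hypothetical; a real pipeline would substitute its own preprocessing and deployed model.

```python
# Hypothetical stand-in for a deployed model with a batch predict(X) method.
class ThresholdModel:
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, X):
        return [1 if row[0] > self.threshold else 0 for row in X]

model = ThresholdModel(threshold=0.5)

def preprocess(input_raw):
    # Hypothetical preprocessing: parse "0.9,0.1" into [0.9, 0.1].
    return [float(x) for x in input_raw.split(",")]

def serve(input_raw):
    processed = preprocess(input_raw)
    # Wrap in a list for the batch interface, then unwrap the result.
    prediction = model.predict([processed])[0]
    return prediction

print(serve("0.9,0.1"))  # 1
print(serve("0.2,0.8"))  # 0
```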