Complete the code to perform batch inference on a list of texts using a model.
predictions = model.predict(texts)
The model expects a list of texts for batch inference, so we pass texts.
Complete the code to perform real-time inference on a single input text.
result = model.predict([text])
Real-time inference processes one input at a time; because the model still expects a list, we wrap the single text as [text].
Fix the error in the batch inference code by choosing the correct input format.
outputs = model.predict(texts)
Batch inference requires a list of texts, so texts is correct.
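The batch and real-time patterns above can be sketched with a stand-in model. DummyModel and its predict signature are assumptions for illustration, not a real library API; the only property that matters is that predict always takes a list of texts.

```python
# Stand-in model for illustration; DummyModel is hypothetical, not a
# real library class. Its predict method mimics the exercises' contract:
# it accepts a list of texts and returns one label per input.
class DummyModel:
    def predict(self, texts):
        if not isinstance(texts, list):
            raise TypeError("predict expects a list of texts")
        return ["positive" if "good" in t else "negative" for t in texts]

model = DummyModel()

# Batch inference: pass the whole list of texts at once.
texts = ["good movie", "bad plot"]
predictions = model.predict(texts)

# Real-time inference: wrap the single text in a list before predicting.
text = "good acting"
result = model.predict([text])
```

Passing the bare string (model.predict(text)) would raise a TypeError here, which is why the single input is wrapped as [text].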
Fill both blanks to create a dictionary comprehension that maps each text to its prediction in batch mode.
results = {text: model.predict([text]) for text in texts}
For each text in texts, we pass a list containing that single text, [text], to model.predict because the model expects a list input; passing text directly without wrapping causes errors.
Fill all three blanks to create a batch inference dictionary that filters texts longer than 5 words and maps them to predictions.
filtered_results = {text: model.predict([text]) for text in texts if len(text.split()) > 5}
We use text as the variable name consistently in all three blanks. The comprehension iterates over texts, keeps only texts longer than 5 words, and predicts on each one wrapped in a list.
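Both comprehension exercises can be run end to end with the same stand-in model. DummyModel below is a hypothetical placeholder, not a real library API; it only enforces the list-input contract the exercises describe.

```python
# Hypothetical stand-in model (not a real library API): predict takes a
# list of texts and returns one label per input.
class DummyModel:
    def predict(self, texts):
        if not isinstance(texts, list):
            raise TypeError("predict expects a list of texts")
        return ["positive" if "good" in t else "negative" for t in texts]

model = DummyModel()
texts = ["a very good film with strong pacing overall", "bad"]

# Map each text to its prediction; [text] wraps the single input.
results = {text: model.predict([text]) for text in texts}

# Filtered variant: keep only texts longer than 5 words.
filtered_results = {
    text: model.predict([text])
    for text in texts
    if len(text.split()) > 5
}
```

The 8-word review passes the length filter while "bad" is dropped, so filtered_results contains a single entry.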