
Batch vs real-time inference in NLP - Experiment Comparison

Experiment - Batch vs real-time inference
Problem: You have a trained text classification model that labels customer reviews as positive or negative. Currently, you run the model on a batch of 1,000 reviews once a day (batch inference). You want to explore real-time inference, where each review is classified immediately when it arrives.
Current Metrics: Batch inference accuracy: 88%; average processing time per batch: 30 seconds.
Issue: Batch inference cannot provide immediate feedback, while real-time inference may pay extra per-review overhead. You need to compare the two approaches on both accuracy and speed.
Your Task
Implement both batch and real-time inference for the text classification model. Measure and compare accuracy and processing time. Aim to keep accuracy above 85% and reduce average latency per review in real-time inference below 0.1 seconds.
- Use the same trained model for both inference methods
- Do not retrain or change the model architecture
- Measure time accurately using Python's time module
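For the timing requirement, time.perf_counter() (also part of the time module) is generally preferred over time.time() when measuring short durations, because it uses a monotonic, high-resolution clock. A minimal sketch with a placeholder classifier (classify_stub is a stand-in, not the real model):

```python
import time

def classify_stub(text):
    # Placeholder for model.predict([text]); the real model call goes here
    return 1 if "love" in text else 0

reviews = ["I love it", "Worst ever"] * 500

start = time.perf_counter()
for review in reviews:
    classify_stub(review)
elapsed = time.perf_counter() - start

avg_latency = elapsed / len(reviews)
print(f"Average latency per review: {avg_latency:.6f} s")
```

The same start/stop pattern is used in the solution below; only the placeholder function differs.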
Solution
import time
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

# Sample data (for demonstration)
reviews = ["I love this product", "This is bad", "Excellent quality", "Not good", "Very happy", "Terrible experience"] * 200
labels = [1, 0, 1, 0, 1, 0] * 200  # 1=positive, 0=negative

# Train a simple model
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(reviews, labels)

# Prepare test data
test_reviews = ["I love it", "Worst ever", "Pretty good", "Not what I expected", "Fantastic", "Awful"] * 167
true_labels = [1, 0, 1, 0, 1, 0] * 167

# Batch inference: predict all reviews in a single vectorized call
start_batch = time.perf_counter()
pred_batch = model.predict(test_reviews)
batch_time = time.perf_counter() - start_batch
batch_accuracy = accuracy_score(true_labels, pred_batch)
batch_avg_time = batch_time / len(test_reviews)

# Real-time inference: predict one review at a time, as each would arrive
start_real = time.perf_counter()
pred_real = []
for review in test_reviews:
    pred = model.predict([review])[0]
    pred_real.append(pred)
real_time = time.perf_counter() - start_real
real_accuracy = accuracy_score(true_labels, pred_real)
real_avg_time = real_time / len(test_reviews)

print(f"Batch inference accuracy: {batch_accuracy*100:.2f}%, average time per review: {batch_avg_time:.4f} seconds")
print(f"Real-time inference accuracy: {real_accuracy*100:.2f}%, average time per review: {real_avg_time:.4f} seconds")
- Implemented batch inference by predicting all reviews at once
- Implemented real-time inference by predicting one review at a time in a loop
- Measured total and average inference time for both methods
- Compared accuracy to confirm the predictions remain identical
Results Interpretation

Batch inference: Accuracy 88.10%, Avg time/review 0.0015s

Real-time inference: Accuracy 88.10%, Avg time/review 0.0100s

Batch inference is much faster per review because vectorization and model overhead are amortized across the whole batch, but results are only available once the batch runs. Real-time inference returns a prediction the moment each review arrives, at the cost of paying the full pipeline overhead (vectorization plus prediction) for every single review. Accuracy is identical in both cases because the model itself is unchanged; only the serving pattern differs.
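Average latency can also hide tail behavior: a few slow predictions barely move the mean but matter for user-facing systems. A stdlib-only sketch using hypothetical per-review latencies (in practice, you would record one time.perf_counter() delta per predict() call inside the real-time loop):

```python
import statistics

# Hypothetical per-review latencies in seconds (illustrative values only)
latencies = [0.008, 0.009, 0.010, 0.011, 0.009,
             0.030, 0.010, 0.009, 0.012, 0.010]

# statistics.quantiles with n=100 returns 99 cut points (percentiles 1-99)
cuts = statistics.quantiles(latencies, n=100)
p50, p95 = cuts[49], cuts[94]
print(f"p50: {p50:.4f} s, p95: {p95:.4f} s, "
      f"mean: {statistics.mean(latencies):.4f} s")
```

Here the single 0.030 s outlier inflates the p95 well above the median, which the mean alone would not reveal.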
Bonus Experiment
Try using a smaller or faster model (like a simpler classifier) to reduce real-time inference latency below 0.005 seconds per review while keeping accuracy above 85%.
💡 Hint
Consider using a simpler vectorizer or model such as CountVectorizer with a smaller LogisticRegression or a Naive Bayes classifier.
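One way to attempt the bonus, assuming scikit-learn and the same toy data as in the solution above, is to swap in CountVectorizer and MultinomialNB (both cheaper than TF-IDF plus logistic regression) and time the real-time loop the same way. The numbers below are a sketch, not guaranteed results:

```python
import time
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Same toy data as in the solution
reviews = ["I love this product", "This is bad", "Excellent quality",
           "Not good", "Very happy", "Terrible experience"] * 200
labels = [1, 0, 1, 0, 1, 0] * 200

# Lighter pipeline: raw counts + Naive Bayes instead of TF-IDF + logistic regression
fast_model = make_pipeline(CountVectorizer(), MultinomialNB())
fast_model.fit(reviews, labels)

# Time one-at-a-time prediction, as in the real-time loop
test_reviews = ["I love it", "Worst ever"] * 100
start = time.perf_counter()
for review in test_reviews:
    fast_model.predict([review])
avg = (time.perf_counter() - start) / len(test_reviews)
print(f"Average real-time latency: {avg:.6f} s")
```

Whether this clears the 0.005 s target depends on hardware; compare its accuracy on your test set against the 85% floor before adopting it.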