NLP · ~20 mins

Monitoring NLP models - ML Experiment: Train & Evaluate

Problem: You have a text classification NLP model deployed to classify customer reviews as positive or negative. The model trained well, but after deployment its performance may degrade over time as language and topics shift.
Current Metrics: Training accuracy: 92%, Validation accuracy: 88%, Accuracy of the deployed model on recent data: 75%
Issue: The model shows signs of performance degradation (accuracy dropped from 88% on validation to 75% on recent data), indicating possible data drift or concept drift.
Your Task
Implement a monitoring system that tracks the model's prediction accuracy on new incoming data and alerts when accuracy drops below 85%.
You cannot retrain the model in this task.
Use only Python and common NLP libraries (e.g., scikit-learn, pandas).
Simulate new incoming data with a small sample.
Solution
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Simulated training data
train_texts = ["I love this product", "This is bad", "Amazing quality", "Not good", "Excellent!", "Terrible experience"]
train_labels = [1, 0, 1, 0, 1, 0]

# Train a simple model
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_texts)
model = LogisticRegression(max_iter=1000)
model.fit(X_train, train_labels)

# Simulated new incoming data batch (recent data)
new_texts = ["I hate this", "Very good", "Worst ever", "I like it", "Not bad"]
new_labels = [0, 1, 0, 1, 1]  # True labels for monitoring

# Transform new data
X_new = vectorizer.transform(new_texts)

# Predict with deployed model
predictions = model.predict(X_new)

# Compute accuracy
accuracy = accuracy_score(new_labels, predictions) * 100

# Monitoring check
threshold = 85.0
print(f"Model accuracy on new data: {accuracy:.2f}%")
if accuracy < threshold:
    print(f"ALERT: Model accuracy dropped below {threshold}%!")
Added a monitoring check that computes accuracy on each batch of new incoming data.
Simulated a new data batch with true labels to evaluate the deployed model's performance.
Included an alert message that prints when accuracy falls below the 85% threshold.
Results Interpretation

Before monitoring: Model accuracy on recent data was unknown or not tracked, leading to unnoticed performance drop.

After monitoring: Model accuracy on new data is computed as 80%, which is below the 85% threshold, triggering an alert.

Monitoring deployed NLP models with simple accuracy checks on new data helps detect performance drops early, enabling timely interventions before serious issues occur.
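In practice, the accuracy check is worth packaging as a reusable helper so every incoming batch goes through the same logic. The sketch below wraps the solution's check in a hypothetical `check_accuracy` function, reusing the same toy training pipeline; names and threshold are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def check_accuracy(model, X_new, true_labels, threshold=85.0):
    """Compute batch accuracy (as a percentage) and flag a drop below threshold."""
    accuracy = accuracy_score(true_labels, model.predict(X_new)) * 100
    alert = accuracy < threshold
    return accuracy, alert

# Usage with the same toy pipeline as the solution
train_texts = ["I love this product", "This is bad", "Amazing quality", "Not good"]
vectorizer = CountVectorizer().fit(train_texts)
model = LogisticRegression(max_iter=1000).fit(
    vectorizer.transform(train_texts), [1, 0, 1, 0]
)

new_texts = ["Very good", "Worst ever"]
accuracy, alert = check_accuracy(model, vectorizer.transform(new_texts), [1, 0])
print(f"Accuracy: {accuracy:.2f}%, alert: {alert}")
```

Returning the alert flag (rather than only printing) lets the caller decide how to respond, e.g. log it, page someone, or queue the batch for relabeling.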
Bonus Experiment
Extend the monitoring system to track and plot accuracy over multiple batches of new data to visualize trends.
💡 Hint
Store accuracy values in a list and use matplotlib to plot accuracy over time.
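Following that hint, one possible sketch: accumulate per-batch accuracies in a list and plot them with matplotlib against the alert threshold. The batches below are hypothetical, and the `Agg` backend plus `savefig` are assumptions so the script also runs headless.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; assumption for headless runs
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Same toy model as the solution
train_texts = ["I love this product", "This is bad", "Amazing quality",
               "Not good", "Excellent!", "Terrible experience"]
train_labels = [1, 0, 1, 0, 1, 0]
vectorizer = CountVectorizer().fit(train_texts)
model = LogisticRegression(max_iter=1000).fit(
    vectorizer.transform(train_texts), train_labels
)

# Hypothetical sequence of incoming batches with their true labels
batches = [
    (["I love it", "This is bad"], [1, 0]),
    (["Amazing quality", "Not good"], [1, 0]),
    (["Excellent!", "Terrible experience"], [1, 0]),
]

# Track accuracy per batch
accuracies = []
for texts, labels in batches:
    preds = model.predict(vectorizer.transform(texts))
    accuracies.append(accuracy_score(labels, preds) * 100)

# Plot the trend with the alert threshold for reference
plt.plot(range(1, len(accuracies) + 1), accuracies, marker="o",
         label="batch accuracy")
plt.axhline(85.0, linestyle="--", color="red", label="alert threshold")
plt.xlabel("Batch")
plt.ylabel("Accuracy (%)")
plt.legend()
plt.savefig("accuracy_trend.png")
```

A sustained downward slope toward the threshold line is the visual signal that retraining should be scheduled before the alert fires.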