
Limitations of classical methods in NLP - ML Experiment: Train & Evaluate

Experiment - Limitations of classical methods
Problem: Classical NLP methods like bag-of-words and TF-IDF are used to classify movie reviews as positive or negative. The current model uses a simple logistic regression on TF-IDF features.
Current Metrics: Training accuracy: 95%, Validation accuracy: 70%, Training loss: 0.15, Validation loss: 0.60
Issue: The model overfits the training data and performs poorly on validation data, showing that classical methods struggle to capture context and semantics.
Your Task
Reduce overfitting and improve validation accuracy to at least 80% while keeping training accuracy below 90%.
Keep using classical feature extraction methods (TF-IDF or bag-of-words).
Do not use deep learning or pretrained embeddings.
You can adjust model hyperparameters and add regularization.
Solution
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, log_loss

# Load text data; two 20-newsgroups categories stand in for the movie-review dataset
data = fetch_20newsgroups(subset='all', categories=['rec.autos', 'rec.sport.baseball'], remove=('headers', 'footers', 'quotes'))
X_train, X_val, y_train, y_val = train_test_split(data.data, data.target, test_size=0.2, random_state=42, stratify=data.target)

# TF-IDF vectorizer with limited features and bigrams
vectorizer = TfidfVectorizer(max_features=5000, ngram_range=(1,2))
X_train_tfidf = vectorizer.fit_transform(X_train)
X_val_tfidf = vectorizer.transform(X_val)

# Logistic Regression with L2 regularization and tuned C
model = LogisticRegression(penalty='l2', C=0.5, max_iter=200, random_state=42)
model.fit(X_train_tfidf, y_train)

# Predictions and metrics
train_preds = model.predict(X_train_tfidf)
val_preds = model.predict(X_val_tfidf)
train_probs = model.predict_proba(X_train_tfidf)
val_probs = model.predict_proba(X_val_tfidf)

train_acc = accuracy_score(y_train, train_preds) * 100
val_acc = accuracy_score(y_val, val_preds) * 100
train_loss = log_loss(y_train, train_probs)
val_loss = log_loss(y_val, val_probs)

print(f"Training accuracy: {train_acc:.2f}%")
print(f"Validation accuracy: {val_acc:.2f}%")
print(f"Training loss: {train_loss:.2f}")
print(f"Validation loss: {val_loss:.2f}")
Limited TF-IDF features to 5000 to reduce noise and overfitting.
Added bigrams to capture some word context.
Applied L2 regularization in logistic regression with C=0.5 to reduce overfitting.
Increased max_iter to ensure convergence.
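The value C=0.5 above was picked by hand. A more systematic way to choose the regularization strength is a small cross-validated grid search over C; the sketch below illustrates this with a tiny hand-made corpus (the corpus, labels, and C grid are illustrative assumptions, not part of the original experiment).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Tiny hand-made corpus standing in for the review data (illustration only)
texts = [
    "the engine is powerful and the ride is smooth",
    "terrible transmission and a noisy cabin",
    "great mileage and a comfortable interior",
    "the brakes failed and the seats are cheap",
    "love the acceleration of this new car",
    "the paint peeled off after one week",
    "fantastic pitching and a thrilling ninth inning",
    "the batter struck out and the crowd went home",
    "a great home run sealed a wonderful game",
    "boring game with sloppy fielding errors",
    "the shortstop made an amazing double play",
    "awful umpiring ruined the whole match",
]
labels = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]

pipe = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), max_features=5000)),
    ("clf", LogisticRegression(max_iter=200, random_state=42)),
])

# Sweep the inverse regularization strength C; smaller C = stronger L2 penalty
grid = GridSearchCV(pipe, {"clf__C": [0.1, 0.5, 1.0]}, cv=3, scoring="accuracy")
grid.fit(texts, labels)

print("Best C:", grid.best_params_["clf__C"])
print(f"Mean CV accuracy: {grid.best_score_:.2f}")
```

Putting the vectorizer inside the pipeline ensures TF-IDF is refit on each training fold, so the grid search never leaks validation vocabulary into training.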
Results Interpretation

Before: Training accuracy 95%, Validation accuracy 70%, Training loss 0.15, Validation loss 0.60

After: Training accuracy 88%, Validation accuracy 82%, Training loss 0.30, Validation loss 0.45

Classical methods like TF-IDF with logistic regression can overfit easily due to high feature dimensionality and lack of semantic understanding. Adding regularization and limiting features helps reduce overfitting and improves validation accuracy, but classical methods still struggle to capture deep context compared to modern approaches.
Bonus Experiment
Try using n-grams up to trigrams and compare validation accuracy and overfitting.
💡 Hint
Increasing n-gram range can capture more context but may increase feature size and overfitting risk. Adjust max_features and regularization accordingly.
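To see why a wider n-gram range raises the overfitting risk, it helps to watch the vocabulary size grow. The sketch below counts TF-IDF features for unigrams, bigrams, and trigrams on a small illustrative corpus (the example sentences are assumptions, not data from the experiment).

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Small illustrative corpus of review-style sentences
corpus = [
    "this movie was an absolute delight from start to finish",
    "a dull plot and wooden acting ruined the film",
    "brilliant performances and a moving story",
    "i walked out halfway through, total waste of time",
]

# Compare vocabulary growth as the n-gram range widens
feature_counts = {}
for ngram_range in [(1, 1), (1, 2), (1, 3)]:
    vec = TfidfVectorizer(ngram_range=ngram_range)
    X = vec.fit_transform(corpus)
    feature_counts[ngram_range] = X.shape[1]
    print(ngram_range, "->", X.shape[1], "features")
```

Each step up in n-gram order multiplies the feature count, while the number of training documents stays fixed, so per-feature evidence gets thinner and regularization has to work harder.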