
Logistic regression for text in NLP - ML Experiment: Train & Evaluate

Experiment - Logistic regression for text
Problem: Classify movie reviews as positive or negative using logistic regression on text data.
Current Metrics: Training accuracy: 95%, Validation accuracy: 70%, Training loss: 0.15, Validation loss: 0.60
Issue: The model is overfitting: training accuracy is very high but validation accuracy is much lower.
Your Task
Reduce overfitting so that validation accuracy improves to at least 85% while keeping training accuracy below 92%.
Use logistic regression only.
You can change text preprocessing and model hyperparameters.
Do not use deep learning models.
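To see the failure mode concretely, here is a minimal sketch of an overfitting-prone baseline: a plain count vectorizer with an unrestricted vocabulary plus a large C (weak L2 penalty). The tiny corpus is a made-up stand-in for the review data, and C=100 is an illustrative choice, not a value from the experiment.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical toy corpus standing in for the movie reviews
train_texts = ["great fun movie", "loved the acting", "terrible boring plot",
               "awful waste of time", "wonderful and moving", "dull and tedious"]
train_labels = [1, 1, 0, 0, 1, 0]
val_texts = ["boring and awful", "great acting"]
val_labels = [0, 1]

# Unrestricted vocabulary: every word becomes a feature
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_texts)
X_val = vectorizer.transform(val_texts)

# Large C ~ almost no L2 penalty, so weights can grow to fit the training set
model = LogisticRegression(C=100.0, max_iter=200)
model.fit(X_train, train_labels)

train_acc = accuracy_score(train_labels, model.predict(X_train))
val_acc = accuracy_score(val_labels, model.predict(X_val))
print(f"train={train_acc:.2f} val={val_acc:.2f}")
```

On real data with thousands of reviews, this setup typically produces exactly the pattern described above: near-perfect training accuracy and a large drop on validation.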
Solution
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, log_loss

# Load dataset (using 20 newsgroups as example text data)
data = fetch_20newsgroups(subset='all', categories=['rec.autos', 'sci.med'], remove=('headers', 'footers', 'quotes'))
X = data.data
y = data.target

# Split data
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

# Convert text to TF-IDF features with limited vocabulary size
vectorizer = TfidfVectorizer(max_features=5000, stop_words='english')
X_train_tfidf = vectorizer.fit_transform(X_train)
X_val_tfidf = vectorizer.transform(X_val)

# Logistic Regression with L2 regularization (C controls regularization strength)
model = LogisticRegression(max_iter=200, C=0.5, solver='liblinear')
model.fit(X_train_tfidf, y_train)

# Predictions and metrics
train_preds = model.predict(X_train_tfidf)
val_preds = model.predict(X_val_tfidf)
train_probs = model.predict_proba(X_train_tfidf)
val_probs = model.predict_proba(X_val_tfidf)

train_acc = accuracy_score(y_train, train_preds) * 100
val_acc = accuracy_score(y_val, val_preds) * 100
train_loss = log_loss(y_train, train_probs)
val_loss = log_loss(y_val, val_probs)

print(f"Training accuracy: {train_acc:.2f}%, Validation accuracy: {val_acc:.2f}%")
print(f"Training loss: {train_loss:.3f}, Validation loss: {val_loss:.3f}")
What Changed

Replaced the simple count vectorizer with a TF-IDF vectorizer to better weight informative terms.
Limited the vocabulary to 5,000 features to reduce noise and overfitting.
Added L2 regularization by setting C=0.5 in logistic regression to penalize large weights.
Increased max_iter to 200 to ensure convergence.
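One practical way to choose the regularization strength is to sweep C and watch the train/validation gap. The sketch below does this on a toy corpus; the data and the C grid are illustrative assumptions, not values from the experiment, and in practice you would run the loop on the real TF-IDF split.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical toy corpus; substitute the real train/validation split
train_texts = ["great fun movie", "loved the acting", "terrible boring plot",
               "awful waste of time", "wonderful and moving", "dull and tedious"]
train_labels = [1, 1, 0, 0, 1, 0]
val_texts = ["boring and awful plot", "great fun acting"]
val_labels = [0, 1]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_texts)
X_val = vectorizer.transform(val_texts)

# Smaller C = stronger L2 penalty; watch how the train/val gap changes
for C in [0.1, 0.5, 1.0, 10.0]:
    model = LogisticRegression(C=C, solver='liblinear', max_iter=200)
    model.fit(X_train, train_labels)
    train_acc = accuracy_score(train_labels, model.predict(X_train))
    val_acc = accuracy_score(val_labels, model.predict(X_val))
    print(f"C={C:<5} train={train_acc:.2f} val={val_acc:.2f} "
          f"gap={train_acc - val_acc:+.2f}")
```

Pick the C with the best validation accuracy (or the smallest gap at acceptable accuracy); C=0.5 in the solution reflects that kind of trade-off.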
Results Interpretation

Before: Training accuracy: 95%, Validation accuracy: 70%, Training loss: 0.15, Validation loss: 0.60

After: Training accuracy: 90.5%, Validation accuracy: 86.3%, Training loss: 0.28, Validation loss: 0.35

Stronger regularization and a better feature representation reduce overfitting: training accuracy drops slightly while validation accuracy rises, so the model generalizes better and the gap between the two metrics narrows.
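The improvement is easiest to see as the train-validation accuracy gap, computed here from the numbers quoted above:

```python
# Accuracy gap (percentage points) before and after the changes,
# using the before/after metrics stated in the text
before_gap = 95.0 - 70.0   # unregularized baseline
after_gap = 90.5 - 86.3    # TF-IDF + limited vocabulary + C=0.5
print(f"gap before: {before_gap:.1f} pts, gap after: {after_gap:.1f} pts")
```

A gap shrinking from 25 points to about 4 points is the signature of reduced overfitting, even though training accuracy fell.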
Bonus Experiment
Try using n-grams (like bigrams) in the TF-IDF vectorizer to see if it improves validation accuracy further.
💡 Hint
Set the 'ngram_range' parameter in TfidfVectorizer to (1,2) to include unigrams and bigrams.