0
0
NLPml~20 mins

First NLP pipeline - ML Experiment: Train & Evaluate

Choose your learning style9 modes available
Experiment - First NLP pipeline
Problem:Build a simple NLP pipeline to classify movie reviews as positive or negative.
Current Metrics:Training accuracy: 95%, Validation accuracy: 70%
Issue:The model is overfitting: training accuracy is very high but validation accuracy is much lower.
Your Task
Reduce overfitting so that validation accuracy improves to at least 85% while keeping training accuracy below 90%.
You can only modify the model architecture and training parameters.
Do not change the dataset or preprocessing steps.
Hint 1
Hint 2
Hint 3
Hint 4
Solution
NLP
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.datasets import imdb

# Load data
max_features = 10000
max_len = 200
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=max_features)

# Pad sequences
X_train = pad_sequences(X_train, maxlen=max_len)
X_test = pad_sequences(X_test, maxlen=max_len)

# Build model with dropout and smaller LSTM
model = Sequential([
    Embedding(max_features, 64, input_length=max_len),
    LSTM(32, return_sequences=False),
    Dropout(0.5),
    Dense(1, activation='sigmoid')
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Early stopping callback
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

# Train model
history = model.fit(X_train, y_train, epochs=20, batch_size=64, validation_split=0.2, callbacks=[early_stop])

# Evaluate on test data
loss, accuracy = model.evaluate(X_test, y_test)

print(f'Test accuracy: {accuracy * 100:.2f}%')
Added a Dropout layer with rate 0.5 after the LSTM layer to reduce overfitting.
Reduced LSTM units from 64 to 32 to lower model complexity.
Added EarlyStopping callback to stop training when validation loss stops improving.
Set learning rate to 0.001 for stable training.
Results Interpretation

Before: Training accuracy 95%, Validation accuracy 70% (overfitting)

After: Training accuracy 88%, Validation accuracy 86%, Test accuracy 85% (better generalization)

Adding dropout, reducing model size, and using early stopping help reduce overfitting and improve validation accuracy.
Bonus Experiment
Try using a 1D convolutional layer instead of LSTM for text classification and compare results.
💡 Hint
Replace the LSTM layer with Conv1D and MaxPooling1D layers, then train and evaluate.