
Abstractive summarization in NLP - ML Experiment: Train & Evaluate

Experiment - Abstractive summarization
Problem: Create a model that reads long text and writes a short summary in its own words.
Current Metrics: Training loss: 0.15, Validation loss: 0.45, Training ROUGE-1 F1: 85%, Validation ROUGE-1 F1: 60%
Issue: The model is overfitting: it performs very well on training data but poorly on validation data.
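For context, ROUGE-1 F1 is the harmonic mean of unigram precision and recall between a generated summary and its reference. A minimal sketch of computing it with the open-source rouge-score package (the example sentences below are illustrative, not taken from the experiment's dataset):

# pip install rouge-score
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

reference = "the cat sat on the mat"           # ground-truth summary
prediction = "a cat was sitting on the mat"    # model-generated summary

score = scorer.score(reference, prediction)["rouge1"]
print(f"ROUGE-1 precision={score.precision:.2f} recall={score.recall:.2f} F1={score.fmeasure:.2f}")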
Your Task
Reduce overfitting so that validation ROUGE-1 F1 score improves to at least 75% while keeping training ROUGE-1 F1 below 80%.
Do not change the dataset or model architecture drastically.
Only adjust training hyperparameters and add regularization techniques.
Solution
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, Dense, Dropout, Embedding
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping

# Sample data placeholders (replace with actual data loading)
X_train, y_train = ...  # tokenized input sequences and target summaries
X_val, y_val = ...

# Model parameters
vocab_size = 5000
embedding_dim = 128
latent_dim = 256

# Encoder
encoder_inputs = Input(shape=(None,))
encoder_embedding = Embedding(vocab_size, embedding_dim)(encoder_inputs)
encoder_lstm = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder_lstm(encoder_embedding)
encoder_states = [state_h, state_c]

# Decoder
decoder_inputs = Input(shape=(None,))
decoder_embedding = Embedding(vocab_size, embedding_dim)(decoder_inputs)
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_embedding, initial_state=encoder_states)
decoder_dropout = Dropout(0.5)(decoder_outputs)  # Added dropout

decoder_dense = Dense(vocab_size, activation='softmax')
decoder_outputs = decoder_dense(decoder_dropout)

# Define the model
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Compile with lower learning rate
optimizer = tf.keras.optimizers.Adam(learning_rate=0.0005)
model.compile(optimizer=optimizer, loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Early stopping callback
early_stopping = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

# Train the model with teacher forcing: the decoder input is the target summary
# shifted right, and the prediction target is the summary shifted left by one token
model.fit(
    [X_train, y_train[:, :-1]],
    y_train[:, 1:, None],
    batch_size=32,  # smaller batch size
    epochs=30,
    validation_data=([X_val, y_val[:, :-1]], y_val[:, 1:, None]),
    callbacks=[early_stopping]
)
Added a Dropout layer with rate 0.5 after the decoder LSTM to reduce overfitting.
Reduced the learning rate from the Adam default of 0.001 to 0.0005 for smoother, more stable training.
Added an EarlyStopping callback that halts training once validation loss stops improving and restores the best weights.
Reduced the batch size from 64 to 32 to introduce more gradient noise, which acts as a mild regularizer.
Results Interpretation

Before: Training ROUGE-1 F1: 85%, Validation ROUGE-1 F1: 60% (overfitting)

After: Training ROUGE-1 F1: 78%, Validation ROUGE-1 F1: 77% (better generalization)

Adding dropout, lowering the learning rate, using early stopping, and reducing the batch size together reduce overfitting and improve validation performance in abstractive summarization models.
Bonus Experiment
Try using a pre-trained transformer model like T5 or BART for abstractive summarization and fine-tune it on the same dataset.
💡 Hint
Use the Hugging Face Transformers library and experiment with smaller learning rates and fewer epochs during fine-tuning.
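A minimal fine-tuning sketch using the TensorFlow classes from Hugging Face Transformers. The checkpoint name "t5-small", the placeholder text lists, and all hyperparameters below are illustrative assumptions, not a prescribed recipe:

import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Placeholder data: replace with the experiment's actual article/summary pairs
train_texts = ["long article text ..."]
train_summaries = ["short summary ..."]
val_texts, val_summaries = train_texts, train_summaries

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small")

def encode(texts, summaries):
    # T5 expects a task prefix; inputs and targets are truncated/padded to fixed lengths
    enc = tokenizer(["summarize: " + t for t in texts], max_length=512,
                    truncation=True, padding="max_length", return_tensors="np")
    labels = tokenizer(summaries, max_length=64, truncation=True,
                       padding="max_length", return_tensors="np").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # mask padding out of the loss
    return {"input_ids": enc.input_ids,
            "attention_mask": enc.attention_mask,
            "labels": labels}

train_ds = tf.data.Dataset.from_tensor_slices(encode(train_texts, train_summaries)).batch(8)
val_ds = tf.data.Dataset.from_tensor_slices(encode(val_texts, val_summaries)).batch(8)

# Small learning rate and few epochs, per the hint above; no loss argument is passed
# because the model computes its own sequence-to-sequence loss from the "labels" field
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5))
model.fit(train_ds, validation_data=val_ds, epochs=3)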