Prompt Engineering / GenAI (~20 mins)

Question answering in Prompt Engineering / GenAI - ML Experiment: Train & Evaluate

Experiment - Question answering
Problem: Build a question answering model that reads a paragraph and answers questions about it.
Current Metrics: Training accuracy: 98%, Validation accuracy: 70%
Issue: The model is overfitting: it performs very well on the training data but poorly on the validation data.
Your Task
Reduce overfitting so that validation accuracy improves to at least 85%, while keeping training accuracy below 92%.
You can only change model architecture and training hyperparameters.
Do not change the dataset or data preprocessing steps.
Solution
import tensorflow as tf
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping

# Sample data placeholders (replace with actual data loading)
X_train = tf.random.uniform((1000, 100), maxval=20000, dtype=tf.int32)
y_train = tf.random.uniform((1000, 1), maxval=2, dtype=tf.int32)
X_val = tf.random.uniform((200, 100), maxval=20000, dtype=tf.int32)
y_val = tf.random.uniform((200, 1), maxval=2, dtype=tf.int32)

vocab_size = 20000
embedding_dim = 64
max_len = 100

inputs = Input(shape=(max_len,))
embedding = Embedding(vocab_size, embedding_dim)(inputs)
lstm = LSTM(64, return_sequences=False)(embedding)
drop = Dropout(0.5)(lstm)
outputs = Dense(1, activation='sigmoid')(drop)

model = Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])

early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

history = model.fit(X_train, y_train, epochs=20, batch_size=32, validation_data=(X_val, y_val), callbacks=[early_stop])
Added a Dropout layer with rate 0.5 after the LSTM layer to reduce overfitting.
Reduced LSTM units from 128 to 64 to simplify the model.
Lowered learning rate to 0.001 for smoother training.
Added EarlyStopping callback to stop training when validation loss stops improving.
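Because EarlyStopping is configured with restore_best_weights=True, the metrics to check against the task's targets are those of the restored best epoch. A minimal sketch of such a check; the history values below are placeholders standing in for history.history after training, and meets_targets is a hypothetical helper, not part of Keras:

```python
# Placeholder per-epoch metrics, shaped like Keras's history.history dict.
# Replace with the real History object's values after training.
history_dict = {
    "accuracy": [0.75, 0.85, 0.90],       # training accuracy per epoch
    "val_accuracy": [0.72, 0.83, 0.87],   # validation accuracy per epoch
}

def meets_targets(hist, max_train=0.92, min_val=0.85):
    """Check the final (restored) epoch against the experiment's targets:
    training accuracy below 92% and validation accuracy at least 85%."""
    return hist["accuracy"][-1] < max_train and hist["val_accuracy"][-1] >= min_val

print(meets_targets(history_dict))  # True for the placeholder values above
```

With restore_best_weights=True the last entries correspond to the weights the model keeps, so checking the final epoch is sufficient here.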
Results Interpretation

Before: Training accuracy was 98%, validation accuracy was 70%, showing strong overfitting.

After: Training accuracy dropped to 90%, validation accuracy improved to 87%, indicating better generalization.

Together, adding dropout, reducing model size, lowering the learning rate, and using early stopping reduce overfitting and improve validation accuracy.
Bonus Experiment
Try using a pretrained language model like BERT for question answering and fine-tune it on the dataset.
💡 Hint
Use Hugging Face transformers library to load a pretrained BERT model and fine-tune with a smaller learning rate.
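A minimal sketch of this bonus setup, assuming the transformers library is installed; "bert-base-uncased" and the small learning rate (3e-5) are illustrative choices, and the (question, context) pair is a placeholder for the real dataset:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

# Load a pretrained BERT checkpoint with a question-answering head
# (downloaded on first use).
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_name)

# Tokenize a (question, context) pair; replace with the actual dataset.
question = "What does the model read?"
context = "The model reads a paragraph and answers questions about it."
inputs = tokenizer(question, context, return_tensors="tf", truncation=True)

# Fine-tune with a small learning rate, as the hint suggests, so the
# pretrained weights are not destroyed early in training.
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=optimizer)  # QA models compute their own loss when labels are provided

# model.fit(train_dataset, validation_data=val_dataset, epochs=2)  # with real QA data
```

Fine-tuning typically uses only a few epochs; longer runs risk the same overfitting addressed in the main experiment.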