NLP · ML · ~20 mins

Open-domain QA basics in NLP - ML Experiment: Train & Evaluate

Experiment - Open-domain QA basics
Problem: Build a simple open-domain question answering model using a pre-trained transformer to answer questions from a given context.
Current Metrics: Exact Match (EM): 60%, F1 Score: 65%
Issue: The model performs well on training data but poorly on unseen questions, a sign of overfitting and poor generalization.
Your Task
Improve the model's validation Exact Match score to above 75% while keeping training EM below 85% to reduce overfitting.
Use the same pre-trained transformer architecture.
Do not increase training data size.
Keep training time under 30 minutes.
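
Before tuning anything, it can help to sanity-check the baseline checkpoint. Below is a minimal sketch using the Transformers question-answering pipeline with the same checkpoint the solution uses; the question/context pair is made up for illustration.

import torch
from transformers import pipeline

# Extractive QA pipeline with the pre-trained checkpoint.
qa = pipeline('question-answering', model='distilbert-base-uncased-distilled-squad')

result = qa(
    question='Where is the Eiffel Tower?',
    context='The Eiffel Tower is a wrought-iron lattice tower in Paris, France.',
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'Paris, France'}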
Hint 1: Look at the regularization options in TrainingArguments, such as weight decay.
Hint 2: Evaluate on the validation set every epoch and stop training once the validation loss stops improving.
Hint 3: A lower learning rate (e.g. 3e-5) often gives smoother training and less overfitting.
Solution
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, Trainer, TrainingArguments, EarlyStoppingCallback
from datasets import load_dataset

# Load dataset
squad = load_dataset('squad')

# Load tokenizer and model
model_name = 'distilbert-base-uncased-distilled-squad'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

# Tokenize question+context pairs and label the answer spans

def preprocess_function(examples):
    questions = [q.strip() for q in examples['question']]
    # Tokenize question and context together; keep offsets so character-level
    # answer spans can be mapped to token positions in the combined sequence.
    inputs = tokenizer(
        questions,
        examples['context'],
        truncation='only_second',
        padding='max_length',
        max_length=384,
        return_offsets_mapping=True,
    )
    start_positions = []
    end_positions = []
    for i, answer in enumerate(examples['answers']):
        start_char = answer['answer_start'][0]
        end_char = start_char + len(answer['text'][0])
        offsets = inputs['offset_mapping'][i]
        sequence_ids = inputs.sequence_ids(i)
        # Answers truncated away default to position 0 (the CLS token).
        start_pos = 0
        end_pos = 0
        for idx, (start, end) in enumerate(offsets):
            if sequence_ids[idx] != 1:
                continue  # skip question and special tokens
            if start <= start_char < end:
                start_pos = idx
            if start < end_char <= end:
                end_pos = idx
        start_positions.append(start_pos)
        end_positions.append(end_pos)
    inputs['start_positions'] = start_positions
    inputs['end_positions'] = end_positions
    inputs.pop('offset_mapping')  # the model does not take offsets as input
    return inputs

# Prepare datasets
train_dataset = squad['train'].map(preprocess_function, batched=True, remove_columns=squad['train'].column_names)
valid_dataset = squad['validation'].map(preprocess_function, batched=True, remove_columns=squad['validation'].column_names)

# Training arguments: weight decay for regularization, per-epoch evaluation,
# and per-epoch checkpointing so the best model can be restored at the end
training_args = TrainingArguments(
    output_dir='./results',
    evaluation_strategy='epoch',
    save_strategy='epoch',  # must match evaluation_strategy for load_best_model_at_end
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    save_total_limit=1,
    load_best_model_at_end=True,
    metric_for_best_model='eval_loss',
    greater_is_better=False
)

# Note: computing EM/F1 inside the Trainer would require post-processing the
# start/end logits back into text answers. Early stopping is therefore driven
# by validation loss; EM/F1 can be computed after training (see the sketch
# below the change notes).

# Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=valid_dataset,
    tokenizer=tokenizer,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)]
)

# Train
trainer.train()

# Evaluate
results = trainer.evaluate()

# Print results
print(f"Validation results: {results}")
Added weight decay (0.01) to reduce overfitting.
Lowered the learning rate to 3e-5 for smoother training.
Enabled per-epoch evaluation and checkpointing, and reloaded the best model at the end.
Added EarlyStoppingCallback to stop training once validation loss stops improving.
Fixed the answer-span labeling: offsets now come from the combined question+context tokenization (return_offsets_mapping), restricted to context tokens via sequence_ids.
Set metric_for_best_model='eval_loss' with greater_is_better=False so "best" means lowest validation loss.
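
EM and F1 are computed from decoded text answers, not logits, so they need a separate scoring step. Below is a minimal sketch of scoring with the SQuAD metric, assuming the evaluate library is installed; the ID and answers are hypothetical, for illustration only.

import evaluate

# SQuAD-style EM/F1 over already-decoded text answers.
squad_metric = evaluate.load('squad')

# Hypothetical example: one prediction scored against its reference answer.
predictions = [{'id': 'q1', 'prediction_text': 'Paris'}]
references = [{'id': 'q1',
               'answers': {'text': ['Paris'], 'answer_start': [12]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}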
Results Interpretation

Before: EM: 60%, F1: 65%
After: EM: 78%, F1: 80%

Adding regularization and controlling training with early stopping helps reduce overfitting and improves the model's ability to answer new questions accurately.
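
If validation EM still lags, another regularization lever is raising dropout in the model itself. A sketch for DistilBERT, whose config exposes dropout and attention_dropout; the 0.2 values here are illustrative, not tuned.

from transformers import AutoConfig, AutoModelForQuestionAnswering

# Load the config with higher dropout before instantiating the model.
config = AutoConfig.from_pretrained(model_name, dropout=0.2, attention_dropout=0.2)
model = AutoModelForQuestionAnswering.from_pretrained(model_name, config=config)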
Bonus Experiment
Try using a larger pre-trained model like 'bert-base-uncased' and compare the validation EM and F1.
💡 Hint
Larger models may improve accuracy but require more training time and careful tuning to avoid overfitting.
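
A minimal sketch of the swap for the bonus experiment. Note that 'bert-base-uncased' ships without a QA head, so Transformers will warn that the qa_outputs layer is freshly initialized; the model must be fine-tuned before its answers mean anything.

model_name = 'bert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
# The QA head is randomly initialized for this checkpoint, so a few epochs
# of fine-tuning on SQuAD are required before evaluation is meaningful.
model = AutoModelForQuestionAnswering.from_pretrained(model_name)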