
Custom QA model fine-tuning in NLP

Introduction
Fine-tuning a custom Question Answering (QA) model teaches it to answer questions from your own data, making it more accurate and useful for your specific needs. Common scenarios:
- You want a model to answer questions about your company documents.
- You have a set of FAQs and want a model to answer them better.
- You want to build a chatbot that understands your product details.
- You need a model to find answers in your own research papers.
- You want to improve a general QA model to work well on your data.
Syntax
Python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, Trainer, TrainingArguments

# Load pre-trained model and tokenizer
model = AutoModelForQuestionAnswering.from_pretrained('bert-base-uncased')
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')

# Prepare your dataset (questions, contexts, answers)
# Tokenize and format data for training

# Define training arguments
training_args = TrainingArguments(
    output_dir='./results',
    num_train_epochs=3,
    per_device_train_batch_size=8,
    evaluation_strategy='epoch',
    save_strategy='epoch',
    logging_dir='./logs'
)

# Create Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset
)

# Fine-tune the model
trainer.train()
To fine-tune, you need a dataset of questions, contexts, and answer spans.
TrainingArguments controls training behavior, such as the number of epochs and the batch size.
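The expected record shape can be sketched with a single hypothetical example (field names follow the SQuAD convention used by the datasets library; the text and character offset below are made up):

```python
# A minimal sketch of one SQuAD-style training record.
record = {
    "question": "What does QA stand for?",
    "context": "QA stands for Question Answering in NLP.",
    "answers": {
        "text": ["Question Answering"],
        "answer_start": [14],  # character offset of the answer in the context
    },
}

# The answer span must slice cleanly out of the context at answer_start.
start = record["answers"]["answer_start"][0]
text = record["answers"]["text"][0]
print(record["context"][start:start + len(text)])  # → Question Answering
```

SQuAD stores answers as lists because a validation question can have several acceptable answers; training records typically have exactly one.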
Examples
This loads a smaller pre-trained QA model and its tokenizer.
Python
# Example: Load model and tokenizer
model = AutoModelForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad')
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased-distilled-squad')
This sets training to run for 2 epochs with a batch size of 16.
Python
# Example: Define training arguments
training_args = TrainingArguments(
    output_dir='./qa_model',
    num_train_epochs=2,
    per_device_train_batch_size=16
)
This command starts the fine-tuning process.
Python
# Example: Start training
trainer.train()
Sample Model
This example fine-tunes a small QA model on a small part of the SQuAD dataset for 1 epoch and prints evaluation metrics.
Python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, Trainer, TrainingArguments
from datasets import load_dataset
import torch

# Load SQuAD dataset for example
dataset = load_dataset('squad')

# Load pre-trained model and tokenizer
model_name = 'distilbert-base-uncased-distilled-squad'
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Tokenize function
def preprocess_function(examples):
    questions = [q.strip() for q in examples['question']]
    inputs = tokenizer(
        questions,
        examples['context'],
        truncation=True,
        padding='max_length',
        max_length=384,
        return_offsets_mapping=True
    )
    start_positions = []
    end_positions = []
    for i, answer in enumerate(examples['answers']):
        start_char = answer['answer_start'][0]
        end_char = start_char + len(answer['text'][0])
        offsets = inputs['offset_mapping'][i]
        sequence_ids = inputs.sequence_ids(i)
        # Find start and end token positions, looking only at context tokens
        # (sequence id 1). Question tokens also have character offsets starting
        # at 0, so without this check they could match the answer span by accident.
        start_pos = 0
        end_pos = 0
        for idx, (start, end) in enumerate(offsets):
            if sequence_ids[idx] != 1:
                continue
            if start <= start_char < end:
                start_pos = idx
            if start < end_char <= end:
                end_pos = idx
        start_positions.append(start_pos)
        end_positions.append(end_pos)
    inputs['start_positions'] = start_positions
    inputs['end_positions'] = end_positions
    # Remove offset_mapping as it's not needed for training
    inputs.pop('offset_mapping')
    return inputs

# For simplicity, use small subset
small_train = dataset['train'].select(range(100))
small_eval = dataset['validation'].select(range(50))

train_dataset = small_train.map(preprocess_function, batched=True, remove_columns=dataset['train'].column_names)
eval_dataset = small_eval.map(preprocess_function, batched=True, remove_columns=dataset['validation'].column_names)

# Set training arguments
training_args = TrainingArguments(
    output_dir='./qa_finetuned',
    evaluation_strategy='epoch',
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=1,
    weight_decay=0.01,
    logging_dir='./logs',
    logging_steps=10
)

# Create Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset
)

# Train model
trainer.train()

# Evaluate model
eval_results = trainer.evaluate()
print(f"Evaluation results: {eval_results}")
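The token-position search inside preprocess_function can be sketched in isolation with hand-made offsets (the offset pairs and answer span below are invented for illustration; real offsets come from a fast tokenizer with return_offsets_mapping=True):

```python
# Hand-made (start_char, end_char) offsets for the context tokens of
# "The quick fox appeared"; each pair covers one token's character span.
offsets = [(0, 3), (4, 9), (10, 13), (14, 22)]
start_char, end_char = 10, 13  # character span of the answer "fox"

# Walk the offsets and record which tokens contain the answer boundaries.
start_pos = 0
end_pos = 0
for idx, (start, end) in enumerate(offsets):
    if start <= start_char < end:
        start_pos = idx
    if start < end_char <= end:
        end_pos = idx

print(start_pos, end_pos)  # → 2 2 (the answer is entirely inside token 2, "fox")
```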
Important Notes
Fine-tuning takes time and needs enough high-quality data to improve the model meaningfully.
Make sure your questions and answers are clear and correctly aligned with the context.
Evaluate regularly to check that your model is actually learning and improving.
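Answer-context alignment can be checked programmatically before training. A minimal sketch with a made-up record, verifying that answer_start actually points at the answer text:

```python
# Hypothetical example records; in practice these come from your own dataset.
examples = [
    {"context": "Transformers was created by Hugging Face.",
     "question": "Who created Transformers?",
     "answers": {"text": ["Hugging Face"], "answer_start": [28]}},
]

def answer_is_aligned(record):
    """Return True if answer_start really points at the answer text in the context."""
    text = record["answers"]["text"][0]
    start = record["answers"]["answer_start"][0]
    return record["context"][start:start + len(text)] == text

for record in examples:
    assert answer_is_aligned(record), record["question"]
```

Running a check like this over the whole dataset catches off-by-one offsets early, before they silently corrupt the start/end position labels.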
Summary
Fine-tuning a custom QA model helps it answer questions about your own data more accurately.
You need a dataset of questions, contexts, and answer spans to train the model.
Hugging Face Transformers and its Trainer make the fine-tuning loop straightforward.