NLP · ~10 minutes

Custom QA model fine-tuning in NLP - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to load the pre-trained QA model.

from transformers import AutoModelForQuestionAnswering
model = AutoModelForQuestionAnswering.from_pretrained([1])
A. "vgg16"
B. "gpt2"
C. "bert-base-uncased"
D. "resnet50"
Common Mistakes:
- Choosing image-classification models such as "resnet50" or "vgg16".
- Using language models not fine-tuned for QA, such as "gpt2".
Task 2: Fill in the blank (medium)

Complete the code to prepare the tokenizer for the QA model.

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained([1])
A. "bert-base-uncased"
B. "bert-base-cased"
C. "roberta-base"
D. "distilbert-base-uncased"
Common Mistakes:
- Using a tokenizer that does not match the model architecture.
- Choosing a cased tokenizer when the model is uncased.
Task 3: Fill in the blank (hard)

Fix the error in the training loop so it correctly computes the loss.

outputs = model(**inputs)
loss = outputs.[1]
A. attentions
B. hidden_states
C. logits
D. loss
Common Mistakes:
- Using 'logits' instead of 'loss' to compute the training loss.
- Accessing 'hidden_states' or 'attentions', which are intermediate activations, not loss values.
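To see why 'loss' is the right field without downloading a model, here is a minimal sketch using a mock of the output object a Hugging Face QA model returns (the real class, QuestionAnsweringModelOutput, has these same fields; the numeric values below are made up). Note that in the real library the loss field is only populated when you pass labels (start_positions/end_positions) to the model.

```python
from dataclasses import dataclass
from typing import List, Optional

# Toy mock of a Hugging Face QA model output; real models return a
# QuestionAnsweringModelOutput with the same field names.
@dataclass
class QAOutput:
    loss: Optional[float]      # set only when labels were passed to the model
    start_logits: List[float]  # per-token scores for the answer start
    end_logits: List[float]    # per-token scores for the answer end

def training_step(outputs: QAOutput) -> float:
    # Correct: backpropagate the scalar loss, not the raw logits
    # (logits are per-token scores, not a training objective).
    if outputs.loss is None:
        raise ValueError("No loss returned: pass start_positions/end_positions to the model.")
    return outputs.loss

outputs = QAOutput(loss=0.42, start_logits=[0.1, 2.3, 0.5], end_logits=[0.2, 0.4, 1.9])
print(training_step(outputs))  # 0.42
```

The mock also makes the failure mode concrete: calling the model without labels leaves loss as None, so trying to train on it fails immediately.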
Task 4: Fill in the blank (hard)

Fill both blanks to tokenize the question and context correctly.

inputs = tokenizer([1], [2], truncation=True, padding=True, return_tensors="pt")
A. "What is AI?"
B. "AI stands for Artificial Intelligence."
C. question
D. context
Common Mistakes:
- Passing literal strings instead of the question and context variables.
- Swapping the order of question and context.
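Why does the order matter? For BERT-style QA models, the tokenizer encodes the pair as two segments: [CLS] question [SEP] context [SEP], and the model expects the question first. The toy stand-in below (not the real tokenizer, just an illustration of the segment layout) shows what swapping would do:

```python
def encode_pair(question: str, context: str) -> str:
    # Toy stand-in for tokenizer(question, context, ...): BERT-style QA
    # models expect the question as the first segment and the context
    # as the second, i.e. [CLS] question [SEP] context [SEP].
    return f"[CLS] {question} [SEP] {context} [SEP]"

question = "What is AI?"
context = "AI stands for Artificial Intelligence."
print(encode_pair(question, context))
# [CLS] What is AI? [SEP] AI stands for Artificial Intelligence. [SEP]
```

Calling encode_pair(context, question) would put the context in the question segment, so the model would look for the answer in the wrong text span.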
Task 5: Fill in the blank (hard)

Fill all three blanks to extract the answer span from model outputs.

answer_start = torch.argmax(outputs.start_logits, dim=[1])
answer_end = torch.argmax(outputs.end_logits, dim=[2])
answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(inputs.input_ids[0][answer_start:answer_end+[3]]))
A. 0
B. 1
Common Mistakes:
- Using dim=0, which is the batch dimension rather than the sequence dimension.
- Not adding 1 to the end index, which drops the final answer token because Python slicing excludes the end index.
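The span-extraction logic above can be sketched in pure Python, with toy logits standing in for the model's tensors (the tokens and scores below are made up for illustration; argmax over a flat list plays the role of torch.argmax along the sequence dimension):

```python
# Toy tokens and logits standing in for tokenizer output and model
# start/end scores; values are illustrative only.
tokens = ["[CLS]", "what", "is", "ai", "?", "[SEP]",
          "ai", "stands", "for", "artificial", "intelligence", "[SEP]"]
start_logits = [0.1, 0.0, 0.0, 0.2, 0.0, 0.0, 0.3, 0.1, 0.2, 3.1, 0.4, 0.0]
end_logits   = [0.0, 0.1, 0.0, 0.0, 0.2, 0.0, 0.1, 0.0, 0.3, 0.2, 3.5, 0.0]

def argmax(xs):
    # Index of the largest score along the sequence dimension
    # (the analogue of torch.argmax(..., dim=1) for a batch of one).
    return max(range(len(xs)), key=xs.__getitem__)

answer_start = argmax(start_logits)  # index of the best start token
answer_end = argmax(end_logits)      # index of the best end token

# +1 because Python slicing excludes the end index; without it the
# final answer token would be dropped.
answer_tokens = tokens[answer_start:answer_end + 1]
print(" ".join(answer_tokens))  # artificial intelligence
```

Dropping the "+ 1" here would yield only "artificial", which is exactly the incomplete-answer mistake the task warns about.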