Practice - 5 Tasks
Answer the questions below
Task 1 · Fill in the blank · Easy · NLP
Complete the code to load the pre-trained QA model.

from transformers import AutoModelForQuestionAnswering
model = AutoModelForQuestionAnswering.from_pretrained([1])
💡 Hint (common mistakes to avoid):
Choosing image classification models like 'resnet50' or 'vgg16'.
Using language models not fine-tuned for QA like 'gpt2'.
Explanation: 'bert-base-uncased' is a common pre-trained starting point for question-answering tasks.
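As a worked sketch of the completed answer (assuming the 'bert-base-uncased' checkpoint named in the explanation; any QA-capable checkpoint would do):

```python
from transformers import AutoModelForQuestionAnswering

# Load a pre-trained encoder with a question-answering head.
# Note: the QA head on top of 'bert-base-uncased' is freshly initialized
# and would normally be fine-tuned before real use.
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
```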
Task 2 · Fill in the blank · Medium · NLP
Complete the code to prepare the tokenizer for the QA model.

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained([1])
💡 Hint (common mistakes to avoid):
Using a tokenizer that does not match the model architecture.
Choosing a cased tokenizer when the model is uncased.
Explanation: The tokenizer must match the model architecture; here 'bert-base-uncased' matches the model loaded in Task 1.
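Filled in, the answer looks like the sketch below (again assuming 'bert-base-uncased'; the key point is that the tokenizer checkpoint matches the model checkpoint so vocabulary and casing line up):

```python
from transformers import AutoTokenizer

# Use the same checkpoint name as the model from Task 1.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Quick sanity check: BERT tokenizers wrap input in [CLS] ... [SEP].
encoded = tokenizer("Hello world")
```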
Task 3 · Fill in the blank · Hard · NLP
Fix the error in the training loop to correctly compute the loss.

outputs = model(**inputs)
loss = outputs.[1]
💡 Hint (common mistakes to avoid):
Using 'logits' instead of 'loss' to compute training loss.
Accessing 'hidden_states' or 'attentions' which are not loss values.
Explanation: The 'loss' attribute of the model output holds the computed training loss; the 'logits' are raw per-token scores, not a loss value.
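To see why 'logits' alone are not a loss: for a QA head, outputs.loss is (roughly) the mean of two cross-entropy losses computed from the start/end logits against the true span positions. A minimal stand-alone sketch with dummy tensors (no pre-trained model needed; all numbers are illustrative):

```python
import torch
import torch.nn.functional as F

# Dummy start/end logits for a batch of 1 over a 6-token sequence.
start_logits = torch.tensor([[0.1, 0.2, 5.0, 0.3, 0.1, 0.1]])
end_logits = torch.tensor([[0.1, 0.1, 0.2, 4.0, 0.1, 0.1]])

# True answer span positions (illustrative labels).
start_positions = torch.tensor([2])
end_positions = torch.tensor([3])

# This mirrors what a QA model computes internally as outputs.loss:
loss = (F.cross_entropy(start_logits, start_positions)
        + F.cross_entropy(end_logits, end_positions)) / 2
```

The logits are per-position scores (one per token), while the loss is a single scalar that the optimizer can backpropagate through.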
Task 4 · Fill in the blank · Hard · NLP
Fill both blanks to tokenize the question and context correctly.

inputs = tokenizer([1], [2], truncation=True, padding=True, return_tensors="pt")
💡 Hint (common mistakes to avoid):
Passing literal strings instead of variables.
Swapping question and context order.
Explanation: The tokenizer takes the question and context variables, in that order, to prepare the model input.
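Filled in, the call passes the question first and the context second. A sketch (the variable values and the 'bert-base-uncased' checkpoint are assumptions carried over from the earlier tasks):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Illustrative question/context pair; order matters (question first).
question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare."

inputs = tokenizer(question, context, truncation=True, padding=True,
                   return_tensors="pt")
```

Passing the two texts as separate arguments makes the tokenizer emit segment ids that tell the model which tokens belong to the question and which to the context.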
Task 5 · Fill in the blank · Hard · NLP
Fill all three blanks to extract the answer span from the model outputs.

answer_start = torch.argmax(outputs.start_logits, dim=[1])
answer_end = torch.argmax(outputs.end_logits, dim=[2])
answer = tokenizer.convert_tokens_to_string(
    tokenizer.convert_ids_to_tokens(inputs.input_ids[0][answer_start:answer_end + [3]])
)
💡 Hint (common mistakes to avoid):
Using dim=0 which is batch dimension.
Not adding 1 to the end index causing incomplete answer.
Explanation: The logits are 2D tensors of shape (batch, sequence length), so argmax is taken over dimension 1 (the sequence dimension). The answer span is inclusive of the end token, so 1 is added to the end index.
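The indexing can be checked without a trained model by running the same operations on dummy tensors (all values below are illustrative assumptions, not real model outputs):

```python
import torch

# Dummy logits for one example with a 6-token sequence:
# shape (batch=1, seq_len=6), so argmax must use dim=1.
start_logits = torch.tensor([[0.1, 0.2, 5.0, 0.3, 0.1, 0.1]])
end_logits = torch.tensor([[0.1, 0.1, 0.2, 4.0, 0.1, 0.1]])

answer_start = torch.argmax(start_logits, dim=1)  # token index 2
answer_end = torch.argmax(end_logits, dim=1)      # token index 3

# Dummy token ids; the slice is end-inclusive thanks to the +1.
input_ids = torch.tensor([[101, 2040, 2003, 3835, 1029, 102]])
span_ids = input_ids[0][answer_start:answer_end + 1]
```

Without the +1, the slice would stop just before the predicted end token and truncate the answer; with dim=0, argmax would run over the batch dimension instead of the sequence.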