Practice - 5 Tasks
Answer the questions below
Task 1 · Fill in the blank (easy)
Complete the code to extract the answer from the model's output.
NLP
answer = model.predict(question, context).[1]()
Common mistake: using a method that does not convert tokens to text, such as extract() or get_answer().
Correct answer: decode(). It converts the model's output tokens into a readable string answer.
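The decoding step can be sketched with a toy vocabulary (the mapping and ids below are made up for illustration; a real tokenizer, e.g. from Hugging Face transformers, performs the same ids-to-text lookup, plus subword handling):

```python
# Toy stand-in for a tokenizer's decode(): the vocabulary is hypothetical,
# chosen only to illustrate turning output token ids back into text.
ID_TO_TOKEN = {0: "the", 1: "eiffel", 2: "tower"}

def decode(token_ids):
    # Look up each id and join the tokens into a readable answer string.
    return " ".join(ID_TO_TOKEN[i] for i in token_ids)

predicted_ids = [1, 2]          # pretend the model predicted these ids
answer = decode(predicted_ids)
print(answer)                   # eiffel tower
```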
Task 2 · Fill in the blank (medium)
Complete the code to tokenize the input question for the QA system.
NLP
inputs = tokenizer.[1](question, return_tensors='pt')
Common mistake: using tokenize(), which returns tokens but not the tensor format the model expects.
Correct answer: encode(). It converts the question text into token ids the model can understand; return_tensors='pt' makes it return PyTorch tensors.
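What encoding does can be sketched in plain Python with a toy vocabulary (hypothetical words and ids; a real tokenizer also handles subwords, special tokens, and the return_tensors='pt' tensor conversion):

```python
# Toy stand-in for tokenizer.encode(): maps each word to an id.
VOCAB = {"where": 0, "is": 1, "the": 2, "eiffel": 3, "tower": 4}

def encode(text):
    # Split on whitespace and look up each word's id.
    return [VOCAB[word] for word in text.lower().split()]

inputs = encode("Where is the Eiffel Tower")
print(inputs)  # [0, 1, 2, 3, 4]
```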
Task 3 · Fill in the blank (hard)
Fix the error in the code to get the start position of the answer.
NLP
start_pos = outputs.start_logits.[1](dim=1).argmax()
Common mistake: calling max() directly on the logits without applying softmax first.
Correct answer: softmax(). It converts the logits into probabilities before argmax() picks the most likely start index.
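What softmax does to the logits can be sketched in plain Python (the logit values below are made up; torch's Tensor.softmax computes the same normalization per row):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, exponentiate, normalize.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

start_logits = [0.1, 2.3, 0.5, 1.1]        # made-up per-token scores
probs = softmax(start_logits)
start_pos = probs.index(max(probs))        # argmax over probabilities
print(start_pos)  # 1

# Note: softmax is monotonic, so the argmax index is the same over raw
# logits; softmax makes the scores interpretable as probabilities.
```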
Task 4 · Fill in the blanks (hard)
Fill both blanks to extract the answer text from tokens.
NLP
answer_tokens = inputs.input_ids[0][[1]:[2]]
answer = tokenizer.decode(answer_tokens)
Common mistake: using fixed indices such as 0 or the full length instead of the predicted positions.
Correct answers: start_pos and end_pos. We slice the tokens from the predicted start position to the predicted end position to get the answer span.
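The slicing step can be sketched with toy ids (vocabulary and positions are hypothetical). One caveat: with Hugging Face QA models the predicted end index is inclusive, so real code typically slices up to end_pos + 1, as shown here:

```python
# Toy vocabulary standing in for a real tokenizer's id-to-token table.
ID_TO_TOKEN = {0: "[CLS]", 1: "paris", 2: "is", 3: "nice", 4: "[SEP]"}
input_ids = [0, 1, 2, 3, 4]
start_pos, end_pos = 1, 2               # pretend these were predicted

# Slice the answer span; end_pos is inclusive here, hence the +1.
answer_tokens = input_ids[start_pos:end_pos + 1]
answer = " ".join(ID_TO_TOKEN[i] for i in answer_tokens)
print(answer)  # paris is
```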
Task 5 · Fill in the blanks (hard)
Fill all three blanks to prepare inputs and get the answer from the QA model.
NLP
inputs = tokenizer.[1](question, context, return_tensors='pt')
outputs = model(**inputs)
start_pos = outputs.start_logits.[2](dim=1).argmax()
end_pos = outputs.end_logits.[3](dim=1).argmax()
Common mistake: skipping softmax, or using tokenize() instead of encode().
Correct answers: encode(), then softmax() for both blanks. We encode the inputs, then apply softmax to the start and end logits before argmax to find the answer positions.
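The whole flow can be sketched end to end with toy components (everything here, the vocabulary, the fixed logits standing in for a model, and the helper names, is illustrative, not the real transformers API):

```python
import math

VOCAB = ["[CLS]", "who", "?", "[SEP]", "ada", "lovelace"]

def encode(words):
    # Toy encode: map each token to its index in the vocabulary.
    return [VOCAB.index(w) for w in words]

def softmax(xs):
    # Exponentiate (shifted by the max for stability) and normalize.
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def argmax(xs):
    return xs.index(max(xs))

input_ids = encode(["[CLS]", "who", "?", "[SEP]", "ada", "lovelace"])
start_logits = [0.0, 0.1, 0.0, 0.0, 3.0, 0.2]   # made-up model outputs
end_logits   = [0.0, 0.0, 0.1, 0.0, 0.3, 3.0]

start_pos = argmax(softmax(start_logits))       # index 4
end_pos   = argmax(softmax(end_logits))         # index 5
answer = " ".join(VOCAB[i] for i in input_ids[start_pos:end_pos + 1])
print(answer)  # ada lovelace
```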