NLP · ~10 mins

Why QA systems extract answers in NLP - Test Your Understanding

Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)

Complete the code to extract the answer from the model's output.

answer = model.predict(question, context).[1]()
A. decode
B. get_answer
C. strip
D. extract
Common Mistakes
Using methods that do not convert tokens to text, like 'extract' or 'get_answer'.
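For intuition on why a decode-style call is needed here: the model works in token IDs, and decoding maps those IDs back to readable text. The tiny vocabulary and `decode` helper below are hypothetical stand-ins for a real tokenizer, just to illustrate the step.

```python
# Minimal sketch of what "decode" does: map token IDs back to text.
# The vocabulary here is a hypothetical stand-in for a real tokenizer's.
id_to_token = {0: "paris", 1: "is", 2: "the", 3: "capital"}

def decode(token_ids):
    # Look up each ID and join the tokens into a readable string.
    return " ".join(id_to_token[i] for i in token_ids)

predicted_ids = [0]          # what a QA model's answer span might map to
answer = decode(predicted_ids)
```

Methods like `extract` or `get_answer` would leave you with IDs or tokens; only a decode step produces the final text.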
2. Fill in the blank (medium)

Complete the code to tokenize the input question for the QA system.

inputs = tokenizer.[1](question, return_tensors='pt')
A. transform
B. tokenize
C. parse
D. encode
Common Mistakes
Using 'tokenize', which returns a list of tokens rather than the tensor format the model expects.
3. Fill in the blank (hard)

Fix the error in the code to get the start position of the answer.

start_pos = outputs.start_logits.[1](dim=1).argmax()
A. sum
B. max
C. softmax
D. mean
Common Mistakes
Using 'max' directly on logits without softmax.
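A minimal sketch of the softmax-then-argmax step, using hand-written logits as a stand-in for a model's `start_logits` (the values are illustrative, not from a real model):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, exponentiate, normalize.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical start logits for a 5-token input.
start_logits = [0.1, 2.5, 0.3, 4.2, 1.0]
probs = softmax(start_logits)            # a proper probability distribution
start_pos = probs.index(max(probs))      # most likely start token: index 3
```

Note that softmax is monotonic, so the argmax it yields matches that of the raw logits; converting to probabilities matters when you need calibrated confidence scores or want to compare candidate spans.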
4. Fill in the blank (hard)

Fill both blanks to extract the answer text from tokens.

answer_tokens = inputs.input_ids[0][[1]:[2]]
answer = tokenizer.decode(answer_tokens)
A. start_pos
B. end_pos
C. 0
D. len(inputs.input_ids[0])
Common Mistakes
Using fixed indices like 0 or full length instead of predicted positions.
5. Fill in the blank (hard)

Fill all three blanks to prepare inputs and get the answer from the QA model.

inputs = tokenizer.[1](question, context, return_tensors='pt')
outputs = model(**inputs)
start_pos = outputs.start_logits.[2](dim=1).argmax()
end_pos = outputs.end_logits.[3](dim=1).argmax()
A. encode
B. softmax
D. tokenize
Common Mistakes
Skipping softmax or using tokenize instead of encode.
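A toy end-to-end sketch of the span-extraction steps in Task 5, with hand-written logits standing in for a real QA model's outputs (tokens and values are illustrative only):

```python
# Toy span extraction: pick the most likely start and end tokens,
# then slice the answer out of the token sequence.
tokens = ["[CLS]", "who", "wrote", "emma", "[SEP]",
          "jane", "austen", "wrote", "emma", "[SEP]"]
start_logits = [0.1, 0.0, 0.2, 0.1, 0.0, 4.0, 0.3, 0.1, 0.2, 0.0]
end_logits   = [0.0, 0.1, 0.0, 0.2, 0.1, 0.3, 3.5, 0.2, 0.1, 0.0]

def argmax(xs):
    # Index of the largest value; softmax preserves this argmax,
    # so .softmax(...).argmax() lands on the same position.
    return max(range(len(xs)), key=xs.__getitem__)

start_pos = argmax(start_logits)   # index 5
end_pos = argmax(end_logits)       # index 6
# This toy treats the end index as inclusive; real pipelines differ
# in whether the end position is inclusive or exclusive.
answer = " ".join(tokens[start_pos:end_pos + 1])
```

A real implementation would replace the hand-written logits with `outputs.start_logits` and `outputs.end_logits` from the model, and use the tokenizer's own decode method instead of the join.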