NLP · ~20 mins

QA with the Hugging Face Pipeline in NLP - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output (intermediate)
Output of a simple QA pipeline
What is the output of this code snippet using Hugging Face's QA pipeline?
from transformers import pipeline
qa = pipeline('question-answering')
context = "The Eiffel Tower is located in Paris."
question = "Where is the Eiffel Tower located?"
result = qa(question=question, context=context)
print(result['answer'])
A. "Paris"
B. "Eiffel Tower"
C. "located"
D. "The Eiffel Tower"
💡 Hint
The pipeline extracts the answer span from the context that best answers the question.
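To see what shape the result takes without downloading a model, here is a toy stand-in (the `toy_qa` function and its fixed `score` are illustrative, not part of the transformers API). The real pipeline returns a dict with `score`, `start`, `end`, and `answer`, where `answer` is a span copied verbatim from the context.

```python
# Toy stand-in for the QA pipeline's output format (no model download needed).
# An extractive QA model predicts a (start, end) span inside the context;
# 'answer' is exactly context[start:end]. The score here is made up.
def toy_qa(question, context, answer_text, score=0.99):
    start = context.index(answer_text)  # locate the span, as the model would
    end = start + len(answer_text)
    return {"score": score, "start": start, "end": end, "answer": context[start:end]}

context = "The Eiffel Tower is located in Paris."
result = toy_qa("Where is the Eiffel Tower located?", context, "Paris")
print(result["answer"])  # Paris
```

Because the answer is always a substring of the context, `context[result["start"]:result["end"]]` equals `result["answer"]` — which is why option A, not a paraphrase, is what gets printed.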
Model Choice (intermediate)
Choosing the right model for QA pipeline
Which model is best suited for a question-answering pipeline that requires understanding context and providing precise answers?
A. "bert-base-uncased"
B. "roberta-base"
C. "distilbert-base-uncased-distilled-squad"
D. "gpt2"
💡 Hint
Look for a model fine-tuned specifically on a QA dataset like SQuAD.
Hyperparameter (advanced)
Effect of changing top_k in QA pipeline
In the Hugging Face QA pipeline, what happens if you set the parameter top_k=3 when calling the pipeline?
A. The pipeline returns the top 3 most probable answers instead of just one.
B. The pipeline limits the context to the first 3 sentences.
C. The pipeline uses only 3 tokens from the question for answering.
D. The pipeline runs 3 times and returns the last answer.
💡 Hint
top_k controls how many answers the model returns.
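The selection logic behind `top_k` can be sketched in plain Python: the model scores many candidate (start, end) spans, and `top_k` just keeps the k highest-scoring ones. The candidate spans and scores below are invented for illustration; with the real pipeline, `top_k=1` returns a single dict and `top_k>1` returns a list.

```python
# Sketch of what top_k does: keep the k highest-scoring candidate answers.
# These candidates and scores are made up; the real pipeline derives them
# from the model's start/end logits over the context.
candidates = [
    {"answer": "Paris", "score": 0.97},
    {"answer": "in Paris", "score": 0.02},
    {"answer": "Eiffel Tower", "score": 0.005},
    {"answer": "located in Paris", "score": 0.004},
]

def top_k_answers(candidates, top_k=1):
    ranked = sorted(candidates, key=lambda c: c["score"], reverse=True)
    return ranked[:top_k]  # a list of the top_k best-scoring answers

print([c["answer"] for c in top_k_answers(candidates, top_k=3)])
# ['Paris', 'in Paris', 'Eiffel Tower']
```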
Metrics (advanced)
Evaluating QA model performance
Which metric is commonly used to evaluate the accuracy of a question-answering model on datasets like SQuAD?
A. BLEU score
B. Exact Match (EM) score
C. Mean Squared Error (MSE)
D. Perplexity
💡 Hint
This metric measures if the predicted answer exactly matches the ground truth.
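Exact Match is simple to implement. A sketch following the normalization used by the official SQuAD evaluation script (lowercase, strip punctuation, drop the articles a/an/the, collapse whitespace) before comparing prediction and ground truth:

```python
import re
import string

# SQuAD-style Exact Match: normalize both strings, then compare.
def normalize(text):
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)  # drop English articles
    return " ".join(text.split())                # collapse whitespace

def exact_match(prediction, ground_truth):
    return int(normalize(prediction) == normalize(ground_truth))

print(exact_match("Paris", "paris"))         # 1
print(exact_match("The Paris", "Paris"))     # 1  (article and case ignored)
print(exact_match("Eiffel Tower", "Paris"))  # 0
```

In practice EM is reported alongside a token-level F1 score, which gives partial credit when the predicted span overlaps the ground truth without matching it exactly.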
🔧 Debug (expert)
Debugging a QA pipeline error
You run this code but get a TypeError: 'NoneType' object is not subscriptable. What is the cause?

from transformers import pipeline
qa = pipeline('question-answering')
context = None
question = "What is AI?"
result = qa(question=question, context=context)
print(result['answer'])
A. The model does not support question-answering.
B. The question variable is None, causing the error.
C. The pipeline is not initialized correctly.
D. The context variable is None, so the pipeline cannot find an answer.
💡 Hint
Check the input types passed to the pipeline.
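A common fix is to validate inputs before they reach the pipeline, so a missing context fails fast with a clear message instead of surfacing later as a confusing TypeError. The sketch below uses a hypothetical wrapper `answer_question` and a stub `fake_qa` callable standing in for the real pipeline, so it runs without downloading a model:

```python
# Hypothetical defensive wrapper: validate inputs before calling any QA
# callable (e.g. a transformers question-answering pipeline).
def answer_question(qa, question, context):
    if not isinstance(question, str) or not question.strip():
        raise ValueError("question must be a non-empty string")
    if not isinstance(context, str) or not context.strip():
        raise ValueError("context must be a non-empty string")
    return qa(question=question, context=context)

# Stub QA callable for illustration only: "answers" with the last word
# of the context. The real pipeline runs a model instead.
def fake_qa(question, context):
    return {"answer": context.split()[-1].rstrip(".")}

try:
    answer_question(fake_qa, "What is AI?", None)
except ValueError as e:
    print("caught:", e)  # caught: context must be a non-empty string

result = answer_question(fake_qa, "Where is the Eiffel Tower located?",
                         "The Eiffel Tower is located in Paris.")
print(result["answer"])  # Paris
```

Failing at the boundary with a ValueError that names the bad argument is far easier to debug than a NoneType error raised deep inside the pipeline.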