
Open-domain QA basics in NLP - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual
intermediate
What is the main difference between open-domain QA and closed-domain QA?

Open-domain QA systems answer questions from any topic using a large knowledge source. Closed-domain QA systems focus on a specific topic or dataset.

Which statement best describes the main difference?

A. Closed-domain QA uses large external databases, while open-domain QA only uses small, fixed datasets.
B. Open-domain QA requires no training, but closed-domain QA always requires training on labeled data.
C. Open-domain QA can answer questions about any topic, while closed-domain QA is limited to a specific subject area.
D. Closed-domain QA systems are always faster than open-domain QA systems.
💡 Hint

Think about the scope of questions each system can handle.

Predict Output
intermediate
What is the output of this simple retrieval step in open-domain QA?

Given a list of documents and a query, the code below finds documents containing the query word.

documents = ["The sky is blue.", "Grass is green.", "The sun is bright."]
query = "sky"
retrieved = [doc for doc in documents if query in doc]
print(retrieved)

What is printed?

A. ["The sky is blue."]
B. ["Grass is green."]
C. ["The sun is bright."]
D. []
💡 Hint

Check which document contains the word 'sky'.

Model Choice
advanced
Which model architecture is best suited for open-domain QA tasks?

Open-domain QA often requires understanding questions and retrieving answers from large text collections.

Which model architecture is most appropriate?

A. A simple linear regression model.
B. A convolutional neural network trained on image classification.
C. A recurrent neural network trained for speech recognition.
D. A sequence-to-sequence transformer model fine-tuned for question answering.
💡 Hint

Consider models designed for text understanding and generation.

Metrics
advanced
Which metric is most appropriate to evaluate an open-domain QA system's answer quality?

Open-domain QA systems produce text answers to questions. Which metric below best measures answer correctness?

A. Accuracy of classifying images into categories.
B. Exact Match (EM) score, measuring whether the predicted answer exactly matches the ground truth.
C. BLEU score used for machine translation quality.
D. Mean Squared Error (MSE) between predicted and true numerical values.
💡 Hint

Think about metrics that compare predicted text answers to correct answers.
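For reference, Exact Match can be computed with a few lines of Python. This is a minimal sketch; the normalization steps (lowercasing and stripping whitespace) are illustrative assumptions, not the exact rules of any particular benchmark:

```python
def exact_match(prediction: str, ground_truth: str) -> bool:
    # Normalize both strings (lowercase, strip surrounding whitespace) before comparing.
    return prediction.strip().lower() == ground_truth.strip().lower()

# EM over a small set of predictions: fraction of answers that match exactly.
preds = ["Paris", "blue "]
golds = ["paris", "green"]
em_score = sum(exact_match(p, g) for p, g in zip(preds, golds)) / len(golds)
print(em_score)  # 0.5
```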

🔧 Debug
expert
Why does this open-domain QA retrieval code return an empty list?

Consider this code snippet for retrieving documents containing a query word:

documents = ["The sky is blue.", "Grass is green.", "The sun is bright."]
query = "Sky"
retrieved = [doc for doc in documents if query in doc]
print(retrieved)

Why is the output an empty list?

A. Because the query 'Sky' has uppercase 'S' but the documents contain lowercase 'sky', so the match fails due to case sensitivity.
B. Because the documents list is empty.
C. Because the query word is not a string.
D. Because the list comprehension syntax is incorrect.
💡 Hint

Check if the query matches the document text exactly including letter case.
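A common fix is to lowercase both the query and the documents before matching. This is only a sketch of simple string normalization; real retrieval systems typically also tokenize and score documents rather than doing substring checks:

```python
documents = ["The sky is blue.", "Grass is green.", "The sun is bright."]
query = "Sky"

# Lowercasing both sides makes the substring check case-insensitive.
retrieved = [doc for doc in documents if query.lower() in doc.lower()]
print(retrieved)  # ['The sky is blue.']
```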