
Dependency parsing in NLP - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
Understanding Dependency Parsing Output

Given the sentence "The cat sat on the mat." and its dependency parse, which of the following correctly identifies the head of the word "sat"?

A. "sat" is the root of the sentence, so it has no head.
B. "sat" has the head "cat" because the cat is doing the action.
C. "sat" has the head "on" because it shows location.
D. "sat" has the head "mat" because it is the object.
💡 Hint

Think about which word is the main verb that connects the sentence.
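The hint can be made concrete with a small hand-written sketch (plain Python, not actual parser output) of a Universal-Dependencies-style parse of the sentence, pairing each word with its head; the root is the one word with no head:

```python
# Hand-written dependency parse of "The cat sat on the mat."
# as (word, head) pairs; None marks the root. Attachments follow
# standard Universal Dependencies conventions (an assumption here,
# not output from a real parser).
parse = [
    ("The", "cat"),   # determiner attaches to its noun
    ("cat", "sat"),   # subject attaches to the main verb
    ("sat", None),    # the main verb is the root: it has no head
    ("on", "mat"),    # preposition attaches to "mat" (UD case-marker style)
    ("the", "mat"),
    ("mat", "sat"),   # the nominal attaches to the verb
    (".", "sat"),     # punctuation attaches to the root
]

# Find the root: the word whose head is None.
root = [word for word, head in parse if head is None]
print(root)  # ['sat']
```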

Predict Output · intermediate
Output of Dependency Parsing with spaCy

What is the output of the following code snippet that uses spaCy to parse the sentence "She enjoys reading books." and prints each token's text and its dependency label?

Python
import spacy
nlp = spacy.load('en_core_web_sm')
doc = nlp('She enjoys reading books.')
output = [(token.text, token.dep_) for token in doc]
print(output)
A. [('She', 'nsubj'), ('enjoys', 'ROOT'), ('reading', 'xcomp'), ('books', 'dobj'), ('.', 'punct')]
B. [('She', 'dobj'), ('enjoys', 'nsubj'), ('reading', 'ROOT'), ('books', 'xcomp'), ('.', 'punct')]
C. [('She', 'ROOT'), ('enjoys', 'nsubj'), ('reading', 'dobj'), ('books', 'xcomp'), ('.', 'punct')]
D. [('She', 'punct'), ('enjoys', 'dobj'), ('reading', 'nsubj'), ('books', 'ROOT'), ('.', 'xcomp')]
💡 Hint

Remember: the subject usually carries the label nsubj, and the main verb of the sentence is labeled ROOT.

Model Choice · advanced
Choosing a Dependency Parsing Model for Low-Resource Language

You want to build a dependency parser for a low-resource language with very limited annotated data. Which model approach is most suitable?

A. Train a large transformer-based parser from scratch on the small dataset.
B. Use a rule-based parser handcrafted for the language without machine learning.
C. Use a pre-trained multilingual model and fine-tune it on the small dataset.
D. Train a simple linear classifier on raw text without any linguistic features.
💡 Hint

Think about leveraging existing knowledge from other languages.

Metrics · advanced
Evaluating Dependency Parsing Accuracy

Which metric best measures how well a dependency parser predicts the correct head for each word in a sentence?

A. BLEU score
B. Perplexity
C. F1 score for named entity recognition
D. Unlabeled Attachment Score (UAS)
💡 Hint

Focus on metrics that evaluate syntactic structure correctness.
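For reference, the metric named in option D is easy to compute: it is simply the fraction of tokens whose predicted head matches the gold head. A minimal sketch, using hypothetical head indices for a five-token sentence (0 denotes the root):

```python
def uas(gold_heads, pred_heads):
    """Unlabeled Attachment Score: fraction of tokens whose
    predicted head index matches the gold head index."""
    assert len(gold_heads) == len(pred_heads)
    correct = sum(g == p for g, p in zip(gold_heads, pred_heads))
    return correct / len(gold_heads)

# Hypothetical gold and predicted head indices (made up for illustration).
gold = [2, 2, 0, 5, 3]
pred = [2, 2, 0, 3, 3]   # one of five heads predicted incorrectly
print(uas(gold, pred))   # 0.8
```

Labeled Attachment Score (LAS) works the same way but additionally requires the dependency label to match.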

🔧 Debug · expert
Debugging Incorrect Dependency Parse Output

You trained a dependency parser, but it often predicts the root word incorrectly, assigning root to punctuation marks. Which is the most likely cause?

A. The evaluation metric is not suitable for dependency parsing.
B. The training data has incorrect root annotations for punctuation.
C. The tokenizer splits words incorrectly, causing misalignment.
D. The model uses too high a learning rate, causing overfitting.
💡 Hint

Check the quality of your training annotations carefully.
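One way to act on this hint is to scan the gold annotations directly for punctuation tokens marked as the root. A minimal sketch, assuming CoNLL-U-style tab-separated columns (ID, FORM, LEMMA, UPOS, ..., HEAD, DEPREL) and a small hypothetical excerpt:

```python
# Hypothetical two-token CoNLL-U excerpt (simplified to 8 columns).
# The "!" token is wrongly annotated with HEAD = 0, i.e. as a root.
conllu = """\
1\tHello\thello\tINTJ\t_\t_\t0\troot
2\t!\t!\tPUNCT\t_\t_\t0\tpunct
"""

# Collect punctuation tokens whose HEAD column is 0 (root position).
bad = []
for line in conllu.splitlines():
    cols = line.split("\t")
    if len(cols) >= 8 and cols[3] == "PUNCT" and cols[6] == "0":
        bad.append(cols[1])

print(bad)  # punctuation tokens annotated as root -> ['!']
```

If this list is non-empty across your training corpus, the parser is faithfully learning a labeling error in the data rather than misbehaving on its own.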