NLP · ML · ~20 mins

Named Entity Recognition in NLP - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual (intermediate)
What is the main goal of Named Entity Recognition (NER)?
Choose the best description of what Named Entity Recognition does in text processing.
A. Translate text from one language to another automatically.
B. Summarize long documents into short paragraphs.
C. Identify and classify key information like names, places, and dates in text.
D. Detect the sentiment or emotion expressed in text.
💡 Hint
Think about what entities like people or locations are in a sentence.
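To make the idea concrete, here is a minimal pure-Python sketch of what an NER system produces: each token gets a BIO tag (B-egin, I-nside, O-utside an entity), and grouping those tags recovers the named entities. The sentence, tags, and helper function are illustrative assumptions, not output from any specific library.

```python
# Hypothetical BIO-tagged sentence: NER identifies spans and classifies
# them as person (PER), location (LOC), date (DATE), and so on.
tokens = ["Barack", "Obama", "visited", "Paris", "in", "2015", "."]
bio_tags = ["B-PER", "I-PER", "O", "B-LOC", "O", "B-DATE", "O"]

def extract_entities(tokens, tags):
    """Group BIO-tagged tokens into (entity_text, entity_type) pairs."""
    entities, current, current_type = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [], None
    if current:
        entities.append((" ".join(current), current_type))
    return entities

print(extract_entities(tokens, bio_tags))
# [('Barack Obama', 'PER'), ('Paris', 'LOC'), ('2015', 'DATE')]
```

This is exactly the behavior option C describes: identifying spans of key information and assigning each a class.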
Predict Output (intermediate)
Output of NER model prediction on a sentence
Given the following code using spaCy to perform NER, what is the printed output?
import spacy
nlp = spacy.load('en_core_web_sm')
doc = nlp('Apple is looking at buying U.K. startup for $1 billion')
for ent in doc.ents:
    print(ent.text, ent.label_)
A.
Apple ORG
U.K. GPE
$1 billion MONEY

B.
Apple ORG
U.K. LOC
$1 billion MONEY

C.
Apple PERSON
U.K. LOC
$1 billion QUANTITY

D.
Apple ORG
U.K. ORG
$1 billion MONEY
💡 Hint
ORG means organization, GPE means geopolitical entity, MONEY means monetary values.
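As a quick reference for the labels that appear in the options, here is a small lookup table sketch. The descriptions paraphrase spaCy's standard entity-label glossary; the dictionary itself is illustrative, not spaCy's API.

```python
# Hedged reference table: meanings of the spaCy entity labels used
# in the answer options above.
LABEL_MEANINGS = {
    "ORG": "Companies, agencies, institutions",
    "GPE": "Countries, cities, states (geopolitical entities)",
    "LOC": "Non-GPE locations such as mountain ranges and bodies of water",
    "MONEY": "Monetary values, including the unit",
    "PERSON": "People, including fictional characters",
    "QUANTITY": "Measurements, such as weight or distance",
}

for label in ("ORG", "GPE", "MONEY"):
    print(f"{label}: {LABEL_MEANINGS[label]}")
```

Note that a country abbreviation like "U.K." is a geopolitical entity (GPE), not a bare location (LOC).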
Model Choice (advanced)
Choosing the best model architecture for NER
Which model architecture is most suitable for Named Entity Recognition tasks?
A. K-Means clustering for unsupervised grouping
B. Convolutional Neural Network (CNN) for image classification
C. Generative Adversarial Network (GAN) for data generation
D. Recurrent Neural Network (RNN) or Transformer-based models with token-level classification
💡 Hint
NER requires understanding sequences of words and their context.
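The phrase "token-level classification" in option D can be sketched in a few lines: the model emits one score vector per token, and the per-token argmax picks that token's tag. The logits below are hand-written stand-ins for what a real RNN or Transformer would produce.

```python
# Minimal sketch of token-level classification (the idea behind option D),
# with fake hand-written logits in place of a real model's output.
labels = ["O", "B-ORG", "B-LOC"]
tokens = ["Google", "opened", "offices", "in", "Zurich"]
logits = [
    [0.1, 2.3, 0.2],   # Google  -> highest score for B-ORG
    [3.0, 0.1, 0.0],   # opened  -> highest score for O
    [2.5, 0.3, 0.1],   # offices -> highest score for O
    [2.8, 0.0, 0.2],   # in      -> highest score for O
    [0.2, 0.1, 2.9],   # Zurich  -> highest score for B-LOC
]

def classify_tokens(logits, labels):
    """Assign each token the label with the highest score (argmax)."""
    return [labels[max(range(len(row)), key=row.__getitem__)] for row in logits]

print(list(zip(tokens, classify_tokens(logits, labels))))
```

Clustering, image CNNs, and GANs do not produce this per-token sequence of labels, which is why they are poor fits for NER.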
Metrics (advanced)
Evaluating NER model performance
Which metric is most appropriate to evaluate the quality of a Named Entity Recognition model?
A. Precision, Recall, and F1-score at the entity level
B. BLEU score
C. Accuracy on sentence classification
D. Mean Squared Error (MSE)
💡 Hint
NER evaluation focuses on correctly identifying and classifying entities.
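A minimal sketch of entity-level evaluation (option A): a predicted entity counts as a true positive only if both its span and its type match the gold annotation. The (start, end, type) tuples below are made-up examples.

```python
# Hedged sketch: entity-level precision, recall, and F1 from sets of
# (start, end, type) tuples.
def entity_f1(gold, predicted):
    """An entity is correct only if span AND type match exactly."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {(0, 1, "ORG"), (5, 6, "GPE"), (8, 10, "MONEY")}
pred = {(0, 1, "ORG"), (5, 6, "LOC"), (8, 10, "MONEY")}  # GPE mislabeled
p, r, f = entity_f1(gold, pred)
print(round(p, 3), round(r, 3), round(f, 3))
```

Note how the mislabeled "GPE vs LOC" entity hurts both precision and recall, which sentence-level accuracy or regression metrics like MSE would not capture.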
🔧 Debug (expert)
Why does this NER model fail to recognize entities correctly?
Consider this Python code snippet using a custom NER model. After training, it predicts no entities on test sentences. What is the most likely cause?
from transformers import AutoTokenizer, AutoModelForTokenClassification, Trainer, TrainingArguments
from datasets import load_dataset

model_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=9)

# Dataset loading and tokenization omitted for brevity

training_args = TrainingArguments(output_dir='./results', num_train_epochs=3)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset)
trainer.train()

# Prediction on test sentence
inputs = tokenizer('Microsoft was founded by Bill Gates.', return_tensors='pt')
outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)
print(predictions)
A. The tokenizer is incompatible with the model architecture, causing prediction errors.
B. The model was not fine-tuned on labeled NER data, so it cannot predict entities correctly.
C. The number of labels is set incorrectly to 9 instead of 2.
D. The input sentence is too short for the model to detect entities.
💡 Hint
Pretrained models need fine-tuning on task-specific data to perform well.
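As background for the `num_labels=9` in the snippet above: NER datasets in CoNLL-2003 BIO format use exactly nine tags (O plus B-/I- variants of PER, ORG, LOC, and MISC). The sketch below shows how a fine-tuned model's argmax class ids would be decoded into readable tags; the id ordering is a dataset convention assumed here for illustration.

```python
# Hypothetical id-to-label mapping for the nine CoNLL-2003 BIO tags.
ID2LABEL = {
    0: "O",
    1: "B-PER", 2: "I-PER",
    3: "B-ORG", 4: "I-ORG",
    5: "B-LOC", 6: "I-LOC",
    7: "B-MISC", 8: "I-MISC",
}

def decode(prediction_ids):
    """Map argmax class ids (e.g. from outputs.logits) to readable tags."""
    return [ID2LABEL[i] for i in prediction_ids]

# Without fine-tuning on data labeled with these tags, the randomly
# initialized classification head produces meaningless ids; after
# fine-tuning, they decode to sensible tags like these:
print(decode([3, 0, 0, 0, 1, 2, 0]))
```

This is why option B is the most likely cause: `from_pretrained` loads pretrained BERT weights but attaches a *new, untrained* token-classification head, which only learns to emit these nine labels correctly once it is fine-tuned on annotated NER data.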