Challenge - 5 Problems
NER Mastery Badge
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual
intermediate · 1:00 remaining
What is the main goal of Named Entity Recognition (NER)?
Choose the best description of what Named Entity Recognition does in text processing.
Attempts:
2 left
💡 Hint
Think about what entities like people or locations are in a sentence.
✗ Incorrect
NER finds specific words or phrases that represent real-world objects such as people, organizations, or locations and labels them accordingly.
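To make that description concrete, here is a toy, dictionary-lookup sketch of what NER produces: labeled spans for real-world entities in a sentence. The entity list and labels below are made up for illustration; real NER systems use statistical models, not lookups.

```python
# Toy dictionary-lookup "NER" (hypothetical entity list, illustration only).
KNOWN_ENTITIES = {
    'Apple': 'ORG',        # organization
    'Tim Cook': 'PERSON',  # person
    'California': 'GPE',   # geopolitical entity
}

def toy_ner(sentence):
    """Return (text, label) pairs for known entities found in the sentence."""
    return [(name, label) for name, label in KNOWN_ENTITIES.items()
            if name in sentence]

print(toy_ner('Tim Cook runs Apple from California.'))
# [('Apple', 'ORG'), ('Tim Cook', 'PERSON'), ('California', 'GPE')]
```

A real model generalizes beyond a fixed list, but the output shape is the same: spans of text plus entity labels.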
❓ Predict Output
intermediate · 1:30 remaining
Output of NER model prediction on a sentence
Given the following code using spaCy to perform NER, what is the printed output?
NLP
import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp('Apple is looking at buying U.K. startup for $1 billion')
for ent in doc.ents:
    print(ent.text, ent.label_)
Attempts:
2 left
💡 Hint
ORG means organization, GPE means geopolitical entity, MONEY means monetary values.
✗ Incorrect
spaCy's small English model labels 'Apple' as an organization (ORG), 'U.K.' as a geopolitical entity (GPE), and '$1 billion' as money (MONEY).
❓ Model Choice
advanced · 1:30 remaining
Choosing the best model architecture for NER
Which model architecture is most suitable for Named Entity Recognition tasks?
Attempts:
2 left
💡 Hint
NER requires understanding sequences of words and their context.
✗ Incorrect
NER models typically use RNNs or Transformers to process sequences and classify each token into entity categories.
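To illustrate the token-classification framing without pulling in a framework: whatever the backbone (RNN or Transformer), the model's head emits one score vector per token, and decoding reduces to an argmax over the label set. The logits below are made-up numbers, a minimal sketch rather than real model output.

```python
# Per-token label scores from a hypothetical NER head; values are invented.
LABELS = ['O', 'B-PER', 'I-PER', 'B-ORG']

tokens = ['Bill', 'Gates', 'founded', 'Microsoft']
logits = [
    [0.1, 2.3, 0.2, 0.0],  # 'Bill'      -> B-PER (begin person)
    [0.2, 0.1, 2.9, 0.3],  # 'Gates'     -> I-PER (inside person)
    [3.1, 0.0, 0.1, 0.2],  # 'founded'   -> O (not an entity)
    [0.3, 0.1, 0.0, 2.5],  # 'Microsoft' -> B-ORG (begin organization)
]

def decode(logits):
    """Pick the highest-scoring label for each token."""
    return [LABELS[max(range(len(row)), key=row.__getitem__)] for row in logits]

print(list(zip(tokens, decode(logits))))
# [('Bill', 'B-PER'), ('Gates', 'I-PER'), ('founded', 'O'), ('Microsoft', 'B-ORG')]
```

The sequence model's job is to make those per-token scores context-aware, so that 'Apple' the company and 'apple' the fruit get different labels.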
❓ Metrics
advanced · 1:30 remaining
Evaluating NER model performance
Which metric is most appropriate to evaluate the quality of a Named Entity Recognition model?
Attempts:
2 left
💡 Hint
NER evaluation focuses on correctly identifying and classifying entities.
✗ Incorrect
Precision, Recall, and F1-score measure how well the model finds and labels entities correctly, which is essential for NER.
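A minimal sketch of entity-level scoring (toy spans, not a standard evaluation library): a predicted entity counts as a true positive only if both its span and its label match a gold annotation exactly.

```python
def entity_prf(gold, pred):
    """Entity-level precision, recall, F1 over sets of (start, end, label) spans."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)  # exact span + label matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical annotations: character offsets plus labels.
gold = {(0, 5, 'ORG'), (26, 30, 'GPE'), (43, 53, 'MONEY')}
pred = {(0, 5, 'ORG'), (26, 30, 'LOC')}  # second entity found but mislabeled

print(entity_prf(gold, pred))  # precision 0.5, recall ~0.33, F1 ~0.4
```

Note that the mislabeled span hurts both precision (a wrong prediction) and recall (a missed gold entity), which is why NER evaluation is usually reported at the entity level rather than per token.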
🔧 Debug
expert · 2:00 remaining
Why does this NER model fail to recognize entities correctly?
Consider this Python code snippet using a custom NER model. After training, it predicts no entities on test sentences. What is the most likely cause?
NLP
from transformers import AutoTokenizer, AutoModelForTokenClassification, Trainer, TrainingArguments
from datasets import load_dataset

model_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=9)

# Dataset loading and tokenization omitted for brevity
training_args = TrainingArguments(output_dir='./results', num_train_epochs=3)
trainer = Trainer(model=model, args=training_args,
                  train_dataset=train_dataset, eval_dataset=eval_dataset)
trainer.train()

# Prediction on test sentence
inputs = tokenizer('Microsoft was founded by Bill Gates.', return_tensors='pt')
outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)
print(predictions)
Attempts:
2 left
💡 Hint
Pretrained models need fine-tuning on task-specific data to perform well.
✗ Incorrect
Without fine-tuning on labeled NER data, the model's token-classification head is randomly initialized, so its predictions are essentially random and no meaningful entities are recognized.