Challenge - 5 Problems
Custom NER Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
intermediate · 2:00 remaining
Output of training loop snippet for custom NER
What will be the printed output after running this training loop snippet for 3 iterations?
    import spacy
    from spacy.training.example import Example

    nlp = spacy.blank('en')
    ner = nlp.add_pipe('ner')
    ner.add_label('ANIMAL')
    optimizer = nlp.begin_training()

    # Entity offsets are (start_char, end_char): 'dog' spans 9-12, 'cat' spans 11-14
    TRAIN_DATA = [
        ('I have a dog', {'entities': [(9, 12, 'ANIMAL')]}),
        ('She owns a cat', {'entities': [(11, 14, 'ANIMAL')]}),
    ]

    for i in range(3):
        losses = {}
        for text, annotations in TRAIN_DATA:
            doc = nlp.make_doc(text)
            example = Example.from_dict(doc, annotations)
            nlp.update([example], sgd=optimizer, losses=losses)
        print(f'Iteration {i+1}, Losses: {losses}')
💡 Hint
Losses usually decrease as training progresses but start from a positive value.
✗ Incorrect
The losses start at a positive value and decrease as the model learns. They won't be zero initially because the model is blank and still has to learn the ANIMAL entity.
❓ Model Choice
intermediate · 1:30 remaining
Choosing the right model architecture for custom NER
Which model architecture is best suited for training a custom Named Entity Recognition (NER) system from scratch?
💡 Hint
NER requires understanding context around each word in a sentence.
✗ Incorrect
Transformer-based models like BERT capture the context around each token and are state-of-the-art for NER. CNNs designed for images and simple feedforward networks lack the sequence-level understanding the task requires.
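In spaCy, for example, swapping the NER backbone for a transformer is done in the training config. The fragment below is an illustrative sketch only: it assumes the spacy-transformers package is installed, and roberta-base is just one possible pretrained model choice.

```
[nlp]
lang = "en"
pipeline = ["transformer", "ner"]

[components.transformer]
factory = "transformer"

[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v3"
name = "roberta-base"

[components.ner]
factory = "ner"
```

Every token's representation then comes from the transformer, so the NER head sees contextual embeddings rather than static word vectors.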
❓ Hyperparameter
advanced · 1:30 remaining
Effect of batch size on custom NER training
What is the most likely effect of increasing the batch size during training of a custom NER model?
💡 Hint
Think about how many examples the model sees before updating weights.
✗ Incorrect
Larger batch sizes average the gradient over more examples before each weight update, which speeds up training and smooths individual updates, but very large batches mean fewer updates per epoch and can hurt generalization.
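The trade-off is easy to see by counting optimizer steps per epoch. A minimal sketch in plain Python, using a hypothetical 1,000-example training set:

```python
import math

def updates_per_epoch(n_examples, batch_size):
    # One optimizer step per batch: fewer, larger batches mean fewer updates.
    return math.ceil(n_examples / batch_size)

# Hypothetical dataset of 1,000 training examples
for bs in (8, 32, 128):
    print(f'batch_size={bs:>3} -> {updates_per_epoch(1000, bs)} updates/epoch')
```

With batch size 8 the model gets 125 weight updates per epoch; with batch size 128 it gets only 8, so each epoch is cheaper but the model adapts in coarser steps.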
❓ Metrics
advanced · 1:30 remaining
Choosing the right metric for custom NER evaluation
Which metric best evaluates the performance of a custom NER model on a test set?
💡 Hint
NER evaluation requires matching whole entities, not just tokens.
✗ Incorrect
Precision, recall, and F1-score computed on exact entity matches (both span boundaries and label must be correct) are the standard NER metrics: they capture how many predicted entities are right and how many true entities were found.
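Entity-level scoring can be sketched in a few lines if gold and predicted entities are represented as (start, end, label) tuples. A minimal illustration with hypothetical toy spans:

```python
def ner_prf(gold, pred):
    """Entity-level precision/recall/F1 with exact-match spans.

    gold and pred are sets of (start_char, end_char, label) tuples.
    """
    tp = len(gold & pred)  # exact matches: same span AND same label
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: model finds one of two entities exactly, mislabels the other
gold = {(0, 3, 'PER'), (10, 15, 'ORG')}
pred = {(0, 3, 'PER'), (10, 15, 'LOC')}
print(ner_prf(gold, pred))  # (0.5, 0.5, 0.5)
```

Note that a prediction with the right span but wrong label counts as both a false positive and a false negative, which is exactly why token-level accuracy overstates NER quality.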
🔧 Debug
expert · 2:00 remaining
Identifying cause of poor entity recognition in custom NER
After training a custom NER model, it fails to recognize any entities in new sentences. Which is the most likely cause?
💡 Hint
If the model never saw entities during training, it cannot learn to recognize them.
✗ Incorrect
Without entity annotations in the training data, the model never learns what an entity looks like and so predicts none at inference time.
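A quick sanity check before training catches this failure mode. The sketch below assumes spaCy-style (text, annotations) training tuples; the example data is hypothetical:

```python
def check_annotations(train_data):
    # Collect examples whose 'entities' list is empty. Such examples can
    # still serve as negatives, but if ALL of them are empty the model
    # will never learn to predict any entity at all.
    empty = [text for text, ann in train_data if not ann.get('entities')]
    total = len(train_data)
    print(f'{total - len(empty)}/{total} examples carry entity annotations')
    return empty

# Hypothetical training data with one unannotated example
TRAIN_DATA = [
    ('I have a dog', {'entities': [(9, 12, 'ANIMAL')]}),
    ('Nice weather today', {'entities': []}),
]
unannotated = check_annotations(TRAIN_DATA)
```

Running a check like this (and confirming that character offsets actually align with the intended tokens) before training avoids the silent "model predicts nothing" outcome described above.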