NLP · ML · ~10 mins

Custom NER training basics in NLP - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1 · Fill in the blank · Easy

Complete the code to import the library needed for Named Entity Recognition (NER) training.

NLP
import [1]
Drag options to blanks, or click a blank then click an option.
A. spacy
B. numpy
C. matplotlib
D. pandas
Common Mistakes
Importing unrelated libraries like numpy or pandas.
Forgetting to import the NLP library before training.
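The completed import, as a minimal sketch. spaCy is assumed to be the NER library throughout these tasks; numpy, matplotlib, and pandas serve other purposes (numerics, plotting, dataframes) and provide no NER training API.

```python
# spaCy supplies the blank(), add_pipe(), and update() calls used in
# the remaining tasks; import it before any training code runs.
import spacy
```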
Task 2 · Fill in the blank · Medium

Complete the code to load a blank English model for NER training.

NLP
nlp = spacy.blank('[1]')
A. fr
B. en
C. de
D. es
Common Mistakes
Using other language codes like 'fr' or 'de' when training English NER.
Passing full language names instead of codes.
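A minimal sketch of the completed line. `spacy.blank()` expects a two-letter ISO 639-1 language code, so English is `'en'`, not `'english'`; the call returns a pipeline with a tokenizer but no trained components.

```python
import spacy

# 'en' selects English tokenization rules; the pipeline starts empty,
# which is what we want before adding and training our own NER.
nlp = spacy.blank('en')
```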
Task 3 · Fill in the blank · Hard

Fix the error in adding the NER pipeline component to the model.

NLP
ner = nlp.add_pipe('[1]')
A. textcat
B. tagger
C. parser
D. ner
Common Mistakes
Using unrelated pipeline components like 'textcat' or 'parser'.
Misspelling the component name.
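A sketch of the corrected line in context. The registered component name for the named-entity recognizer is exactly `'ner'`; `'textcat'` (text classification), `'tagger'` (part-of-speech), and `'parser'` (dependency parsing) are different components and would not train entities.

```python
import spacy

nlp = spacy.blank('en')

# add_pipe() registers the component on the pipeline and returns it,
# so we keep a handle for adding labels in the next step.
ner = nlp.add_pipe('ner')
```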
Task 4 · Fill in the blank · Hard

Fill both blanks to add a new entity label and prepare the optimizer.

NLP
ner.add_label('[1]')
optimizer = nlp.[2]()
A. PERSON
B. train
C. begin_training
D. ORG
Common Mistakes
Using lowercase or unrelated strings as labels.
Calling a non-existent method like 'train()' instead of 'begin_training()'.
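Both blanks filled, as a minimal sketch. Entity labels are uppercase strings by convention (`'PERSON'`, `'ORG'`), and the optimizer comes from `nlp.begin_training()`; there is no `nlp.train()` method. In spaCy 3.x, `nlp.initialize()` is the preferred name for the same call.

```python
import spacy

nlp = spacy.blank('en')
ner = nlp.add_pipe('ner')

# Register the entity type the model should learn to predict.
ner.add_label('PERSON')

# begin_training() initializes the model weights and returns the
# optimizer object passed to nlp.update() during the training loop.
optimizer = nlp.begin_training()
```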
Task 5 · Fill in the blank · Hard

Fill all three blanks to update the model with training data and print the loss.

NLP
for text, annotations in TRAIN_DATA:
    doc = nlp.make_doc(text)
    example = spacy.training.Example.from_dict(doc, [1])
    nlp.update([example], sgd=[2], losses=[3])
A. annotations
B. optimizer
C. losses
D. text
Common Mistakes
Passing the wrong variable for annotations or optimizer.
Not tracking losses correctly.
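All five tasks combined into one runnable sketch. `TRAIN_DATA` here is a hypothetical toy dataset invented for illustration: `(text, annotations)` pairs where each entity span is `(start_char, end_char, label)`.

```python
import spacy
from spacy.training import Example

# Hypothetical toy training data; spans are character offsets.
TRAIN_DATA = [
    ("Ada Lovelace wrote the first program.",
     {"entities": [(0, 12, "PERSON")]}),
    ("Alan Turing worked at Bletchley Park.",
     {"entities": [(0, 11, "PERSON")]}),
]

nlp = spacy.blank('en')                 # Task 2: blank English model
ner = nlp.add_pipe('ner')               # Task 3: add the NER component

# Task 4: register every label appearing in the training data.
for _, annotations in TRAIN_DATA:
    for start, end, label in annotations["entities"]:
        ner.add_label(label)

optimizer = nlp.begin_training()        # Task 4: prepare the optimizer

# Task 5: update the model and track the loss per iteration.
for i in range(10):
    losses = {}
    for text, annotations in TRAIN_DATA:
        doc = nlp.make_doc(text)
        # annotations builds the gold-standard Example; the optimizer
        # drives the weight update; losses accumulates the NER loss.
        example = Example.from_dict(doc, annotations)
        nlp.update([example], sgd=optimizer, losses=losses)
    print(f"iteration {i}: loss {losses['ner']:.2f}")
```

A common pitfall the hints point at: passing `text` where `annotations` belongs in `Example.from_dict`, or forgetting to pass the same `losses` dict into every `nlp.update()` call, which makes the printed loss meaningless.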