Named Entity Recognition (NER) locates named entities such as people, places, or dates in text. To evaluate an NER model, we need to measure how accurately it finds these entities.
Precision tells us how many of the entities the model predicted are actually correct: precision = TP / (TP + FP), where TP is true positives and FP is false positives. High precision means few spurious predictions.
Recall tells us how many of the true entities in the text the model actually found: recall = TP / (TP + FN), where FN is false negatives. High recall means few missed entities.
The F1 score is the harmonic mean of precision and recall: F1 = 2 * P * R / (P + R). It balances the two and is the standard single summary number for overall NER quality.
We use all three because NER requires finding exact entities: a wrong prediction hurts precision and a missed entity hurts recall, so neither metric alone tells the whole story.
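As a minimal sketch of how these metrics are computed, the example below compares predicted and gold entities as sets of (text, label) pairs; the toy sentence and entities are invented for illustration, and real evaluations usually match on character or token spans rather than surface strings.

```python
def ner_scores(gold, pred):
    """Compute precision, recall, and F1 over sets of (text, label) entities."""
    tp = len(gold & pred)          # entities found and correct
    fp = len(pred - gold)          # predicted but wrong (hurts precision)
    fn = len(gold - pred)          # real but missed (hurts recall)
    precision = tp / (tp + fp) if pred else 0.0
    recall = tp / (tp + fn) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical gold annotations and model predictions.
gold = {("Barack Obama", "PER"), ("Hawaii", "LOC"), ("1961", "DATE")}
pred = {("Barack Obama", "PER"), ("Hawaii", "LOC"), ("August", "DATE")}

p, r, f1 = ner_scores(gold, pred)
print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
# → precision=0.667 recall=0.667 f1=0.667
```

Here the model made one wrong prediction ("August") and missed one real entity ("1961"), so both precision and recall drop, and F1 reflects the combined loss.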