# Model Pipeline - Tokenization in spaCy
This pipeline breaks text down into smaller units called tokens using spaCy. Tokens are units such as words or punctuation marks, and splitting text into them is the first step that lets a computer work with language.
Tokenization in spaCy is rule-based and involves no training, so there is no convergence chart to report.
| Epoch | Loss ↓ | Accuracy ↑ | Observation |
|---|---|---|---|
| 1 | N/A | N/A | Tokenization is a rule-based process, so no training loss or accuracy applies. |
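As a minimal sketch of this rule-based step, the snippet below uses spaCy's public API; `spacy.blank("en")` builds a tokenizer-only English pipeline, so no trained model needs to be downloaded (assumes spaCy is installed):

```python
import spacy

# A blank pipeline contains only the rule-based tokenizer.
nlp = spacy.blank("en")

doc = nlp("Hello, world!")

# Each token exposes its text; punctuation is split into separate tokens.
tokens = [token.text for token in doc]
print(tokens)  # ['Hello', ',', 'world', '!']
```

Because the tokenizer applies deterministic rules (prefix/suffix splitting plus language-specific exceptions), the same input always yields the same tokens, which is why loss and accuracy do not apply.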