For sequence models that must capture word order, the most informative metrics are perplexity and sequence accuracy. Perplexity measures how well the model predicts the next word given the words before it, so it reflects whether the model has learned order-dependent patterns. Sequence accuracy checks whether the entire predicted sequence matches the true sequence exactly. Together, these metrics tell us whether the model has truly learned the order of words, not just the individual words.
Why these metrics matter for word order in sequence models
True sequence:      I love machine learning
Predicted sequence: I love learning machine
Sequence accuracy:  0/1 = 0.0 (order wrong, so no exact match)
Word accuracy:      2/4 = 0.5 (right words, wrong positions)
This shows the model predicted the correct words but in the wrong order, so sequence accuracy is low even though word accuracy is higher.
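The two metrics from the example above can be sketched in a few lines. This is a minimal illustration, not a standard library API; the positional convention for word accuracy (comparing tokens at the same index) is one simple choice among several.

```python
# Minimal sketch of sequence accuracy vs positional word accuracy.

def sequence_accuracy(true_seqs, pred_seqs):
    """Fraction of predictions that match the true sequence exactly."""
    exact = sum(t == p for t, p in zip(true_seqs, pred_seqs))
    return exact / len(true_seqs)

def word_accuracy(true_seq, pred_seq):
    """Fraction of positions where the predicted token matches the true one."""
    matches = sum(t == p for t, p in zip(true_seq, pred_seq))
    return matches / len(true_seq)

true = "I love machine learning".split()
pred = "I love learning machine".split()

print(sequence_accuracy([true], [pred]))  # 0.0: order is wrong
print(word_accuracy(true, pred))          # 0.5: 2 of 4 positions match
```

Note how the same prediction scores 0.0 on one metric and 0.5 on the other, which is exactly the gap the worked example highlights.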
In sequence models, the tradeoff is between predicting only correct words (precision) and recovering all the needed words in the correct order (recall). A model might predict only common words it is sure of (high precision) but miss rare words or the right order (low recall); or it might guess many words to cover everything (high recall) but include wrong ones (low precision). A good sequence model balances the two, producing the correct words in the correct order.
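One way to see the "cautious model" case described above is to compute token-level precision and recall over sets, which deliberately ignores order to isolate the word-coverage tradeoff. This is an illustrative sketch with made-up sentences, not a standard metric implementation.

```python
# Token-level precision/recall over sets (order deliberately ignored).

def token_precision_recall(true_seq, pred_seq):
    true_set, pred_set = set(true_seq), set(pred_seq)
    tp = len(true_set & pred_set)                       # tokens both agree on
    precision = tp / len(pred_set) if pred_set else 0.0  # of what was predicted, how much is right
    recall = tp / len(true_set) if true_set else 0.0     # of what was needed, how much was found
    return precision, recall

# A cautious model that emits only common words it is sure of:
p, r = token_precision_recall(
    "the model learned rare embeddings".split(),
    "the model".split(),
)
print(p, r)  # 1.0, 0.4: every guess is correct, but most words are missed
```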
Good: low perplexity (close to 1) and high sequence accuracy (above 80%), showing the model predicts the correct words in the correct order.
Bad: high perplexity (much greater than 1) and low sequence accuracy (below 50%), meaning the model struggles with word order even when some individual words are right.
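The "close to 1" vs "much greater than 1" thresholds follow directly from the definition: perplexity is the exponential of the average negative log-likelihood the model assigns to each true next word. The probabilities below are toy numbers chosen to illustrate the two regimes.

```python
import math

def perplexity(next_word_probs):
    """exp of the mean negative log-likelihood of the true next words."""
    nll = [-math.log(p) for p in next_word_probs]
    return math.exp(sum(nll) / len(nll))

confident = [0.9, 0.8, 0.95, 0.85]  # model usually right about the next word
uncertain = [0.1, 0.05, 0.2, 0.08]  # model rarely assigns mass to the truth

print(perplexity(confident))  # ~1.15, close to 1: good
print(perplexity(uncertain))  # ~10.6, much greater than 1: bad
```

A perfect model that assigns probability 1 to every true next word would score exactly 1, which is why 1 is the floor.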
- Ignoring order: word-level accuracy alone cannot detect when the order is wrong.
- Data leakage: overlap between training and test sequences can make perplexity look artificially low.
- Overfitting: very low perplexity on training data but high perplexity on test data means the model memorizes sequences rather than generalizing order patterns.
This question is about fraud detection rather than sequence models, but the same logic applies: high accuracy with very low recall means the model misses most fraud cases. In fraud detection, recall is critical because each missed fraud is costly, so despite its high accuracy this model is not suitable for production.
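The accuracy-vs-recall gap is easiest to see with a worked confusion matrix. The counts below are made up for illustration: 10,000 transactions of which 100 are fraudulent, and a model that flags almost nothing.

```python
# Hypothetical confusion-matrix counts (assumption, not real data):
tp, fn = 5, 95      # fraud caught vs fraud missed
tn, fp = 9895, 5    # legit passed vs legit wrongly flagged

accuracy = (tp + tn) / (tp + tn + fp + fn)  # overall fraction correct
recall = tp / (tp + fn)                     # fraction of fraud actually caught

print(accuracy)  # 0.99: looks excellent on paper
print(recall)    # 0.05: misses 95% of fraud, unusable in production
```

Because fraud is rare, a model can be right about nearly every transaction simply by predicting "legit", which is why accuracy alone is misleading here.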