Experiment - Handling out-of-vocabulary words
Problem: You have a text classification model trained on a fixed vocabulary. When test data contains words the model has never seen (out-of-vocabulary, or OOV, words), the model struggles and accuracy drops.
Current Metrics: Training accuracy: 92%, Validation accuracy: 75%, Test accuracy with OOV words: 60%
Issue: The model cannot handle out-of-vocabulary words, so test accuracy falls sharply whenever unseen words appear.
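The source does not describe the model's tokenization, so as a hedged sketch: one common mitigation is to reserve a dedicated `<UNK>` index in the vocabulary so that any unseen test-time word maps to a shared OOV slot instead of failing the lookup. The `build_vocab` and `encode` helpers below are hypothetical names for illustration, not part of the original pipeline.

```python
# Minimal sketch (assumed setup, not the author's actual pipeline):
# map unseen words to a reserved <UNK> id at encoding time.

from collections import Counter

UNK = "<UNK>"

def build_vocab(corpus, min_freq=1):
    """Assign an integer id to each sufficiently frequent training word.

    Id 0 is reserved for the <UNK> token so OOV words always have a target.
    """
    counts = Counter(tok for sentence in corpus for tok in sentence.split())
    vocab = {UNK: 0}
    for word, freq in counts.items():
        if freq >= min_freq:
            vocab[word] = len(vocab)
    return vocab

def encode(sentence, vocab):
    """Convert a sentence to ids, sending any OOV word to the <UNK> id."""
    return [vocab.get(tok, vocab[UNK]) for tok in sentence.split()]

train = ["the model works", "the data is clean"]
vocab = build_vocab(train)

# "brand" and "new" never appeared in training, so both map to id 0.
ids = encode("the model is brand new", vocab)
```

A more robust alternative, if retraining is an option, is subword tokenization (e.g. BPE or character n-grams), which decomposes unseen words into known pieces rather than collapsing them all into a single `<UNK>` bucket.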