Experiment - Why sequence models understand word order
Problem: We want to understand how sequence models such as RNNs and LSTMs learn to use the order of words in sentences. Currently, a simple model trained on a small dataset to classify sentences as positive or negative sentiment treats each sentence as a bag of words, ignoring word order.
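The core limitation can be shown without any model at all: a bag-of-words representation keeps only word counts, so two sentences with opposite sentiment but the same words are indistinguishable. A minimal sketch (the sentences and tokenizer are illustrative, not from the actual dataset):

```python
from collections import Counter

def bag_of_words(sentence):
    # Order-insensitive representation: only word counts survive
    return Counter(sentence.lower().split())

# Opposite sentiment, identical word counts
a = bag_of_words("the movie was good not bad")
b = bag_of_words("the movie was bad not good")
print(a == b)  # True: a bag-of-words model cannot tell these apart
```

Any classifier built on top of this representation inherits the same blindness to word order.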
Current Metrics: Training accuracy: 95%, Validation accuracy: 70%
Issue: The model overfits and does not exploit word order, leading to poor validation accuracy. It behaves like a bag-of-words model, discarding the sequence information.
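By contrast, a recurrent model folds words into its hidden state one at a time, so reordering the input changes the final state. A toy forward pass of a simple (Elman) RNN illustrates this; the vocabulary, dimensions, and random weights are illustrative assumptions, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "movie": 1, "was": 2, "good": 3, "not": 4, "bad": 5}
E = rng.normal(size=(len(vocab), 8))   # embedding matrix (random, illustrative)
Wx = rng.normal(size=(8, 8))           # input-to-hidden weights
Wh = rng.normal(size=(8, 8))           # hidden-to-hidden weights

def rnn_state(sentence):
    # Each step mixes the current word with the running hidden state,
    # so the final state depends on the order words arrive in
    h = np.zeros(8)
    for w in sentence.lower().split():
        h = np.tanh(E[vocab[w]] @ Wx + h @ Wh)
    return h

h1 = rnn_state("the movie was good not bad")
h2 = rnn_state("the movie was bad not good")
print(np.allclose(h1, h2))  # False: reordering changes the final state
```

This is the capacity the current model lacks; the next step would be to train a recurrent (or otherwise order-aware) classifier on the same data and compare validation accuracy.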