Experiment - Text generation with an RNN
Problem: Generate text sequences using a Recurrent Neural Network (RNN) trained on a small dataset of Shakespeare-like text.
Current Metrics: Training loss: 0.15, Validation loss: 0.45, Training accuracy: 92%, Validation accuracy: 70%
Issue: The model is overfitting. The large gap between training accuracy (92%) and validation accuracy (70%), together with the training loss (0.15) sitting well below the validation loss (0.45), indicates the network is memorizing the small training set rather than generalizing.
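A standard remedy for this kind of overfitting is regularization, most commonly dropout on the hidden state. Below is a minimal NumPy sketch of a single vanilla RNN step with inverted dropout applied to the hidden activations; the vocabulary size (65), hidden size (128), and dropout rate (0.3) are illustrative assumptions, not values from the experiment.

```python
import numpy as np

np.random.seed(0)

# Hypothetical sizes for a small character-level RNN (illustration only).
vocab_size, hidden_size = 65, 128

# Vanilla RNN cell parameters: h_t = tanh(Wxh x_t + Whh h_{t-1} + bh)
Wxh = np.random.randn(hidden_size, vocab_size) * 0.01
Whh = np.random.randn(hidden_size, hidden_size) * 0.01
bh = np.zeros(hidden_size)

def rnn_step(x_onehot, h_prev, dropout_p=0.3, train=True):
    """One RNN step with inverted dropout on the hidden state.

    During training, dropout randomly zeroes hidden units and rescales
    the survivors by 1 / (1 - p), which discourages co-adaptation and
    reduces overfitting. At evaluation time the hidden state is used
    unchanged, so no rescaling is needed.
    """
    h = np.tanh(Wxh @ x_onehot + Whh @ h_prev + bh)
    if train and dropout_p > 0:
        mask = (np.random.rand(hidden_size) >= dropout_p) / (1.0 - dropout_p)
        h = h * mask
    return h

# One step on a dummy one-hot input character.
x = np.zeros(vocab_size)
x[10] = 1.0
h0 = np.zeros(hidden_size)

h_train = rnn_step(x, h0, dropout_p=0.3, train=True)   # some units zeroed
h_eval = rnn_step(x, h0, dropout_p=0.3, train=False)   # full hidden state
```

Other levers worth trying alongside dropout: weight decay, early stopping on the validation loss, and a smaller hidden size, since the dataset is small.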