Experiment - N-gram language models
Problem: Build an N-gram language model to predict the next word in a sentence using a small text dataset.
Current Metrics: Perplexity on test set: 150.0
Issue: Test perplexity is high, indicating poor next-word prediction quality. The model appears to overfit the training data: it memorizes training n-grams but assigns very low probability to unseen word sequences, which is a common failure mode for N-gram models trained on small corpora.
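A minimal sketch of the setup, assuming a bigram model with add-one (Laplace) smoothing to soften the unseen-sequence problem; the function names and toy corpus below are illustrative, not from the original experiment:

```python
import math
from collections import Counter

def train_bigram(tokens):
    # Count unigram and bigram occurrences in the training tokens.
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def bigram_prob(w_prev, w, unigrams, bigrams, vocab_size, alpha=1.0):
    # Laplace-smoothed P(w | w_prev): add alpha to every count so
    # unseen bigrams still get nonzero probability.
    return (bigrams[(w_prev, w)] + alpha) / (unigrams[w_prev] + alpha * vocab_size)

def perplexity(test_tokens, unigrams, bigrams, vocab_size, alpha=1.0):
    # Perplexity = exp of the negative average log-probability per predicted token.
    log_prob = 0.0
    n = 0
    for w_prev, w in zip(test_tokens, test_tokens[1:]):
        log_prob += math.log(bigram_prob(w_prev, w, unigrams, bigrams, vocab_size, alpha))
        n += 1
    return math.exp(-log_prob / n)

# Toy data (hypothetical, for illustration only).
train_tokens = "the cat sat on the mat the cat ate the fish".split()
test_tokens = "the cat sat on the mat".split()
vocab_size = len(set(train_tokens) | set(test_tokens))

unigrams, bigrams = train_bigram(train_tokens)
print(perplexity(test_tokens, unigrams, bigrams, vocab_size))
```

Smoothing (or backoff to lower-order n-grams) typically reduces test perplexity on small datasets, since the dominant cost is zero or near-zero counts for bigrams that never occurred in training.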