LDA (Latent Dirichlet Allocation) is an unsupervised topic modeling method: it represents each document as a mixture of topics, and each topic as a probability distribution over words. Because it is unsupervised, there are no ground-truth labels against which to measure accuracy. Instead, we use perplexity and topic coherence to judge how well the model uncovers meaningful topics.
Perplexity measures how well the trained model predicts held-out text; lower perplexity means better predictive performance. Topic coherence measures whether the top words of a topic tend to appear together in real documents; higher coherence means more interpretable topics.
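Perplexity is derived from the log-likelihood the model assigns to held-out tokens: it is the exponential of the negative average per-token log-likelihood. A minimal sketch, using made-up log-likelihood values for illustration (a real model would produce these when scoring unseen documents):

```python
import math

# Hypothetical per-token log-likelihoods assigned by a trained LDA model
# to a held-out document (values are invented for illustration).
token_log_likelihoods = [-4.2, -3.8, -5.1, -4.6, -3.9]

def perplexity(log_likelihoods):
    """Perplexity = exp(-(sum of log-likelihoods) / number of tokens).

    Lower values mean the model assigns higher probability to the
    unseen text, i.e. it predicts new documents better.
    """
    n = len(log_likelihoods)
    return math.exp(-sum(log_likelihoods) / n)

print(perplexity(token_log_likelihoods))
```

A model that spreads probability thinly across many unlikely words yields large negative log-likelihoods and hence high perplexity; comparing perplexity across candidate topic counts is a common way to pick the number of topics.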
These metrics help us decide whether the model finds useful topics or merely arbitrary word groupings.
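One common coherence measure, UMass coherence, can be sketched directly from document co-occurrence counts. The sketch below assumes a tiny hypothetical corpus; a topic whose top words appear together in documents scores higher (closer to zero) than a topic mixing unrelated words:

```python
import math

# Tiny hypothetical corpus: each document is the set of its words.
docs = [
    {"cat", "dog", "pet"},
    {"dog", "pet", "food"},
    {"cat", "pet", "toy"},
    {"stock", "market", "price"},
]

def umass_coherence(top_words, docs):
    """UMass coherence for one topic's ranked top words.

    Sums log((D(wi, wj) + 1) / D(wj)) over ordered word pairs, where
    D counts documents containing the given word(s). Scores closer to
    zero mean the words co-occur more often, i.e. a clearer topic.
    """
    def doc_count(*words):
        return sum(1 for d in docs if all(w in d for w in words))

    score = 0.0
    for i in range(1, len(top_words)):
        for j in range(i):
            wi, wj = top_words[i], top_words[j]
            score += math.log((doc_count(wi, wj) + 1) / doc_count(wj))
    return score

# Words that co-occur ("pet", "dog", "cat") score higher than a
# topic mixing in an unrelated word ("stock").
print(umass_coherence(["pet", "dog", "cat"], docs))
print(umass_coherence(["pet", "stock", "cat"], docs))
```

Libraries such as gensim provide this and other coherence variants (e.g. `c_v`) out of the box; the hand-rolled version here only shows the idea behind the score.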