A classification report summarizes key metrics: precision, recall, and F1-score for each class. These metrics help us understand how well the model predicts each category.
Precision tells us how many predicted positives are actually correct. Recall tells us how many actual positives the model found. The F1-score, the harmonic mean of precision and recall, balances the two in a single number.
Using these metrics together helps us see if the model is making too many false alarms (low precision) or missing important cases (low recall).
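To make these definitions concrete, here is a minimal sketch that computes precision, recall, and F1 by hand for a binary classifier; the label and prediction lists are made up purely for illustration.

```python
# Made-up ground-truth labels and model predictions for a binary task.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Count true positives, false positives, and false negatives.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

precision = tp / (tp + fp)  # of the predicted positives, how many were right
recall = tp / (tp + fn)     # of the actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
# → precision=0.75 recall=0.75 f1=0.75
```

In practice you would rarely compute these by hand: `sklearn.metrics.classification_report` produces exactly this breakdown per class, plus support counts and averages, from the same `y_true`/`y_pred` pair.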