Pooling layers (MaxPool, AvgPool) in TensorFlow - Model Metrics & Evaluation

Pooling layers shrink feature maps while keeping the most important features. To judge whether pooling helps, we evaluate the whole model's accuracy and loss: good pooling preserves key information, so accuracy stays high and loss stays low.
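To see what a pooling layer actually does, here is a minimal plain-Python sketch of 2x2 max pooling with stride 2 (in TensorFlow this corresponds to `tf.keras.layers.MaxPooling2D(pool_size=2)`; the feature-map values are made up):

```python
# A 4x4 feature map; 2x2 max pooling halves each spatial dimension,
# keeping only the strongest activation in each window.
feature_map = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 6],
    [2, 2, 7, 3],
]

def max_pool_2x2(fm):
    """Halve each spatial dimension, keeping the max of each 2x2 window."""
    return [
        [max(fm[i][j], fm[i][j + 1], fm[i + 1][j], fm[i + 1][j + 1])
         for j in range(0, len(fm[0]), 2)]
        for i in range(0, len(fm), 2)
    ]

pooled = max_pool_2x2(feature_map)
print(pooled)  # [[4, 2], [2, 7]]
```

The 4x4 map becomes 2x2: a 4x reduction in data, while each strong activation (4, 7, the 6-vs-5 winner) survives.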
Confusion matrix example (for a classification model that uses pooling):
| Actual \ Predicted | Positive | Negative |
|--------------------|----------|----------|
| Positive | 85 | 15 |
| Negative | 10 | 90 |
TP = 85, FP = 10, TN = 90, FN = 15
Total samples = 85 + 10 + 90 + 15 = 200
Precision = 85 / (85 + 10) ≈ 0.895
Recall = 85 / (85 + 15) = 0.85
F1 Score = 2 * (0.895 * 0.85) / (0.895 + 0.85) ≈ 0.872
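The arithmetic above can be checked in a few lines of Python, using the counts from the confusion matrix in this section:

```python
# Counts from the confusion matrix above.
TP, FP, TN, FN = 85, 10, 90, 15

accuracy = (TP + TN) / (TP + FP + TN + FN)  # 175 / 200 = 0.875
precision = TP / (TP + FP)                  # 85 / 95  ≈ 0.895
recall = TP / (TP + FN)                     # 85 / 100 = 0.85
f1 = 2 * precision * recall / (precision + recall)

print(round(accuracy, 3), round(precision, 3), round(recall, 3), round(f1, 3))
# 0.875 0.895 0.85 0.872
```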
Pooling layers reduce data size but can lose detail. If pooling is too aggressive, recall may drop because small features vanish. In face recognition, for example, missing a face (low recall) is usually worse than a few false alarms (lower precision), so choose pooling that keeps recall high.
MaxPool keeps the strongest activations, which helps recall. AvgPool smooths features, which may lower recall but can improve precision in some cases.
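The difference shows up on a single 2x2 window containing one strong activation (a toy sketch; the numbers are made up):

```python
# One 2x2 window with a single strong "feature spike" and weak background.
window = [9.0, 0.5, 0.3, 0.2]

max_pooled = max(window)                # 9.0 -> the spike survives
avg_pooled = sum(window) / len(window)  # 2.5 -> the spike is diluted

print(max_pooled, avg_pooled)  # 9.0 2.5
```

Max pooling passes the spike through unchanged, which is why it tends to preserve small, strong features (helping recall); average pooling dilutes the spike into the background.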
Good: Accuracy above 85%, balanced precision and recall around 85% or more, showing pooling kept important info.
Bad: Accuracy below 70%, recall much lower than precision (e.g., recall 50%, precision 90%), meaning pooling lost key features and model misses many positives.
- Accuracy paradox: High accuracy can hide poor recall if data is unbalanced.
- Over-pooling: Too much pooling discards detail, lowering recall and hurting the model.
- Data leakage: Metrics look good if test data leaks into training, not related to pooling but important to check.
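The accuracy paradox from the first point is easy to reproduce with made-up imbalanced counts (990 negatives, 10 positives, and a model that almost always predicts "negative"):

```python
# Imbalanced test set: 990 negatives, 10 positives.
# A model that catches only 1 positive but gets every negative right:
TP, FN = 1, 9
TN, FP = 990, 0

accuracy = (TP + TN) / (TP + TN + FP + FN)  # 991 / 1000 = 0.991
recall = TP / (TP + FN)                     # 1 / 10 = 0.1

print(accuracy, recall)  # 0.991 0.1
```

Accuracy looks excellent at 99.1% even though the model misses 9 out of 10 positives, which is why recall must be checked separately on unbalanced data.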
Your model with pooling layers has 98% accuracy but only 12% recall on fraud cases. Is it good for production? Why or why not?
Answer: No. The model misses almost all fraud cases (12% recall), which is dangerous in production. Because fraud is rare, 98% accuracy mostly reflects correct predictions on the non-fraud majority. Pooling may have removed the fine-grained signals that indicate fraud, so recall must be improved even if accuracy is already high.
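To make the cost concrete, here is the arithmetic under assumed counts (the question gives only the rates, so 100 fraud cases is a hypothetical figure for illustration):

```python
# Assumed: 100 fraud cases in the test set (rates from the question).
fraud_cases = 100
recall = 0.12

caught = round(fraud_cases * recall)  # 12 fraud cases flagged
missed = fraud_cases - caught         # 88 fraud cases slip through

print(caught, missed)  # 12 88
```

At 12% recall, 88 of every 100 fraud cases go undetected, regardless of how high the overall accuracy is.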