Input shape specification in TensorFlow - Model Metrics & Evaluation

Input shape specification is a design step, not a metric in itself, but a correct input shape is what lets the model learn at all. If the shape is wrong, the model either fails with an error or trains poorly. The key metrics to watch after specifying the input shape are therefore training loss and validation loss: if neither improves, the input shape may be incorrect or mismatched with the data.
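A minimal sketch of the idea above, assuming TensorFlow 2.x and hypothetical tabular data with 4 features: the input shape is declared once, and the loss history from fit() is what you watch afterwards.

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x is installed

# Hypothetical data: 100 samples, 4 features each (not from the text).
X = np.random.rand(100, 4).astype("float32")
y = np.random.randint(0, 2, size=(100,)).astype("float32")

# The declared input shape excludes the batch dimension: (4,), not (100, 4).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# If the input shape did not match X, fit() would raise a shape-mismatch
# error here instead of returning a loss history to monitor.
history = model.fit(X, y, epochs=2, verbose=0)
print(history.history["loss"])
```

If the printed losses do not decrease over real training runs, that is the signal to re-check the input shape and the data.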
Input shape specification does not produce a confusion matrix directly, but a wrong input shape can prevent the model from training well, which shows up later as poor confusion-matrix results. For example, if an input shape mismatch corrupts the features the model sees, predictions degrade and the confusion matrix fills with false positives and false negatives.
Confusion Matrix Example (after training with correct input shape):

                   Actual
                 Pos   Neg
  Predicted Pos   50    10
  Predicted Neg    5    35

TP=50, FP=10, FN=5, TN=35
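The counts above can be read off and turned into precision and recall with plain NumPy; this sketch uses the example's numbers, with rows as predicted and columns as actual labels.

```python
import numpy as np

# The example confusion matrix: rows = predicted, columns = actual.
cm = np.array([[50, 10],   # predicted Pos: 50 true positives, 10 false positives
               [5,  35]])  # predicted Neg: 5 false negatives, 35 true negatives

tp, fp = cm[0, 0], cm[0, 1]
fn, tn = cm[1, 0], cm[1, 1]

precision = tp / (tp + fp)  # of everything flagged Pos, how much was right
recall = tp / (tp + fn)     # of everything actually Pos, how much was found

print(f"TP={tp}, FP={fp}, FN={fn}, TN={tn}")
print(f"precision={precision:.3f}, recall={recall:.3f}")
```

With these numbers precision is 50/60 ≈ 0.833 and recall is 50/55 ≈ 0.909.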
Choosing the right input shape is like choosing the right clothing size: too small or too big and nothing fits. If the input shape is too small (features are dropped), the model misses important information, which tends to hurt recall. If it is too big (extra noise is included), the model can fit spurious patterns, which tends to hurt precision. The tradeoff is to pick the shape that matches the data exactly, so the model learns from the right signal.
Good: The model trains without errors, training and validation loss decrease steadily, and accuracy improves. The input shape matches the data dimensions exactly.
Bad: The model throws shape-mismatch errors, training loss stays high or becomes NaN, validation loss does not improve, or predictions look random. The input shape does not match the data.
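The good and bad cases can be seen directly, assuming TensorFlow 2.x: feeding a model data with the declared feature count works, while a mismatched feature count raises an error immediately. The arrays here are hypothetical placeholders.

```python
import numpy as np
import tensorflow as tf

# A model whose input shape is fixed at 4 features (batch size excluded).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

good = np.zeros((2, 4), dtype="float32")  # matches (None, 4)
bad = np.zeros((2, 5), dtype="float32")   # 5 features: shape mismatch

good_out = model(good)  # works; output has shape (2, 1)

try:
    model(bad)
    mismatch_caught = False
except Exception:  # Keras rejects inputs incompatible with the declared shape
    mismatch_caught = True

print(tuple(good_out.shape), mismatch_caught)
```

Failing fast like this is the good outcome: the bad outcome is data that happens to fit the shape but carries the wrong features, which only shows up as a loss that never improves.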
- Confusing batch size with input shape. Input shape excludes batch size.
- For images, forgetting to include channels (e.g., RGB = 3 channels).
- Using inconsistent input shapes between training and inference data.
- Not reshaping data properly before feeding to model.
- Confusing sequence length with feature size in time series (the shape is (timesteps, features)).
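Two of the mistakes above (excluding batch size, including channels) can be sketched together, assuming TensorFlow 2.x and hypothetical grayscale image data that was loaded without a channel axis.

```python
import numpy as np
import tensorflow as tf

# Hypothetical grayscale images loaded as (num_samples, 28, 28):
# the channel axis is missing.
raw = np.zeros((10, 28, 28), dtype="float32")

# The input shape includes channels (1 for grayscale) but excludes
# the batch dimension, so it is (28, 28, 1), not (10, 28, 28, 1).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

# Reshape before feeding the model: add the channel axis.
images = raw[..., np.newaxis]  # (10, 28, 28, 1)
out = model(images)
print(images.shape, tuple(out.shape))
```

The same reshaping must be applied consistently to training and inference data, which also covers the inconsistency mistake in the list.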
Your model has 98% accuracy but 12% recall on fraud detection. Is it good?
Answer: No. Accuracy is misleading on imbalanced data: fraud is rare, so a model that predicts "not fraud" for almost everything can still score 98% accuracy. A recall of 12% means the model misses 88% of fraud cases, which is exactly what matters here and is dangerous. The input shape may well be correct; check the class balance, the features being fed in, and the model design (for example, class weights or a different decision threshold) to improve recall.
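The accuracy trap in this answer can be demonstrated with plain NumPy and synthetic labels (all numbers here are illustrative, not from the text): a model that never predicts fraud still scores high accuracy on a 2%-fraud dataset while catching nothing.

```python
import numpy as np

# Synthetic labels: 1,000 transactions, roughly 2% fraud.
rng = np.random.default_rng(0)
y_true = (rng.random(1000) < 0.02).astype(int)

# A lazy "model" that predicts "not fraud" every time.
y_pred = np.zeros_like(y_true)

accuracy = (y_true == y_pred).mean()
tp = ((y_true == 1) & (y_pred == 1)).sum()
fn = ((y_true == 1) & (y_pred == 0)).sum()
recall = tp / (tp + fn) if (tp + fn) else 0.0

# High accuracy, zero recall: the model catches no fraud at all.
print(f"accuracy={accuracy:.2%}, recall={recall:.0%}")
```

This is why recall, not accuracy, is the metric to optimize for rare-event detection like fraud.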