# Forward Propagation in ML Python - Model Metrics & Evaluation

Forward propagation is the process where input data moves through a model to produce predictions. The two key metrics to watch here are loss and accuracy. Loss measures how far the model's predictions are from the true answers; accuracy is the fraction of predictions that are correct. Together, these metrics tell us whether the model is learning during training.
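As a minimal sketch of this idea, here is a forward pass through a tiny logistic-regression model in plain Python, computing both loss and accuracy. The weights, bias, and data are invented for illustration:

```python
import math

# Hypothetical weights for a tiny logistic-regression model (illustrative values).
weights = [0.8, -0.4]
bias = 0.1

def forward(x):
    """One forward pass: weighted sum + sigmoid -> predicted probability."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: feature pairs and true binary labels (made up for the example).
X = [[1.0, 2.0], [3.0, 0.5], [0.2, 4.0], [2.5, 1.0]]
y = [0, 1, 0, 1]

probs = [forward(x) for x in X]

# Binary cross-entropy loss: how far the predicted probabilities are from the labels.
loss = -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
            for t, p in zip(y, probs)) / len(y)

# Accuracy: fraction of thresholded predictions that match the true labels.
preds = [1 if p >= 0.5 else 0 for p in probs]
accuracy = sum(p == t for p, t in zip(preds, y)) / len(y)

print(f"loss={loss:.3f}  accuracy={accuracy:.0%}")
```

A lower loss and higher accuracy after each training step would indicate the model is learning.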
For classification tasks, forward propagation outputs predictions that can be compared to true labels using a confusion matrix:
|                 | Predicted Positive | Predicted Negative |
|-----------------|--------------------|--------------------|
| Actual Positive | True Positive (TP) | False Negative (FN) |
| Actual Negative | False Positive (FP) | True Negative (TN)  |
This matrix helps calculate precision, recall, and accuracy after forward propagation.
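The calculations above can be sketched directly from the four counts. The numbers below are invented for illustration:

```python
# Hypothetical confusion-matrix counts (made-up numbers for illustration).
tp, fp, fn, tn = 40, 10, 5, 45

accuracy  = (tp + tn) / (tp + fp + fn + tn)   # correct predictions / all predictions
precision = tp / (tp + fp)                    # of predicted positives, how many were right
recall    = tp / (tp + fn)                    # of actual positives, how many were caught

print(f"accuracy={accuracy}  precision={precision}  recall={recall:.3f}")
```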
Forward propagation outputs predictions that affect precision and recall. For example, in a spam filter:
- High precision means most emails marked as spam really are spam (few good emails wrongly blocked).
- High recall means most spam emails are caught (few spam emails missed).
Depending on the task, the model's predictions should strike an appropriate balance between these two metrics.
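The spam-filter scenario above can be sketched by counting TP, FP, and FN directly from predicted and true labels. The labels below are invented for the example:

```python
# Toy spam-filter outputs: 1 = spam, 0 = not spam (labels invented for the sketch).
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp = sum(p == 1 and t == 1 for p, t in zip(y_pred, y_true))  # spam correctly flagged
fp = sum(p == 1 and t == 0 for p, t in zip(y_pred, y_true))  # good email wrongly blocked
fn = sum(p == 0 and t == 1 for p, t in zip(y_pred, y_true))  # spam that slipped through

precision = tp / (tp + fp)  # how many flagged emails were really spam
recall = tp / (tp + fn)     # how many spam emails were caught

print(f"precision={precision}  recall={recall}")
```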
Good: low loss (close to zero), high accuracy (close to 100%), and balanced precision and recall.
Bad: high loss, low accuracy, or very low precision or recall, all indicating poor prediction quality.
- Accuracy paradox: High accuracy can be misleading if classes are imbalanced.
- Data leakage: If test data leaks into training, metrics look unrealistically good.
- Overfitting: Very low training loss but high test loss means the model has memorized the training data instead of generalizing.
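The accuracy paradox from the list above can be demonstrated with a short sketch: on imbalanced data, a model that always predicts the majority class scores high accuracy while catching nothing. The class counts are invented for illustration:

```python
# Imbalanced toy labels: 95 negatives, 5 positives (invented counts).
y_true = [0] * 95 + [1] * 5

# A useless model that always predicts the majority class (0).
y_pred = [0] * 100

accuracy = sum(p == t for p, t in zip(y_pred, y_true)) / len(y_true)

tp = sum(p == 1 and t == 1 for p, t in zip(y_pred, y_true))
fn = sum(p == 0 and t == 1 for p, t in zip(y_pred, y_true))
recall = tp / (tp + fn)

# High accuracy, yet the model never catches a single positive case.
print(f"accuracy={accuracy}  recall={recall}")
```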
Your model has 98% accuracy but 12% recall on fraud detection. Is it good for production? Why or why not?
Answer: No, it is not good. The low recall means the model misses many fraud cases, which is risky. In fraud detection, catching as many frauds as possible (high recall) is more important than just high accuracy.
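To make the answer concrete, here is a sketch with hypothetical counts consistent with the quoted metrics (10,000 transactions and a 2% fraud rate are assumptions, not given in the question):

```python
# Hypothetical counts matching 98% accuracy and 12% recall:
# 10,000 transactions, 200 of them fraudulent (all numbers invented).
total = 10_000
tp, fn = 24, 176      # only 24 of 200 fraud cases caught -> recall = 12%
tn, fp = 9_776, 24    # legitimate transactions mostly classified correctly

accuracy = (tp + tn) / total
recall = tp / (tp + fn)

# Despite 98% accuracy, 176 fraudulent transactions slip through undetected.
print(f"accuracy={accuracy}  recall={recall}  missed_fraud={fn}")
```

The arithmetic shows why accuracy alone is misleading here: the rare fraud class barely affects accuracy, while the missed cases carry most of the real-world cost.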