Why Engineered Features Improve Models in ML (Python): Why Metrics Matter
When we add engineered features, we want evidence that the model actually predicts better. The usual metrics to check are accuracy for overall correctness, precision and recall to see how well the model finds true positives without false alarms, and the F1 score, which balances precision and recall. Comparing these metrics before and after adding features tells us whether they help the model make clearer decisions.
Without engineered features:
            Predicted +   Predicted -
Actual +    TP = 70       FN = 40
Actual -    FP = 30       TN = 160

With engineered features:
            Predicted +   Predicted -
Actual +    TP = 85       FN = 25
Actual -    FP = 15       TN = 175

Total samples = 300 in both cases.
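The two confusion matrices above can be turned into the metrics discussed here with a few lines of Python (a minimal sketch using only the counts from the text, no libraries assumed):

```python
def metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Without engineered features
acc0, prec0, rec0, f1_0 = metrics(tp=70, fp=30, fn=40, tn=160)
# With engineered features
acc1, prec1, rec1, f1_1 = metrics(tp=85, fp=15, fn=25, tn=175)

print(f"without: acc={acc0:.3f} precision={prec0:.3f} recall={rec0:.3f} f1={f1_0:.3f}")
print(f"with:    acc={acc1:.3f} precision={prec1:.3f} recall={rec1:.3f} f1={f1_1:.3f}")
```

Running this shows every metric improving with the engineered features: precision rises from 0.70 to 0.85 and recall from about 0.64 to about 0.77.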
Explanation:
- TP (True Positives): Correctly found positive cases
- FP (False Positives): Mistakenly marked negatives as positive
- FN (False Negatives): Missed positive cases
- TN (True Negatives): Correctly found negative cases
Adding engineered features often helps the model find more true positives (higher recall) and reduce false alarms (higher precision).
Example: In email spam detection, engineered features like word counts or sender reputation help the model catch more spam (higher recall) without marking good emails as spam (higher precision).
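A minimal sketch of what such engineered features could look like in code. The spam-word list, the sender-reputation lookup, and the feature names are hypothetical illustrations, not a real dataset or API:

```python
# Hypothetical building blocks for spam features (illustrative values only).
SPAM_WORDS = {"free", "winner", "prize", "urgent"}
KNOWN_SENDERS = {"alice@example.com": 0.9, "promo@deals.example": 0.2}

def engineer_features(email_text: str, sender: str) -> dict:
    """Turn a raw email into numeric features a classifier can use."""
    words = email_text.lower().split()
    return {
        "word_count": len(words),
        "spam_word_count": sum(w in SPAM_WORDS for w in words),
        # Unknown senders get a neutral reputation score.
        "sender_reputation": KNOWN_SENDERS.get(sender, 0.5),
    }

features = engineer_features(
    "URGENT you are a WINNER claim your free prize", "promo@deals.example"
)
print(features)
```

Each feature gives the model a clearer clue than the raw text alone, which is exactly how engineered features push precision and recall up.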
Sometimes improving one metric lowers the other (the precision-recall trade-off). Good features can improve both at once, making the model more reliable.
Good: Precision and recall both above 80%, showing the model finds most positives and makes few mistakes.
Bad: High accuracy but low recall (e.g., 95% accuracy but 30% recall) means the model misses many positives, which is risky.
Engineered features should help move metrics from bad to good by giving the model clearer clues.
- Overfitting: Features tailored too closely to the training data can make metrics look great while the model fails on new data.
- Data leakage: Features that accidentally include future information artificially inflate metrics.
- Accuracy paradox: High accuracy can hide poor recall when the classes are imbalanced.
- Ignoring metric balance: Improving precision or recall alone may not make the model more useful overall.
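The accuracy paradox is easy to reproduce. A sketch (with made-up labels, assuming a 5% positive rate) where a classifier that always predicts the majority class looks accurate but catches nothing:

```python
# On data where only 5% of samples are positive, a classifier that always
# predicts "negative" scores 95% accuracy but 0% recall.
labels = [1] * 5 + [0] * 95      # 5 positives out of 100
predictions = [0] * 100          # always predict the majority class

tp = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
fn = sum(p == 0 and y == 1 for p, y in zip(predictions, labels))
correct = sum(p == y for p, y in zip(predictions, labels))

accuracy = correct / len(labels)            # 0.95
recall = tp / (tp + fn) if tp + fn else 0.0  # 0.0 -- every positive is missed
print(f"accuracy={accuracy:.2f} recall={recall:.2f}")
```

This is why accuracy alone should never be trusted on imbalanced data.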
Your model has 98% accuracy but only 12% recall on fraud cases. Is it good for production? Why or why not?
Answer: No, it is not good for production. The model misses 88% of fraud cases (12% recall), which is dangerous. The high accuracy is misleading: because fraud is rare, a model can be almost always "correct" while catching almost no fraud. You need better features or methods to improve recall.
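One set of hypothetical counts consistent with the quiz numbers makes the point concrete. The 10,000 transactions and 100 fraud cases below are assumptions for illustration, not data from the text:

```python
# Hypothetical counts: 10,000 transactions, 100 of them fraud.
# The model catches only 12 of the 100 frauds (12% recall), yet the
# overwhelming number of true negatives keeps accuracy at 98%.
tp, fn = 12, 88        # 12% recall on the 100 fraud cases
tn, fp = 9788, 112     # chosen so (tp + tn) / 10000 = 0.98

accuracy = (tp + tn) / (tp + fp + fn + tn)
recall = tp / (tp + fn)
print(f"accuracy={accuracy:.2%} recall={recall:.2%}")
```

The 9,900 legitimate transactions dominate the accuracy figure, so the 88 missed frauds barely register in it.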