
Polynomial features in ML Python - Model Metrics & Evaluation

Which metric matters for Polynomial Features and WHY

When using polynomial features, the main goal is to improve the model's ability to capture complex patterns. The key metrics to watch are training loss and validation loss. These tell us how well the model fits the training data and how well it generalizes to new data.

Polynomial features can cause the model to fit training data very well (low training loss), but if validation loss is high, the model is overfitting. Tracking both losses as the degree increases helps us choose the degree where validation loss stops improving.
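A minimal sketch of this degree sweep, assuming a synthetic 1-D regression dataset and scikit-learn (the data, degrees, and seed are illustrative choices, not from any particular exercise):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Synthetic data: a cubic trend plus noise (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = X[:, 0] ** 3 - 2 * X[:, 0] + rng.normal(0, 3, size=200)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0
)

results = {}
for degree in [1, 3, 15]:
    poly = PolynomialFeatures(degree=degree)
    model = LinearRegression().fit(poly.fit_transform(X_train), y_train)
    # Compare how well the fit holds up on data the model has not seen
    train_loss = mean_squared_error(y_train, model.predict(poly.transform(X_train)))
    val_loss = mean_squared_error(y_val, model.predict(poly.transform(X_val)))
    results[degree] = (train_loss, val_loss)
    print(f"degree={degree:2d}  train MSE={train_loss:.2f}  val MSE={val_loss:.2f}")
```

Degree 1 underfits (both losses high), degree 3 matches the underlying trend, and degree 15 drives training loss down while validation loss drifts upward.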

Confusion Matrix or Equivalent Visualization

Polynomial features are often used in regression or classification. For classification, we can use a confusion matrix to see prediction results.

    Confusion Matrix Example:

                 Predicted
                  0     1
        True 0 |  50 |  10 |
        True 1 |   5 |  35 |

    TP = 35, FP = 10, TN = 50, FN = 5

From this, we calculate precision and recall to understand model quality.
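The arithmetic is direct, using the counts above:

```python
# Counts from the confusion matrix above
TP, FP, TN, FN = 35, 10, 50, 5

precision = TP / (TP + FP)  # of everything predicted positive, how much was right
recall = TP / (TP + FN)     # of all actual positives, how many were caught
print(f"precision={precision:.3f}  recall={recall:.3f}")
```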

Precision vs Recall Tradeoff with Polynomial Features

Using polynomial features can increase model complexity. This can improve recall (finding more true positives) but might lower precision (more false positives).

For example, in a medical test, a high-degree polynomial might catch more sick patients (high recall) but also wrongly label healthy people as sick (low precision). We must balance these based on what matters more.
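One way to see the tradeoff move with complexity is to compare a degree-1 and a degree-2 classifier on the same data. This is a sketch under assumed conditions: a synthetic imbalanced dataset from `make_classification` and logistic regression, chosen purely for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# Imbalanced synthetic classification data (80% negative, 20% positive)
X, y = make_classification(n_samples=500, n_features=5,
                           weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

for degree in [1, 2]:
    poly = PolynomialFeatures(degree=degree)
    clf = LogisticRegression(max_iter=5000).fit(poly.fit_transform(X_train), y_train)
    pred = clf.predict(poly.transform(X_test))
    p = precision_score(y_test, pred)
    r = recall_score(y_test, pred)
    print(f"degree={degree}  precision={p:.3f}  recall={r:.3f}")
```

The exact numbers depend on the data; the point is to report both metrics side by side rather than trusting either alone.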

Good vs Bad Metric Values for Polynomial Features

Good: Training and validation losses are close and low. Precision and recall are balanced and high (e.g., > 0.8). This means the model generalizes well without overfitting.

Bad: Training loss is very low but validation loss is high. Precision or recall is very low (e.g., < 0.5). This signals overfitting (degree too high) or underfitting (degree too low).

Common Pitfalls with Polynomial Features and Metrics
  • Overfitting: High-degree polynomials fit training data too well but fail on new data.
  • Accuracy Paradox: Accuracy can be misleading if classes are imbalanced.
  • Data Leakage: Using test data to choose polynomial degree causes overly optimistic metrics.
  • Ignoring Validation: Only looking at training loss hides overfitting problems.
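The data-leakage pitfall is avoided by choosing the degree with cross-validation on training data only, keeping the test set out of the decision. A minimal sketch with a scikit-learn `Pipeline` (synthetic quadratic data and the candidate degrees are illustrative assumptions):

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Synthetic quadratic data (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = X[:, 0] ** 2 + rng.normal(0, 1, size=200)

# Hold out the test set FIRST; it never influences degree selection
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Cross-validation on the training set picks the degree
pipe = Pipeline([("poly", PolynomialFeatures()), ("lr", LinearRegression())])
search = GridSearchCV(pipe, {"poly__degree": [1, 2, 3, 5, 8]}, cv=5)
search.fit(X_train, y_train)
print("chosen degree:", search.best_params_["poly__degree"])
```

Only after the degree is fixed do we evaluate once on `X_test`, so the reported metric stays honest.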
Self-Check Question

Your model with polynomial features has 98% accuracy but only 12% recall on the positive class (e.g., fraud). Is this good for production?

Answer: No. The model misses most positive cases (low recall). Even with high accuracy, it fails to catch important cases. You should improve recall before using it in production.
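The accuracy paradox behind this answer can be reproduced in a few lines, assuming a toy fraud dataset with 2% positives (the counts are made up for illustration):

```python
import numpy as np

# 1000 transactions, 2% fraud (label 1) -- illustrative imbalance
y_true = np.array([1] * 20 + [0] * 980)
y_pred = np.zeros_like(y_true)        # a "model" that always predicts 'not fraud'

accuracy = (y_pred == y_true).mean()  # looks great: 0.98
recall = y_pred[y_true == 1].mean()   # catches no fraud at all: 0.0
print(f"accuracy={accuracy:.2f}  recall={recall:.2f}")
```

A do-nothing classifier scores 98% accuracy here, which is why recall on the positive class must be checked before deployment.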

Key Result
Polynomial features improve model fit but require balancing training and validation loss to avoid overfitting.