
Polynomial regression pipeline in ML Python - Model Metrics & Evaluation

Which metric matters for Polynomial Regression and WHY

For polynomial regression, we want to measure how close our predicted values are to the actual values. The key metrics are Mean Squared Error (MSE) and Root Mean Squared Error (RMSE), where RMSE is simply the square root of MSE, expressed in the same units as the target. These metrics tell us the average squared difference between predicted and true values. Lower values mean better predictions.

We also look at R-squared (R²), which shows how much of the variation in the data our model explains. R² is at most 1, where 1 means perfect prediction and values near 0 mean the model explains little. On test data, R² can even be negative when the model is worse than simply predicting the mean.
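As a minimal sketch of computing these metrics, assuming scikit-learn is available (the quadratic toy data below is made up for illustration):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Made-up data: a noisy quadratic relationship
rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 50).reshape(-1, 1)
y = 0.5 * X.ravel() ** 2 - X.ravel() + rng.normal(0, 0.3, 50)

# Fit a degree-2 polynomial regression
X_poly = PolynomialFeatures(degree=2).fit_transform(X)
model = LinearRegression().fit(X_poly, y)
y_pred = model.predict(X_poly)

mse = mean_squared_error(y, y_pred)
rmse = np.sqrt(mse)        # same units as y, easier to interpret
r2 = r2_score(y, y_pred)   # fraction of variance explained

print(f"MSE={mse:.3f}  RMSE={rmse:.3f}  R²={r2:.3f}")
```

Because the fitted degree matches the true quadratic, MSE lands near the noise variance and R² comes out close to 1.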

Confusion Matrix or Equivalent Visualization

Polynomial regression is a regression task, not classification, so we do not use a confusion matrix. Instead, we visualize results with a plot:

    Actual values:    *   *     *  *    *
    Predicted curve:  ---\___/---\___/---
    
    The closer the curve is to the stars, the better the model.

This visual helps us see if the model fits the data well or misses important patterns.
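The picture above can be drawn in code. This is a hypothetical sketch using numpy and matplotlib: actual values as star markers, the fitted polynomial as a curve (the sine data and degree 5 are illustrative choices, not from the lesson):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt
import numpy as np

# Made-up noisy data to stand in for "actual values"
rng = np.random.default_rng(1)
x = np.linspace(0, 6, 40)
y_actual = np.sin(x) + rng.normal(0, 0.2, x.size)

# Fit a degree-5 polynomial and evaluate it on a fine grid
coeffs = np.polyfit(x, y_actual, deg=5)
x_fine = np.linspace(0, 6, 200)
y_curve = np.polyval(coeffs, x_fine)

plt.scatter(x, y_actual, marker="*", label="actual values")
plt.plot(x_fine, y_curve, label="predicted curve")
plt.legend()
plt.savefig("fit.png")
```

The closer the curve tracks the stars without wiggling between them, the better the fit.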

Tradeoff: Underfitting vs Overfitting

Polynomial regression degree controls model complexity:

  • Low degree (e.g., 1 or 2): Model is simple and may miss patterns (underfitting). MSE will be high.
  • High degree (e.g., 10+): Model fits training data very closely but may fail on new data (overfitting). Training MSE is low but test MSE is high.

We want to find a degree that balances this tradeoff, giving low error on both training and new data.
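The tradeoff above can be seen directly by sweeping the degree. A sketch assuming scikit-learn, with made-up sine data: training MSE can only fall as the degree grows (higher-degree feature sets contain the lower-degree ones), while test MSE typically rises once the model starts fitting noise:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

# Made-up noisy sine data
rng = np.random.default_rng(42)
X = np.sort(rng.uniform(-3, 3, 60)).reshape(-1, 1)
y = np.sin(X.ravel()) + rng.normal(0, 0.25, 60)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

results = {}
for degree in (1, 3, 12):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    test_mse = mean_squared_error(y_test, model.predict(X_test))
    results[degree] = (train_mse, test_mse)
    print(f"degree={degree:2d}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")
```

Comparing the two columns, not just the training one, is what reveals the balanced degree.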

What "Good" vs "Bad" Metric Values Look Like

Good polynomial regression results:

  • Low MSE/RMSE: Close to 0, meaning predictions are near actual values.
  • High R²: Close to 1, meaning model explains most data variation.

Bad results:

  • High MSE/RMSE: Large errors, model predictions far from actual.
  • Low or negative R²: Model explains little or no variation, worse than just guessing the average.

Common Pitfalls in Polynomial Regression Metrics
  • Overfitting: Very low training error but high test error means model memorizes noise, not true pattern.
  • Ignoring test data: Only checking training error can mislead about real performance.
  • Using R² alone: High R² on training data can hide overfitting; always check test error.
  • Data leakage: Using future or test data in training inflates metrics falsely.
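One way to sidestep the leakage pitfall is to split the data before fitting anything, and wrap all preprocessing in a Pipeline so the scaler and polynomial expansion are fit on training data only. A sketch with made-up cubic data (the specific degrees and sizes are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Made-up noisy cubic data
rng = np.random.default_rng(7)
X = rng.uniform(-2, 2, (80, 1))
y = X.ravel() ** 3 + rng.normal(0, 0.3, 80)

# Split FIRST, then fit every preprocessing step on training data only
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# The Pipeline guarantees the transformers never see test data during fit
model = make_pipeline(PolynomialFeatures(degree=3),
                      StandardScaler(),
                      LinearRegression())
model.fit(X_train, y_train)

test_rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(f"leak-free test RMSE: {test_rmse:.3f}")
```

Fitting the scaler on the full dataset before splitting would let test statistics bleed into training, inflating the metrics.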

Self-Check Question

Your polynomial regression model has a training RMSE of 0.5 but a test RMSE of 5.0. Is this good? Why or why not?

Answer: This is not good. The large difference means the model fits training data well but performs poorly on new data. This is overfitting. You should try a simpler model or use regularization.
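As an illustrative sketch of the regularization option (made-up data, scikit-learn assumed), ridge regression shrinks the large high-degree coefficients that drive this kind of overfitting:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error

# Made-up small, noisy dataset: easy for a degree-12 model to memorize
rng = np.random.default_rng(3)
X = np.sort(rng.uniform(-3, 3, 40)).reshape(-1, 1)
y = np.sin(X.ravel()) + rng.normal(0, 0.3, 40)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

def evaluate_rmse(model):
    model.fit(X_train, y_train)
    return np.sqrt(mean_squared_error(y_test, model.predict(X_test)))

plain = make_pipeline(PolynomialFeatures(12), StandardScaler(), LinearRegression())
ridge = make_pipeline(PolynomialFeatures(12), StandardScaler(), Ridge(alpha=1.0))

rmse_plain = evaluate_rmse(plain)
rmse_ridge = evaluate_rmse(ridge)
print(f"degree-12 plain test RMSE: {rmse_plain:.3f}")
print(f"degree-12 ridge test RMSE: {rmse_ridge:.3f}")
```

The alpha parameter controls the penalty strength; tuning it (e.g. by cross-validation) trades a little training error for better generalization.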

Key Result
Mean Squared Error and R-squared are key metrics to evaluate polynomial regression, balancing fit quality and generalization.