
Matrix factorization basics in ML Python - Model Metrics & Evaluation

Which metric matters for Matrix Factorization and WHY

Matrix factorization is often used to predict missing values, such as unobserved user-item ratings in recommendation systems. The key metrics to check are Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). Both measure how close the predicted values are to the actual values, so lower RMSE or MAE means better predictions. We use these because matrix factorization predicts continuous values; classification metrics such as accuracy do not apply.

Confusion Matrix or Equivalent Visualization

Matrix factorization predicts numbers, not categories, so a confusion matrix does not apply. Instead, we look at error values such as RMSE or MAE.

    Example:
    Actual ratings:    [4, 5, 3, 2]
    Predicted ratings: [3.8, 4.9, 2.7, 2.1]

    Calculate RMSE:
    RMSE = sqrt(((4-3.8)^2 + (5-4.9)^2 + (3-2.7)^2 + (2-2.1)^2) / 4)
         = sqrt((0.04 + 0.01 + 0.09 + 0.01) / 4)
         = sqrt(0.15 / 4) = sqrt(0.0375) ≈ 0.194
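The same calculation can be checked in Python. This is a minimal sketch using NumPy, with the array values taken from the example above:

```python
import numpy as np

# Ratings from the worked example above
actual = np.array([4.0, 5.0, 3.0, 2.0])
predicted = np.array([3.8, 4.9, 2.7, 2.1])

# RMSE penalizes large errors more heavily; MAE treats all errors linearly
rmse = np.sqrt(np.mean((actual - predicted) ** 2))
mae = np.mean(np.abs(actual - predicted))

print(f"RMSE = {rmse:.3f}")
print(f"MAE  = {mae:.3f}")
```

RMSE squares each error before averaging, so a single large miss raises it more than it raises MAE; reporting both gives a fuller picture of the error distribution.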
    
Tradeoff: Error Metrics and Model Complexity

Matrix factorization models can be simple or complex. A simple model might not capture enough detail, causing high error (underfitting). A complex model fits training data well but may predict poorly on new data (overfitting).

We balance this by checking error on training and test data. If training error is low but test error is high, the model is overfitting. If both errors are high, the model is too simple.

Choosing the right number of factors (dimensions) is key. Too few factors = high error. Too many factors = overfitting.
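One way to see this tradeoff is to sweep the number of latent factors on synthetic data and compare training and test RMSE. The sketch below is illustrative, not a production recommender: the data shapes, hyperparameters (`epochs`, `lr`, `reg`), and helper names (`factorize`, `masked_rmse`) are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ratings generated from 3 true latent factors plus noise
n_users, n_items, true_rank = 30, 20, 3
R = (rng.normal(size=(n_users, true_rank))
     @ rng.normal(size=(true_rank, n_items))
     + rng.normal(scale=0.1, size=(n_users, n_items)))

# Hold out ~20% of entries as a test set
train_mask = rng.random(R.shape) < 0.8

def factorize(R, mask, k, epochs=200, lr=0.01, reg=0.01):
    """Plain SGD matrix factorization over the observed (masked) entries."""
    rng_init = np.random.default_rng(1)
    U = rng_init.normal(scale=0.1, size=(R.shape[0], k))
    V = rng_init.normal(scale=0.1, size=(R.shape[1], k))
    rows, cols = np.where(mask)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    return U, V

def masked_rmse(R, mask, U, V):
    """RMSE restricted to the entries selected by mask."""
    pred = U @ V.T
    return float(np.sqrt(np.mean((R[mask] - pred[mask]) ** 2)))

results = {}
for k in (1, 3, 15):  # too few, about right, and more than the true rank
    U, V = factorize(R, train_mask, k)
    results[k] = (masked_rmse(R, train_mask, U, V),
                  masked_rmse(R, ~train_mask, U, V))
    print(f"k={k:2d}  train RMSE={results[k][0]:.3f}  "
          f"test RMSE={results[k][1]:.3f}")
```

With too few factors both errors stay high (underfitting); adding factors lowers training error, and a widening train/test gap at large k is the overfitting signal described above.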

What "Good" vs "Bad" Metric Values Look Like

Good matrix factorization results have low RMSE or MAE, meaning predictions are close to actual values.

  • Good: RMSE around 0.1 to 0.3 on test data (depends on rating scale)
  • Bad: RMSE above 1.0 on a 1-5 rating scale means predictions are off by a full rating point on average

Also, the difference between training and test error should be small. Large gaps mean overfitting or underfitting.

Common Pitfalls in Matrix Factorization Metrics

  • Ignoring test error: Only checking training error can hide overfitting.
  • Using classification metrics: Accuracy or confusion matrix don't apply to continuous predictions.
  • Data leakage: If test data leaks into training, error looks artificially low.
  • Not tuning factors: Using too many or too few latent factors without validation hurts performance.
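The data-leakage pitfall above can be avoided by splitting the observed entries into disjoint index sets before training. This is a minimal sketch; the entry count and 80/20 split are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Indices of the observed ratings (illustrative count)
n_observed = 100
shuffled = rng.permutation(n_observed)

# Disjoint train/test index sets: no test entry is ever seen during training
train_idx, test_idx = shuffled[:80], shuffled[80:]

assert set(train_idx).isdisjoint(test_idx)  # guards against leakage
```

If the same entry appears in both sets, the model has effectively memorized part of the test set, and the reported test error will be artificially low.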

Self-Check Question

Your matrix factorization model has a training RMSE of 0.05 but a test RMSE of 1.2. Is it good for production? Why or why not?

Answer: No, it is not good. The very low training error but high test error means the model overfits the training data and will not predict well on new data.
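The rule of thumb behind this answer can be encoded as a small helper. The name `overfitting_check` and the 2x ratio threshold are illustrative assumptions, not a standard API:

```python
def overfitting_check(train_rmse: float, test_rmse: float,
                      ratio_threshold: float = 2.0) -> bool:
    """Flag likely overfitting when test error dwarfs training error.

    The 2x threshold is an illustrative assumption; tune it per task.
    """
    return test_rmse > ratio_threshold * train_rmse

# Scenario from the question: train RMSE 0.05, test RMSE 1.2
print(overfitting_check(0.05, 1.2))
```

Here the test RMSE is 24 times the training RMSE, far beyond any reasonable gap, so the check flags the model as overfit.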

Key Result
For matrix factorization, low RMSE or MAE on test data shows good prediction accuracy and balanced model complexity.