
Evaluation metrics (RMSE, precision@k) in ML Python

Introduction

We use evaluation metrics to measure how well a machine learning model performs. RMSE tells us how close numeric predictions are to actual values, and precision@k shows what fraction of the top k predictions are relevant.

Use RMSE when predicting continuous numbers, such as house prices or temperatures.
Use precision@k when recommending top items to users, such as movies or products.
Use either metric when comparing different models to pick the best one.
Track these metrics to see whether your model is improving during training.
Report them when you want to explain model quality to others in simple terms.
Syntax
ML Python
RMSE = sqrt(mean((y_true - y_pred)**2))

precision@k = (Number of relevant items in top k predictions) / k

RMSE measures average error size in the same units as the target.

Precision@k focuses on the top k results, useful for ranking or recommendation tasks.
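The RMSE formula above can be checked by hand in plain Python, without any library. This is a minimal sketch; the numbers are illustrative:

```python
import math

def rmse(y_true, y_pred):
    # Mean of squared differences, then the square root
    squared_errors = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

print(round(rmse([3, -0.5, 2, 7], [2.5, 0.0, 2, 8]), 3))  # 0.612
```

Because the error is averaged before the square root, RMSE stays in the same units as the target, which makes it easy to interpret.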

Examples
Calculate RMSE for simple numeric predictions.
ML Python
from sklearn.metrics import mean_squared_error
import numpy as np

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(rmse)
Calculate precision@3 for a list of recommended items.
ML Python
def precision_at_k(y_true, y_pred, k):
    top_k = y_pred[:k]
    relevant = sum(1 for item in top_k if item in y_true)
    return relevant / k

# Example
relevant_items = ['apple', 'banana', 'orange']
predicted_items = ['banana', 'apple', 'grape', 'orange']
print(precision_at_k(relevant_items, predicted_items, 3))
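The choice of k matters: the same ranking can score differently at different cutoffs. A short sketch, reusing the same function and illustrative items as above, that prints precision@k for k from 1 to 4:

```python
def precision_at_k(y_true, y_pred, k):
    top_k = y_pred[:k]
    return sum(1 for item in top_k if item in y_true) / k

relevant = ['apple', 'banana', 'orange']
predicted = ['banana', 'apple', 'grape', 'orange']

# Precision changes as the cutoff k grows
for k in range(1, 5):
    print(k, precision_at_k(relevant, predicted, k))
```

Here precision@2 is 1.0 (both top items are relevant), drops at k=3 because 'grape' is not relevant, and rises again at k=4 when 'orange' enters the window.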
Sample Model

This program calculates RMSE for numeric predictions and precision@3 for recommended items, showing how close predictions are and how many top recommendations are correct.

ML Python
from sklearn.metrics import mean_squared_error
import numpy as np

def precision_at_k(y_true, y_pred, k):
    top_k = y_pred[:k]
    relevant = sum(1 for item in top_k if item in y_true)
    return relevant / k

# RMSE example
actual = [10, 20, 30, 40, 50]
predicted = [12, 18, 33, 37, 52]
rmse_value = np.sqrt(mean_squared_error(actual, predicted))

# precision@k example
true_items = ['cat', 'dog', 'rabbit']
predicted_items = ['dog', 'rabbit', 'horse', 'cat', 'mouse']
precision_value = precision_at_k(true_items, predicted_items, 3)

print(f"RMSE: {rmse_value:.2f}")
print(f"Precision@3: {precision_value:.2f}")
Output
RMSE: 2.45
Precision@3: 0.67
Important Notes

RMSE is sensitive to large errors because it squares the differences.
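This sensitivity is easy to demonstrate: squaring amplifies one large error far more than it does several small ones. A minimal sketch with illustrative numbers, comparing RMSE against mean absolute error (MAE):

```python
import numpy as np

y_true = np.array([10.0, 20.0, 30.0, 40.0])
good = np.array([11.0, 21.0, 29.0, 41.0])     # every error has size 1
outlier = np.array([11.0, 21.0, 29.0, 60.0])  # one error has size 20

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

def mae(a, b):
    return np.mean(np.abs(a - b))

print(rmse(y_true, good), mae(y_true, good))        # both 1.0
print(rmse(y_true, outlier), mae(y_true, outlier))  # RMSE jumps far more than MAE
```

With the outlier, MAE rises to 5.75 but RMSE rises to about 10.04, because the squared error of 400 dominates the average.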

Precision@k ignores the order of items within the top k, and it says nothing about relevant items ranked below position k.
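This blind spot is easy to see: two rankings with the same set of items in their top k get identical scores, no matter how those items are ordered. A small sketch with illustrative item names:

```python
def precision_at_k(y_true, y_pred, k):
    top_k = y_pred[:k]
    return sum(1 for item in top_k if item in y_true) / k

relevant = ['a', 'b']
ranking_1 = ['a', 'b', 'x']  # relevant items ranked first
ranking_2 = ['x', 'b', 'a']  # relevant items ranked last

# Same score for both: order within the top 3 is ignored
print(precision_at_k(relevant, ranking_1, 3))
print(precision_at_k(relevant, ranking_2, 3))
```

If ranking position matters for your task, consider an order-aware metric such as mean average precision or NDCG alongside precision@k.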

Always choose metrics that fit your problem type: regression or ranking.

Summary

RMSE measures average prediction error size for numbers.

Precision@k measures accuracy of top k predictions in ranking tasks.

Use these metrics to understand and improve your model's performance.