We use evaluation metrics to check how well a machine learning model is performing. RMSE tells us how far numeric predictions are from actual values, and precision@k tells us what fraction of the top k predictions are relevant.
Evaluation metrics (RMSE, precision@k) for machine learning in Python
RMSE = sqrt(mean((y_true - y_pred)**2))
precision@k = (number of relevant items in top k predictions) / k
RMSE measures average error size in the same units as the target.
Precision@k focuses on the top k results, useful for ranking or recommendation tasks.
```python
from sklearn.metrics import mean_squared_error
import numpy as np

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]

# Take the square root of the mean squared error to get RMSE
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(rmse)
```
```python
def precision_at_k(y_true, y_pred, k):
    # Count how many of the top k predictions appear in the relevant set
    top_k = y_pred[:k]
    relevant = sum(1 for item in top_k if item in y_true)
    return relevant / k

# Example
relevant_items = ['apple', 'banana', 'orange']
predicted_items = ['banana', 'apple', 'grape', 'orange']
print(precision_at_k(relevant_items, predicted_items, 3))
```
This program calculates RMSE for numeric predictions and precision@3 for recommended items, showing how close predictions are and how many top recommendations are correct.
```python
from sklearn.metrics import mean_squared_error
import numpy as np

def precision_at_k(y_true, y_pred, k):
    # Count how many of the top k predictions appear in the relevant set
    top_k = y_pred[:k]
    relevant = sum(1 for item in top_k if item in y_true)
    return relevant / k

# RMSE example
actual = [10, 20, 30, 40, 50]
predicted = [12, 18, 33, 37, 52]
rmse_value = np.sqrt(mean_squared_error(actual, predicted))

# precision@k example
true_items = ['cat', 'dog', 'rabbit']
predicted_items = ['dog', 'rabbit', 'horse', 'cat', 'mouse']
precision_value = precision_at_k(true_items, predicted_items, 3)

print(f"RMSE: {rmse_value:.2f}")
print(f"Precision@3: {precision_value:.2f}")
```
RMSE is sensitive to large errors because it squares the differences.
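To see this sensitivity concretely, here is a small sketch (with made-up numbers) comparing RMSE against mean absolute error (MAE) on two prediction sets that have the same MAE, where one contains a single large error:

```python
import numpy as np

def rmse(a, b):
    # Root mean squared error: squaring amplifies large errors
    return np.sqrt(np.mean((a - b) ** 2))

def mae(a, b):
    # Mean absolute error: all errors weighted linearly
    return np.mean(np.abs(a - b))

y_true = np.array([10.0, 20.0, 30.0, 40.0])
y_pred_even = np.array([12.0, 22.0, 28.0, 38.0])     # four errors of 2
y_pred_outlier = np.array([10.0, 20.0, 30.0, 48.0])  # one error of 8

print(rmse(y_true, y_pred_even), mae(y_true, y_pred_even))        # 2.0 2.0
print(rmse(y_true, y_pred_outlier), mae(y_true, y_pred_outlier))  # 4.0 2.0
```

Both prediction sets have an MAE of 2.0, but the single large error doubles the RMSE because squaring weights it more heavily.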
Precision@k does not consider the order of items within the top k, nor how many relevant items exist beyond k.
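A quick sketch (with hypothetical item names) shows that precision@k ignores ordering within the top k: two rankings that place the relevant items in different positions still score the same.

```python
def precision_at_k(y_true, y_pred, k):
    # Fraction of the top k predictions that are relevant
    top_k = y_pred[:k]
    return sum(1 for item in top_k if item in y_true) / k

relevant = ['cat', 'dog', 'rabbit']

# Same top-3 items, different order
ranking_a = ['cat', 'dog', 'horse', 'rabbit']
ranking_b = ['horse', 'dog', 'cat', 'rabbit']

print(precision_at_k(relevant, ranking_a, 3))  # 0.666...
print(precision_at_k(relevant, ranking_b, 3))  # 0.666... (same score)
```

Ranking A puts relevant items first, yet both rankings score 2/3; rank-aware metrics such as MAP or NDCG are used when position within the top k matters.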
Always choose metrics that fit your problem type: regression or ranking.
RMSE measures the average size of prediction errors for numeric targets.
Precision@k measures accuracy of top k predictions in ranking tasks.
Use these metrics to understand and improve your model's performance.