Complete the code to calculate the RMSE between true and predicted values.
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) [1] 2))
The RMSE formula squares the difference between true and predicted values, so we use the power operator ** with 2.
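As a worked reference, a completed version of the function might look like this (the sample arrays are illustrative, not from the exercise):

```python
import numpy as np

def rmse(y_true, y_pred):
    # square the errors, average them, then take the square root
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

y_true = np.array([3.0, 5.0, 2.0])
y_pred = np.array([2.0, 5.0, 4.0])
print(rmse(y_true, y_pred))  # sqrt((1 + 0 + 4) / 3) = sqrt(5/3) ≈ 1.291
```

Note that `**` is element-wise on numpy arrays, so no explicit loop is needed.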
Complete the code to calculate precision@k for a list of predicted scores and true labels.
def precision_at_k(y_true, y_scores, k):
    top_k_indices = sorted(range(len(y_scores)), key=lambda i: y_scores[i], reverse=True)[:[1]]
    relevant = sum(y_true[i] for i in top_k_indices)
    return relevant / k
Precision@k measures how many of the top k predicted items are relevant, so we slice the top k indices.
Fix the error in the RMSE calculation by completing the code.
def rmse(y_true, y_pred):
    differences = y_true - y_pred
    squared_diff = differences [1] 2
    mean_squared_diff = squared_diff.mean()
    return mean_squared_diff ** 0.5
The differences must be squared using the power operator ** to correctly compute RMSE.
Fill both blanks to complete the precision@k calculation correctly.
def precision_at_k(y_true, y_scores, k):
    top_k_indices = sorted(range(len(y_scores)), key=lambda i: y_scores[i], reverse=True)[:[1]]
    relevant = sum(y_true[i] for i in top_k_indices)
    return relevant [2] k
We select the top k items and divide the count of relevant items by k to get precision@k.
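For reference, a completed precision@k with a small hand-checkable example (the label and score lists are illustrative):

```python
def precision_at_k(y_true, y_scores, k):
    # indices of the k highest-scoring items
    top_k_indices = sorted(range(len(y_scores)), key=lambda i: y_scores[i], reverse=True)[:k]
    # count how many of those top-k items are labeled relevant (1)
    relevant = sum(y_true[i] for i in top_k_indices)
    return relevant / k

y_true = [1, 0, 1, 1, 0]
y_scores = [0.9, 0.8, 0.7, 0.4, 0.2]
print(precision_at_k(y_true, y_scores, 2))  # top 2 are items 0 and 1 -> 1 relevant / 2 = 0.5
```

Sorting indices by score (rather than sorting the scores themselves) keeps the alignment with `y_true` intact.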
Fill all three blanks to complete the RMSE function with numpy correctly.
import numpy as np

def rmse(y_true, y_pred):
    error = y_true [1] y_pred
    squared_error = error [2] 2
    mean_squared_error = np.[3](squared_error)
    return np.sqrt(mean_squared_error)
RMSE is calculated by subtracting predictions from true values, squaring the errors, then taking the mean and square root.