ML Python · ~20 mins

Evaluation metrics (RMSE, precision@k) in ML Python - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output
intermediate
Calculate RMSE for given predictions and true values
Given the true values and predicted values below, what is the RMSE (Root Mean Squared Error)?
```python
import numpy as np

true_values = np.array([3, -0.5, 2, 7])
predictions = np.array([2.5, 0.0, 2, 8])
rmse = np.sqrt(np.mean((true_values - predictions) ** 2))
print(round(rmse, 3))
```
A. 0.500
B. 0.750
C. 1.000
D. 0.612
💡 Hint
RMSE is the square root of the average of squared differences between true and predicted values.
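The hint can be made concrete with a quick NumPy check on the same data as the problem: square the differences, average them, then take the square root.

```python
import numpy as np

true_values = np.array([3, -0.5, 2, 7])
predictions = np.array([2.5, 0.0, 2, 8])

squared_errors = (true_values - predictions) ** 2  # [0.25, 0.25, 0.0, 1.0]
rmse = np.sqrt(squared_errors.mean())              # sqrt(0.375) ≈ 0.6124
print(round(rmse, 3))  # → 0.612
```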
🧠 Conceptual
intermediate
Understanding Precision@k in recommendation systems
In a recommendation system, what does precision@3 measure?
A. The fraction of the top 3 recommended items that are relevant to the user
B. The fraction of all relevant items recommended to the user
C. The fraction of users who received at least 3 relevant recommendations
D. The average rating of the top 3 recommended items
💡 Hint
Precision@k focuses on the top k items recommended and how many are relevant.
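The definition in the hint translates directly into code. A minimal sketch (the item IDs and helper name here are illustrative, not from the problem):

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    return hits / k

recommended = [10, 4, 7, 1, 2]  # ranked recommendations, best first
relevant = {4, 7, 9}            # items the user actually found relevant
print(precision_at_k(recommended, relevant, 3))  # 2 of the top 3 are relevant → 0.666...
```

Note that the denominator is always k, regardless of how many items were recommended in total.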
Metrics
advanced
Choosing the right metric for regression error
Which metric is more sensitive to large errors in predictions?
A. Mean Absolute Percentage Error (MAPE)
B. Mean Absolute Error (MAE)
C. Root Mean Squared Error (RMSE)
D. R-squared (Coefficient of Determination)
💡 Hint
Think about how squaring errors affects large mistakes.
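To see the effect of squaring, compare two prediction sets with the same total absolute error: one spreads the error evenly, the other concentrates it in a single large mistake (the data values below are made up for illustration):

```python
import numpy as np

true_vals = np.array([10.0, 12.0, 11.0, 10.5])
pred_even = np.array([9.0, 13.0, 10.0, 11.5])      # four errors of size 1
pred_outlier = np.array([10.0, 12.0, 11.0, 14.5])  # one error of size 4

def mae(y, p):
    return np.mean(np.abs(y - p))

def rmse(y, p):
    return np.sqrt(np.mean((y - p) ** 2))

# MAE treats both the same (1.0); RMSE penalizes the single big miss.
print(mae(true_vals, pred_even), rmse(true_vals, pred_even))        # 1.0 1.0
print(mae(true_vals, pred_outlier), rmse(true_vals, pred_outlier))  # 1.0 2.0
```

Squaring makes a size-4 error count 16× as much as a size-1 error, which is why RMSE is the more outlier-sensitive choice.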
🔧 Debug
advanced
Identify the error in precision@k calculation code
What is wrong with the following precision@k calculation?

```python
relevant_items = {1, 3, 5, 7}
recommended_items = [2, 3, 4, 5]
k = 3
precision_at_k = len(set(recommended_items[:k]) & relevant_items) / len(recommended_items)
print(round(precision_at_k, 2))
```
A. Incorrect precision value because the denominator should be k, not the total number of recommended items
B. ZeroDivisionError because len(recommended_items) is zero
C. TypeError because set intersection is invalid between a list and a set
D. SyntaxError due to a missing colon
💡 Hint
Precision@k divides by k, not total recommended items length.
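The fix suggested by the hint is a one-character change in the denominator. Running the buggy and corrected versions side by side on the problem's data shows the difference:

```python
relevant_items = {1, 3, 5, 7}
recommended_items = [2, 3, 4, 5]
k = 3

# Buggy: divides by len(recommended_items) == 4, so the top-3 hit {3}
# is scored as 1/4.
buggy = len(set(recommended_items[:k]) & relevant_items) / len(recommended_items)

# Fixed: precision@k divides by k.
fixed = len(set(recommended_items[:k]) & relevant_items) / k

print(round(buggy, 2), round(fixed, 2))  # 0.25 0.33
```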
Model Choice
expert
Selecting evaluation metric for imbalanced classification with ranking
You have a highly imbalanced dataset and want to evaluate a model that ranks positive samples higher than negatives. Which metric is best suited?
A. Accuracy
B. Precision@k
C. Root Mean Squared Error (RMSE)
D. Mean Squared Error (MSE)
💡 Hint
Think about metrics that focus on top-ranked relevant items in imbalanced data.
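A small simulation makes the hint concrete. On a 1%-positive dataset (synthetic numbers, chosen here for illustration), a classifier that always predicts "negative" scores 99% accuracy yet ranks nothing; precision@k on the ranked output exposes the difference:

```python
import numpy as np

rng = np.random.default_rng(0)
labels = np.zeros(1000, dtype=int)
labels[:10] = 1  # 1% positives: highly imbalanced

# Trivial "always negative" classifier: high accuracy, useless ranking.
accuracy = np.mean(labels == 0)  # 0.99

# A model whose scores put every positive above every negative.
scores = labels + rng.uniform(0, 0.1, size=1000)
top10 = np.argsort(scores)[::-1][:10]   # indices of the 10 highest scores
precision_at_10 = labels[top10].mean()  # all top-10 are positives → 1.0

print(accuracy, precision_at_10)
```

Accuracy rewards the degenerate classifier, while precision@k directly measures whether the positives land at the top of the ranking, which is what the question asks for.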