Complete the code to calculate the Mean Absolute Error (MAE) between true and predicted values.
from sklearn.metrics import [1]

true = [3, -0.5, 2, 7]
pred = [2.5, 0.0, 2, 8]

mae = [1](true, pred)
print(mae)
The Mean Absolute Error (MAE) measures the average absolute difference between true and predicted values. The function mean_absolute_error from sklearn.metrics computes this metric.
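A completed version of the snippet, filling both occurrences of blank [1] with mean_absolute_error as the explanation states, might look like:

```python
from sklearn.metrics import mean_absolute_error

true = [3, -0.5, 2, 7]
pred = [2.5, 0.0, 2, 8]

# Blank [1] is mean_absolute_error in both positions:
# the import and the function call.
mae = mean_absolute_error(true, pred)
print(mae)  # 0.5
```

Here the per-point absolute errors are 0.5, 0.5, 0.0, and 1.0, so their average is 0.5.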
Complete the code to calculate the Root Mean Squared Error (RMSE) from the Mean Squared Error (MSE).
from sklearn.metrics import mean_squared_error
import numpy as np

true = [3, -0.5, 2, 7]
pred = [2.5, 0.0, 2, 8]

mse = mean_squared_error(true, pred)
rmse = np.[1](mse)
print(rmse)
The Root Mean Squared Error (RMSE) is the square root of the Mean Squared Error (MSE). The np.sqrt function calculates the square root.
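Filling blank [1] with sqrt, as the explanation indicates, gives a complete snippet:

```python
from sklearn.metrics import mean_squared_error
import numpy as np

true = [3, -0.5, 2, 7]
pred = [2.5, 0.0, 2, 8]

mse = mean_squared_error(true, pred)   # MSE = 0.375 for these values
rmse = np.sqrt(mse)                    # blank [1] is sqrt
print(rmse)
```

For these values the squared errors are 0.25, 0.25, 0.0, and 1.0, so MSE = 0.375 and RMSE = sqrt(0.375) ≈ 0.612.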
Fix the error in the code to calculate Mean Absolute Percentage Error (MAPE) manually.
true = [100, 200, 300, 400]
pred = [110, 190, 310, 420]

mape = 100 * sum(abs((true[i] - pred[i]) / [1]) for i in range(len(true))) / len(true)
print(mape)
Mean Absolute Percentage Error (MAPE) divides the absolute error by the true value at each point. So the denominator should be true[i].
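With blank [1] replaced by true[i], per the explanation, the fixed code reads:

```python
true = [100, 200, 300, 400]
pred = [110, 190, 310, 420]

# The denominator (blank [1]) must be true[i]: MAPE scales each
# absolute error by the corresponding actual value.
mape = 100 * sum(abs((true[i] - pred[i]) / true[i])
                 for i in range(len(true))) / len(true)
print(mape)
```

The percentage errors here are 10%, 5%, 3.33%, and 5%, giving a MAPE of about 5.83%.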
Fill both blanks to create a dictionary of errors where keys are error names and values are their computed scores.
from sklearn.metrics import mean_absolute_error, mean_squared_error

true = [1, 2, 3, 4]
pred = [1.1, 1.9, 3.2, 3.8]

errors = {
    'MAE': [1](true, pred),
    'MSE': [2](true, pred)
}
print(errors)
The dictionary stores the Mean Absolute Error (MAE) and Mean Squared Error (MSE) calculated using their respective functions.
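A completed version, with blank [1] as mean_absolute_error and blank [2] as mean_squared_error, might look like:

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error

true = [1, 2, 3, 4]
pred = [1.1, 1.9, 3.2, 3.8]

errors = {
    'MAE': mean_absolute_error(true, pred),  # blank [1]
    'MSE': mean_squared_error(true, pred)    # blank [2]
}
print(errors)
```

For these values the absolute errors are 0.1, 0.1, 0.2, and 0.2, so MAE = 0.15 and MSE = 0.025.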
Fill all three blanks to create a dictionary comprehension that maps each error name to its score, filtering only errors with score less than 0.5.
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

true = [2, 4, 6, 8]
pred = [2.1, 3.9, 6.2, 7.8]

errors = {
    [1]: [2](true, pred)
    for [3] in ['mean_absolute_error', 'mean_squared_error', 'r2_score']
    if [2](true, pred) < 0.5
}
print(errors)
The dictionary comprehension uses error_name as key, calls the function by name using globals()[error_name], and iterates over error_name in the list of metric names. It filters errors with score less than 0.5.
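Filling the blanks as the explanation describes ([1] and [3] as error_name, [2] as globals()[error_name]) gives a runnable sketch:

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

true = [2, 4, 6, 8]
pred = [2.1, 3.9, 6.2, 7.8]

# [1] = error_name, [2] = globals()[error_name], [3] = error_name.
# globals() looks up each imported function by its string name.
errors = {
    error_name: globals()[error_name](true, pred)
    for error_name in ['mean_absolute_error', 'mean_squared_error', 'r2_score']
    if globals()[error_name](true, pred) < 0.5
}
print(errors)
```

For these values MAE = 0.15 and MSE = 0.025, so both pass the filter, while r2_score evaluates to 0.995 and is excluded. Note the filter mixes metric conventions: for R², higher is better, so a threshold of "less than 0.5" drops good R² values along with large errors.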