Complete the code to calculate the mean absolute error between true and predicted values.
from sklearn.metrics import [1]
true = [3, -0.5, 2, 7]
pred = [2.5, 0.0, 2, 8]
error = [1](true, pred)
print(error)
The mean_absolute_error function calculates the average absolute difference between true and predicted values, which is useful for error analysis.
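For reference, the completed snippet with `mean_absolute_error` filled into both blanks reads as follows; the inline comment shows the arithmetic behind the result:

```python
from sklearn.metrics import mean_absolute_error

true = [3, -0.5, 2, 7]
pred = [2.5, 0.0, 2, 8]

# MAE is the mean of |t - p| over all pairs:
# (|3 - 2.5| + |-0.5 - 0.0| + |2 - 2| + |7 - 8|) / 4
# = (0.5 + 0.5 + 0.0 + 1.0) / 4 = 0.5
error = mean_absolute_error(true, pred)
print(error)  # 0.5
```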
Complete the code to identify samples where the prediction error is greater than 1.0.
errors = [abs(t - p) for t, p in zip(true, pred)]
high_error_indices = [i for i, e in enumerate(errors) if e [1] 1.0]
print(high_error_indices)
We want to find errors greater than 1.0, so the comparison operator should be >.
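With `>` filled in, the completed code runs as below. Note that on this particular data the largest per-sample error is exactly 1.0, which does not satisfy the strict comparison, so the result is empty:

```python
true = [3, -0.5, 2, 7]
pred = [2.5, 0.0, 2, 8]

# Per-sample absolute errors: [0.5, 0.5, 0.0, 1.0]
errors = [abs(t - p) for t, p in zip(true, pred)]

# Strict comparison: 1.0 is not > 1.0, so no index qualifies here
high_error_indices = [i for i, e in enumerate(errors) if e > 1.0]
print(high_error_indices)  # []
```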
Fix the error in the code to compute residuals (difference between true and predicted values).
residuals = [true[i] [1] pred[i] for i in range(len(true))]
print(residuals)
Residuals are calculated as true value minus predicted value, so the operator is -.
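Filling in the `-` operator, the completed snippet produces the residuals for the sample data:

```python
true = [3, -0.5, 2, 7]
pred = [2.5, 0.0, 2, 8]

# residual = true value minus predicted value, per sample
residuals = [true[i] - pred[i] for i in range(len(true))]
print(residuals)  # [0.5, -0.5, 0, -1]
```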
Fill both blanks to create a dictionary of samples whose residuals are positive and greater than 0.5 in absolute value.
large_residuals = {i: residuals[i] for i in range(len(residuals)) if abs(residuals[i]) [1] 0.5 and residuals[i] [2] 0}
print(large_residuals)
We want residuals with an absolute value greater than 0.5 that are also positive, so both comparisons use >.
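A quick sketch of the filled-in comprehension. The residual list here is hypothetical, chosen so the filter actually selects something (the residuals computed from the earlier sample data would yield an empty dict, since no positive residual exceeds 0.5 in magnitude):

```python
# Hypothetical residuals for illustration only
residuals = [0.9, -0.5, 0.2, 0.6]

# Keep samples whose residual is large in magnitude (> 0.5) and positive (> 0)
large_residuals = {i: residuals[i] for i in range(len(residuals))
                   if abs(residuals[i]) > 0.5 and residuals[i] > 0}
print(large_residuals)  # {0: 0.9, 3: 0.6}
```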
Fill all three blanks to create a dictionary of samples where the residual is negative, its absolute value is greater than 0.7, and the residual is not exactly -1.
negative_large_residuals = {i: residuals[i] for i in range(len(residuals)) if residuals[i] [1] 0 and abs(residuals[i]) [2] 0.7 and residuals[i] [3] -1}
print(negative_large_residuals)
Negative residuals are those less than 0 (< 0), the absolute value must be greater than 0.7 (> 0.7), and residuals must not equal -1 (!= -1) to exclude that specific value.
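The completed comprehension, sketched with a hypothetical residual list so that each of the three conditions visibly filters something (the residuals from the earlier sample data would produce an empty result, since -0.5 is too small in magnitude and -1 is excluded):

```python
# Hypothetical residuals for illustration only
residuals = [-0.9, 0.8, -1, -0.75]

negative_large_residuals = {i: residuals[i] for i in range(len(residuals))
                            if residuals[i] < 0          # negative
                            and abs(residuals[i]) > 0.7  # magnitude above 0.7
                            and residuals[i] != -1}      # exclude exactly -1
print(negative_large_residuals)  # {0: -0.9, 3: -0.75}
```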