We use time series evaluation metrics to measure how well a model predicts future values from past data. These metrics tell us whether the model is accurate enough or still needs improvement.
Time series evaluation metrics in ML Python
from sklearn.metrics import mean_absolute_error, mean_squared_error
import numpy as np

# y_true: actual values
# y_pred: predicted values
mae = mean_absolute_error(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
mean_absolute_error (MAE) measures the average absolute difference between actual and predicted values.
mean_squared_error (MSE) squares the differences before averaging, penalizing larger errors more heavily.
root mean squared error (RMSE) is the square root of MSE, which expresses the error in the original units of the data.
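The three definitions above can be written directly with NumPy and checked against scikit-learn; this is a minimal sketch using small made-up arrays:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = np.array([3.0, 5.0, 2.0])
y_pred = np.array([2.5, 5.0, 2.1])

# MAE: mean of the absolute errors
mae_manual = np.mean(np.abs(y_true - y_pred))
# MSE: mean of the squared errors
mse_manual = np.mean((y_true - y_pred) ** 2)
# RMSE: square root of the MSE, back in the original units
rmse_manual = np.sqrt(mse_manual)

# The manual formulas agree with sklearn's implementations
assert np.isclose(mae_manual, mean_absolute_error(y_true, y_pred))
assert np.isclose(mse_manual, mean_squared_error(y_true, y_pred))

print(mae_manual, mse_manual, rmse_manual)
```

Writing the formulas out once makes it clear that RMSE is just MSE mapped back to the scale of the data.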
mae = mean_absolute_error([3, 5, 2], [2.5, 5, 2.1])
print(mae)

mse = mean_squared_error([3, 5, 2], [2.5, 5, 2.1])
print(mse)

rmse = np.sqrt(mean_squared_error([3, 5, 2], [2.5, 5, 2.1]))
print(rmse)
This program compares actual and predicted time series values using MAE, MSE, and RMSE to evaluate prediction accuracy.
from sklearn.metrics import mean_absolute_error, mean_squared_error
import numpy as np

# Actual values of a time series
actual = [100, 150, 200, 250, 300]

# Predicted values from a model
predicted = [110, 140, 210, 240, 310]

# Calculate MAE
mae = mean_absolute_error(actual, predicted)

# Calculate MSE
mse = mean_squared_error(actual, predicted)

# Calculate RMSE
rmse = np.sqrt(mse)

print(f"MAE: {mae:.2f}")
print(f"MSE: {mse:.2f}")
print(f"RMSE: {rmse:.2f}")
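For time series specifically, a raw error number is hard to judge on its own; a common sanity check is to compare the model against a naive forecast that simply repeats the previous observation. A sketch, using made-up values for both the series and the hypothetical model's predictions:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

# Illustrative time series (made-up values)
series = np.array([100, 150, 200, 250, 300], dtype=float)

# Naive forecast: predict each point as the previous observation
actual = series[1:]       # points we are forecasting
naive_pred = series[:-1]  # "tomorrow equals today"

# Hypothetical model predictions for the same points
model_pred = np.array([140, 210, 240, 310], dtype=float)

naive_mae = mean_absolute_error(actual, naive_pred)  # 50.0
model_mae = mean_absolute_error(actual, model_pred)  # 10.0

print(f"Naive MAE: {naive_mae:.2f}")
print(f"Model MAE: {model_mae:.2f}")
```

A model whose MAE does not beat the naive baseline is adding no forecasting value, no matter how small its error looks in isolation.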
Lower values of MAE, MSE, and RMSE mean better predictions.
RMSE is more sensitive to large errors than MAE.
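This sensitivity is easy to demonstrate: two sets of predictions with the same total absolute error get the same MAE, but the one that concentrates the error in a single point gets a much larger RMSE. A small sketch with invented numbers:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = np.array([10.0, 10.0, 10.0, 10.0])

# Four small errors of 2 each (total absolute error: 8)
even_errors = np.array([12.0, 8.0, 12.0, 8.0])
# Same total absolute error, concentrated in one prediction
one_big_error = np.array([10.0, 10.0, 10.0, 18.0])

for name, pred in [("even", even_errors), ("one big", one_big_error)]:
    mae = mean_absolute_error(y_true, pred)
    rmse = np.sqrt(mean_squared_error(y_true, pred))
    print(f"{name}: MAE={mae:.2f}, RMSE={rmse:.2f}")
# MAE is 2.00 in both cases, but RMSE jumps from 2.00 to 4.00
```

If occasional large misses are costly in your application, RMSE is the more informative metric; if all errors matter equally, MAE is easier to interpret.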
Always compare metrics on the same dataset to judge model improvements.
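Comparing two candidate models on the same dataset might look like this; the two prediction arrays below are hypothetical:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

actual = np.array([100.0, 150.0, 200.0, 250.0, 300.0])

# Two hypothetical models evaluated on the same actual values
model_a = np.array([110.0, 140.0, 210.0, 240.0, 310.0])  # off by 10 each
model_b = np.array([105.0, 145.0, 205.0, 245.0, 305.0])  # off by 5 each

mae_a = mean_absolute_error(actual, model_a)  # 10.0
mae_b = mean_absolute_error(actual, model_b)  # 5.0

# Lower error on the same data means a genuine improvement
print(f"Model A MAE: {mae_a:.2f}")
print(f"Model B MAE: {mae_b:.2f}")
```

Comparing errors measured on different datasets, or on different scales, tells you nothing about which model is better.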
Time series evaluation metrics help measure prediction errors.
MAE, MSE, and RMSE are common and easy to use.
Use these metrics to improve and trust your forecasting models.