Which of the following best explains why stationarity is important when analyzing time series data?
Think about why consistent behavior in data helps in making predictions.
Stationarity means the data's mean, variance, and other properties stay constant over time. This consistency helps models learn patterns that hold in the future.
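As a rough, library-free illustration (the series and the comparison of halves are made-up heuristics, not a formal test), a stationary white-noise series keeps a similar mean across time, while a random walk drifts:

```python
import random

random.seed(0)

# White noise: mean and variance stay roughly constant over time (stationary).
noise = [random.gauss(0, 1) for _ in range(1000)]

# Random walk: running sum of the noise; its mean drifts over time (non-stationary).
walk = []
total = 0.0
for x in noise:
    total += x
    walk.append(total)

def half_means(series):
    """Compare the mean of the first half of a series to the second half."""
    mid = len(series) // 2
    return sum(series[:mid]) / mid, sum(series[mid:]) / (len(series) - mid)

print(half_means(noise))  # the two halves stay close together
print(half_means(walk))   # the two halves drift apart
```

Models trained on the noise-like series see the same statistical behavior in both halves; the drifting series violates that assumption.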
Given the time series data [3, 5, 8, 12, 18], what is the result of applying first-order differencing?
data = [3, 5, 8, 12, 18]
differenced = [data[i] - data[i-1] for i in range(1, len(data))]
print(differenced)
Subtract each previous value from the current one.
First-order differencing subtracts each previous value from the current one: 5-3=2, 8-5=3, 12-8=4, 18-12=6, giving [2, 3, 4, 6].
You have a time series with a clear upward trend and non-constant variance. Which model approach is best suited before forecasting?
Think about how to handle trends and changing variance before modeling.
ARIMA models assume stationarity. Differencing removes the trend, and a variance-stabilizing transformation (such as a log transform) applied first handles the non-constant variance, making ARIMA suitable.
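A minimal sketch of this two-step preparation, using a small made-up series with a rising level and growing step sizes:

```python
import math

# Hypothetical series with an upward trend and growing variance
series = [3.0, 5.0, 8.0, 12.0, 18.0, 27.0]

# Step 1: log transform to stabilize the variance
logged = [math.log(x) for x in series]

# Step 2: first-order differencing to remove the trend
diffed = [logged[i] - logged[i - 1] for i in range(1, len(logged))]

print([round(d, 3) for d in diffed])  # roughly constant log-differences
```

The resulting log-differences are far more uniform than the raw increments (2, 3, 4, 6, 9), which is the behavior an ARIMA model expects of its differenced input.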
Which metric or test is commonly used to check if a time series is stationary?
Look for a test that checks for unit roots in time series.
The Augmented Dickey-Fuller (ADF) test checks whether a time series has a unit root, which indicates non-stationarity. A low p-value rejects the unit-root null hypothesis, suggesting the series is stationary.
What error will this code produce when trying to difference a time series?
data = [10, 15, 20]
diff = [data[i] - data[i+1] for i in range(len(data)-1)]
print(diff)
Check the order of subtraction and the indices used.
The code subtracts the next value from the current one, reversing the differencing direction: it prints [-5, -5] instead of [5, 5]. It raises no IndexError, because range(len(data)-1) keeps i+1 in bounds, but the signs of the results are wrong.
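A corrected version, matching the differencing pattern shown in the earlier question:

```python
data = [10, 15, 20]

# Correct direction: current value minus the previous one
diff = [data[i] - data[i - 1] for i in range(1, len(data))]
print(diff)  # [5, 5]
```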