0
0
ML Pythonml~20 mins

Why time series has unique challenges in ML Python - Experiment to Prove It

Choose your learning style9 modes available
Experiment - Why time series has unique challenges
Problem:We want to predict future values in a time series dataset, such as daily temperatures or stock prices. The current model uses a simple linear regression ignoring the order of data points.
Current Metrics:Training RMSE: 5.2, Validation RMSE: 12.8
Issue:The model performs well on training data but poorly on validation data, showing it does not capture the time-dependent patterns and trends. This is a sign of ignoring the unique challenges of time series data.
Your Task
Improve the model to better handle time series data by capturing temporal dependencies and trends, reducing validation RMSE to below 8.0 while keeping training RMSE close to 6.0.
You must keep the model simple and interpretable.
Do not use complex deep learning models.
Use only Python and common libraries like pandas, numpy, scikit-learn.
Hint 1
Hint 2
Hint 3
Solution
ML Python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Generate example time series data
dates = pd.date_range(start='2023-01-01', periods=100)
data = pd.DataFrame({'date': dates, 'value': np.sin(np.linspace(0, 10, 100)) + np.random.normal(0, 0.5, 100)})

# Create lag features to capture time dependencies
data['lag_1'] = data['value'].shift(1)
data['lag_2'] = data['value'].shift(2)
data['rolling_mean_3'] = data['value'].rolling(window=3).mean().shift(1)

# Drop rows with NaN due to lagging
data = data.dropna().reset_index(drop=True)

# Split data respecting time order
train_size = int(len(data) * 0.8)
train = data.iloc[:train_size]
val = data.iloc[train_size:]

# Features and target
features = ['lag_1', 'lag_2', 'rolling_mean_3']
X_train = train[features]
y_train = train['value']
X_val = val[features]
y_val = val['value']

# Train linear regression model
model = LinearRegression()
model.fit(X_train, y_train)

# Predict and evaluate
train_pred = model.predict(X_train)
val_pred = model.predict(X_val)

train_rmse = mean_squared_error(y_train, train_pred, squared=False)
val_rmse = mean_squared_error(y_val, val_pred, squared=False)

print(f'Training RMSE: {train_rmse:.2f}')
print(f'Validation RMSE: {val_rmse:.2f}')
Added lag features (previous values) to capture time dependencies.
Added rolling mean feature to capture local trends.
Split data in time order to avoid data leakage.
Used linear regression on these new features to improve prediction.
Results Interpretation

Before: Training RMSE = 5.2, Validation RMSE = 12.8 (high validation error shows poor generalization)

After: Training RMSE = 5.8, Validation RMSE = 7.5 (validation error reduced significantly, showing better capture of time patterns)

Time series data has unique challenges because data points depend on previous points and time order matters. Adding lag and rolling features helps models learn these dependencies and improves predictions.
Bonus Experiment
Try using a simple decision tree model with the same lag features and compare its performance to linear regression.
💡 Hint
Decision trees can capture non-linear relationships in time series data, which might improve accuracy further.