Complete the code to create a simple RNN layer for time series data.
model = tf.keras.Sequential([
tf.keras.layers.SimpleRNN([1], input_shape=(10, 1))
])
The number 32 specifies the number of units (neurons) in the SimpleRNN layer, which is required to define the layer size.
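As a sanity check on the layer size, the trainable-parameter count of a SimpleRNN can be computed by hand. This is a sketch using the standard formula units * (units + input_features + 1), which covers the recurrent kernel, the input kernel, and the bias; the helper name is illustrative, not a Keras API.

```python
# Parameter count of a SimpleRNN layer: each of the `units` neurons has
# one weight per input feature, one weight per recurrent unit, and one bias.
def simple_rnn_params(units, input_features):
    return units * (units + input_features + 1)

# 32 units over univariate input (1 feature per time step):
print(simple_rnn_params(32, 1))  # 1088
```

You can cross-check this against model.summary(), which reports the per-layer parameter counts.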
Complete the code to compile the RNN model with mean squared error loss.
model.compile(optimizer='adam', loss=[1], metrics=['mae'])
For time series regression, mean squared error ('mse') is the appropriate loss function.
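To make the choice concrete, mean squared error is just the average of squared prediction errors, which penalizes large misses quadratically. A minimal pure-Python sketch (the function name is illustrative; in practice Keras computes this internally when loss='mse'):

```python
# Mean squared error: average of squared differences between
# targets and predictions.
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(mse([1.0, 2.0, 3.0], [1.5, 2.0, 2.0]))  # 0.4166666666666667
```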
Fix the error in the code to correctly reshape the input data for the RNN.
X_train = X_train.reshape((X_train.shape[0], [1], 1))
The second dimension should be the number of time steps, i.e., X_train.shape[1], the original second dimension of X_train.
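The required layout is (samples, time_steps, features); for a univariate series the trailing feature dimension is 1. A short NumPy sketch with hypothetical data (100 windows of 10 steps is an assumed example size):

```python
import numpy as np

# Hypothetical data: 100 windows of 10 time steps each, stored as 2-D.
X_train = np.zeros((100, 10))

# RNN layers expect (samples, time_steps, features); univariate data
# gets an explicit feature dimension of 1.
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
print(X_train.shape)  # (100, 10, 1)
```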
Fill both blanks to add a Dense output layer for univariate time series regression.
tf.keras.layers.Dense([1], activation=[2])
A Dense layer with 1 unit and linear activation is standard for predicting a single continuous value in time series regression.
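A Dense layer with linear activation is simply y = xW + b with no nonlinearity applied, which is what allows it to output any real value. A NumPy sketch of that forward pass (the shapes assume a hidden state of size 32 from the preceding RNN layer):

```python
import numpy as np

# Dense(1, activation='linear') forward pass: y = x @ W + b.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 32))   # batch of 4 hidden states of size 32
W = rng.standard_normal((32, 1))   # kernel: one output unit
b = np.zeros(1)                    # bias

y = x @ W + b                      # no activation; output is unbounded
print(y.shape)  # (4, 1): one continuous prediction per sample
```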
Fill all three blanks to train the RNN model with standard hyperparameters.
history = model.fit(X_train, y_train, epochs=[1], batch_size=[2], validation_split=[3])
Epochs=50, batch_size=32, and validation_split=0.2 are common settings for training RNN models on time series data.
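To see what those hyperparameters imply, the sketch below computes the train/validation sizes and the number of gradient steps per epoch for a hypothetical dataset of 1000 samples. Note that Keras takes the validation split from the end of the data before any shuffling, which matters for ordered time series.

```python
import math

# Hypothetical dataset size; validation_split=0.2 holds out the last 20%.
n_samples = 1000
val_fraction = 0.2
batch_size = 32

n_val = int(n_samples * val_fraction)
n_train = n_samples - n_val
steps_per_epoch = math.ceil(n_train / batch_size)  # gradient steps per epoch

print(n_train, n_val, steps_per_epoch)  # 800 200 25
```

Over 50 epochs this gives 50 * 25 = 1250 gradient updates on the 800 training samples.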