The SimpleRNN layer helps a model remember information from earlier in a sequence. It is useful for tasks where order and time matter, like understanding sentences or time series.
SimpleRNN layer in TensorFlow
tf.keras.layers.SimpleRNN(units, activation='tanh', return_sequences=False, return_state=False, go_backwards=False, stateful=False, dropout=0.0, recurrent_dropout=0.0)
units is the dimensionality of the output space, i.e. the number of recurrent units (memory cells) in the layer.
return_sequences=True makes the layer output the full sequence, not just the last output.
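A quick sketch of the shape difference (the batch, time-step, and feature sizes here are illustrative): with the default return_sequences=False the layer returns only the last step's output of shape (batch, units), while return_sequences=True returns one output per time step, of shape (batch, time_steps, units).

```python
import numpy as np
import tensorflow as tf

# Batch of 2 sequences, each with 5 time steps and 3 features
x = np.random.random((2, 5, 3)).astype(np.float32)

# Default: only the last step's output -> shape (2, 4)
last_only = tf.keras.layers.SimpleRNN(4)(x)
print(last_only.shape)

# Full sequence: one output per time step -> shape (2, 5, 4)
full_seq = tf.keras.layers.SimpleRNN(4, return_sequences=True)(x)
print(full_seq.shape)
```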
SimpleRNN(32)
SimpleRNN(64, activation='relu', return_sequences=True)
SimpleRNN(10, go_backwards=True)
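To see what go_backwards=True does, the sketch below (with made-up input sizes) copies the weights of a forward layer into a backward one and checks that go_backwards=True gives the same result as feeding the time-reversed sequence to the forward layer:

```python
import numpy as np
import tensorflow as tf

x = np.random.random((1, 5, 3)).astype(np.float32)

fwd = tf.keras.layers.SimpleRNN(4)
bwd = tf.keras.layers.SimpleRNN(4, go_backwards=True)

# Build both layers, then copy the weights so they are identical
fwd.build(x.shape)
bwd.build(x.shape)
bwd.set_weights(fwd.get_weights())

# go_backwards=True processes the time steps in reverse order, so its
# final output matches running the forward layer on the reversed input
out_bwd = bwd(x)
out_rev = fwd(x[:, ::-1, :])
print(np.allclose(out_bwd.numpy(), out_rev.numpy()))  # True
```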
This code creates random sequence data and passes it through a SimpleRNN layer with 4 units. It prints the input and output shapes and the actual output values.
import numpy as np
import tensorflow as tf

# Create sample data: batch of 2 sequences, each with 5 time steps and 3 features
x = np.random.random((2, 5, 3)).astype(np.float32)

# Build model with SimpleRNN layer
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(4, activation='tanh', return_sequences=False,
                              input_shape=(5, 3))
])

# Compile model
model.compile(optimizer='adam', loss='mse')

# Run a forward pass to get output
output = model(x)
print('Input shape:', x.shape)
print('Output shape:', output.shape)
print('Output values:', output.numpy())
SimpleRNN is good for learning basic sequence patterns, but it can struggle with long sequences: its gradients shrink (or explode) as they are propagated back through many time steps, so it tends to forget information from early in the sequence.
For longer sequences, consider using LSTM or GRU layers which remember better.
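LSTM and GRU layers are drop-in replacements for SimpleRNN: they take the same (batch, time_steps, features) input and produce the same (batch, units) output, so switching is a one-line change (the sizes below are illustrative):

```python
import numpy as np
import tensorflow as tf

x = np.random.random((2, 5, 3)).astype(np.float32)

# Same interface as SimpleRNN, but with gating to preserve
# information over longer sequences
lstm_out = tf.keras.layers.LSTM(4)(x)
gru_out = tf.keras.layers.GRU(4)(x)
print(lstm_out.shape, gru_out.shape)  # both (2, 4)
```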
Always set input_shape in the first layer to tell the model the shape of your data.
The SimpleRNN layer processes sequences step by step, carrying past information forward in a hidden state.
It outputs either the last step or the full sequence depending on return_sequences.
Good for simple sequence tasks but limited for long-term memory.