
Text generation with RNN in TensorFlow

Introduction

Text generation with an RNN (recurrent neural network) teaches a computer to write text by learning patterns from example sentences. The trained model can then produce new sentences that sound like the original text. Typical uses include:

To write simple stories or poems automatically.
To autocomplete sentences while typing messages.
To generate chatbot replies that sound natural.
To create new song lyrics based on existing ones.
To help with language learning by generating practice sentences.
Syntax
TensorFlow
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim, batch_input_shape=[batch_size, None]),
    tf.keras.layers.SimpleRNN(rnn_units, return_sequences=True, stateful=True, recurrent_initializer='glorot_uniform'),
    tf.keras.layers.Dense(vocab_size)
])

The model uses an Embedding layer to turn word indices into dense vectors it can work with.

The SimpleRNN layer processes the sequence one word at a time, carrying a hidden state that remembers the words seen so far.

The final Dense layer produces a score (logit) for every word in the vocabulary, which is used to predict the next word.
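As a quick illustration of the first step, the Embedding layer maps each integer word index to a dense vector. This is a minimal sketch with made-up sizes, not part of the model above:

```python
import tensorflow as tf

# An Embedding layer mapping a 10-word vocabulary to 4-dimensional vectors
embedding = tf.keras.layers.Embedding(input_dim=10, output_dim=4)

# A batch holding one sequence of three word indices
word_ids = tf.constant([[2, 7, 1]])
vectors = embedding(word_ids)
print(vectors.shape)  # (1, 3, 4): batch, sequence length, embedding dim
```

Each index is looked up in a trainable table, so words that play similar roles can end up with similar vectors after training.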

Examples
This example creates a model for a vocabulary of 1000 words, embedding size 64, and 128 RNN units.
TensorFlow
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(1000, 64),
    tf.keras.layers.SimpleRNN(128, return_sequences=True),
    tf.keras.layers.Dense(1000)
])
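Feeding a batch of integer sequences through this model yields one logit vector per word position. A small shape check (the batch of random indices is illustrative):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(1000, 64),
    tf.keras.layers.SimpleRNN(128, return_sequences=True),
    tf.keras.layers.Dense(1000)
])

# Two sequences of five word indices each
batch = tf.random.uniform((2, 5), maxval=1000, dtype=tf.int32)
logits = model(batch)
print(logits.shape)  # (2, 5, 1000): a score for every vocabulary word at each position
```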
This example uses an LSTM layer instead of SimpleRNN for better long-range memory, with a larger vocabulary and embedding size.
TensorFlow
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(5000, 256),
    tf.keras.layers.LSTM(512, return_sequences=True),
    tf.keras.layers.Dense(5000)
])
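A GRU layer is a lighter alternative to LSTM with fewer parameters; swapping it in is a one-line change. A sketch of the same architecture with GRU:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(5000, 256),
    tf.keras.layers.GRU(512, return_sequences=True),  # GRU: fewer parameters than LSTM
    tf.keras.layers.Dense(5000)
])

# Build the model by passing a dummy batch of four word indices through it
out = model(tf.zeros((1, 4), dtype=tf.int32))
print(out.shape)  # (1, 4, 5000)
```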
Sample Model

This program trains a simple RNN on a tiny text sample to learn word sequences. Then it generates new words starting from a given phrase.

TensorFlow
import tensorflow as tf
import numpy as np

# Sample text data
text = "hello world hello tensorflow hello world"

# Create vocabulary
vocab = sorted(set(text.split()))
vocab_size = len(vocab)

# Map words to integers
word2idx = {u:i for i, u in enumerate(vocab)}
idx2word = np.array(vocab)

# Convert text to integers
text_as_int = np.array([word2idx[w] for w in text.split()])

# Prepare training sequences
seq_length = 3
examples_per_epoch = len(text_as_int) - seq_length

inputs = []
labels = []
for i in range(examples_per_epoch):
    inputs.append(text_as_int[i:i+seq_length])
    labels.append(text_as_int[i+1:i+seq_length+1])

inputs = np.array(inputs)
labels = np.array(labels)

# Build the model
embedding_dim = 8
rnn_units = 16

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim),
    tf.keras.layers.SimpleRNN(rnn_units, return_sequences=True),
    tf.keras.layers.Dense(vocab_size)
])

model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Train the model
model.fit(inputs, labels, epochs=100, verbose=0)

# Generate text function
def generate_text(model, start_string, num_generate=5):
    input_eval = [word2idx[s] for s in start_string.split()]
    input_eval = tf.expand_dims(input_eval, 0)  # batch size 1

    text_generated = []

    for _ in range(num_generate):
        predictions = model(input_eval)
        predictions = tf.squeeze(predictions, 0)

        predicted_id = tf.random.categorical(predictions[-1:], num_samples=1)[-1,0].numpy()

        input_eval = tf.concat([input_eval[:,1:], [[predicted_id]]], axis=1)

        text_generated.append(idx2word[predicted_id])

    return ' '.join(text_generated)

# Generate new words
start = "hello world hello"
generated = generate_text(model, start)
print(f"Starting text: '{start}'")
print(f"Generated text: '{generated}'")
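A common extension of the sampling step above is to divide the logits by a "temperature" before calling tf.random.categorical: lower temperatures make the output more predictable, higher ones more varied. This helper is a sketch and is not part of the program above:

```python
import tensorflow as tf

def sample_with_temperature(logits, temperature=1.0):
    """Sample one word id from a [1, vocab_size] row of logits, scaled by temperature."""
    scaled = logits / temperature
    return tf.random.categorical(scaled, num_samples=1)[-1, 0].numpy()

# With a very low temperature, sampling almost always picks the highest-scoring word
logits = tf.constant([[0.1, 5.0, 0.2]])
print(sample_with_temperature(logits, temperature=0.01))
```

In generate_text, this would replace the tf.random.categorical call, with temperature exposed as an extra argument.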
Important Notes

Training on very small text limits quality; bigger datasets improve results.

SimpleRNN is easy to understand but LSTM or GRU layers often give better text generation.

Setting return_sequences=True lets the model predict each word in the sequence.
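The effect of return_sequences shows up directly in the output shapes. An illustrative check on random input:

```python
import tensorflow as tf

x = tf.random.normal((1, 3, 8))  # batch of 1, sequence of 3 steps, 8 features per step

seq = tf.keras.layers.SimpleRNN(16, return_sequences=True)(x)
last = tf.keras.layers.SimpleRNN(16, return_sequences=False)(x)

print(seq.shape)   # (1, 3, 16): one output per time step
print(last.shape)  # (1, 16): only the final step's output
```

With return_sequences=False the model could only predict one next word per sequence, so the per-position training labels used above would not fit.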

Summary

RNNs can learn patterns in text to generate new sentences.

Embedding layers convert words into numbers the model can use.

Training requires sequences of words and their next words as labels.
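The input/label pairing mentioned above is just a one-step shift of the same sequence; each position's label is the word that follows it. A minimal sketch with made-up word indices:

```python
import numpy as np

text_as_int = np.array([0, 2, 1, 3, 0])  # e.g. indices for five words
seq_length = 3

inputs = text_as_int[0:seq_length]      # first three words
labels = text_as_int[1:seq_length + 1]  # the same window shifted by one: each word's "next word"
print(inputs.tolist(), labels.tolist())  # [0, 2, 1] [2, 1, 3]
```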