TensorFlow · ~20 mins

Text generation with RNN in TensorFlow - Practice Problems & Coding Challenges

Challenge - 5 Problems
Model Choice (intermediate)
Choosing the right RNN layer for text generation

You want to build a text generation model using TensorFlow. Which RNN layer is best suited to capture long-range dependencies in text?

A. tf.keras.layers.LSTM
B. tf.keras.layers.SimpleRNN
C. tf.keras.layers.Dense
D. tf.keras.layers.Conv1D
💡 Hint

Think about which layer can remember information over many time steps.

Predict Output (intermediate)
Output shape of RNN layer in text generation model

Consider this TensorFlow code snippet for a text generation model:

import tensorflow as tf

model = tf.keras.Sequential([
  tf.keras.layers.Embedding(input_dim=1000, output_dim=64),
  tf.keras.layers.LSTM(128, return_sequences=True),
  tf.keras.layers.Dense(1000)
])

output_shape = model.output_shape

What is the value of output_shape?

A. (None, None, 128)
B. (None, 128)
C. (None, 64)
D. (None, None, 1000)
💡 Hint

Check the last Dense layer's output units and the return_sequences=True setting.
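The hint can be worked through by tracing the shape layer by layer, no TensorFlow required. A minimal pure-Python sketch of that trace, using `None` the way Keras reports unknown dimensions (batch size and sequence length):

```python
# Trace the model's output shape by hand.
# None marks dimensions unknown until runtime (batch size, timesteps).
shape = (None, None)         # Embedding input: (batch, timesteps) of token ids
shape = shape + (64,)        # Embedding(output_dim=64) appends the embedding axis
shape = shape[:-1] + (128,)  # LSTM(128, return_sequences=True) keeps the time
                             # axis and replaces the feature axis with 128 units
shape = shape[:-1] + (1000,) # Dense(1000) maps the last axis to 1000 logits
print(shape)                 # (None, None, 1000)
```

The key observation is that `return_sequences=True` preserves the time axis, so the Dense layer is applied at every timestep.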

Hyperparameter (advanced)
Effect of sequence length on training an RNN for text generation

When training an RNN for text generation, what is the main effect of increasing the input sequence length?

A. It reduces the model's ability to learn long-term dependencies.
B. It increases the computational cost and memory usage during training.
C. It decreases the vocabulary size the model can handle.
D. It makes the model ignore the order of words in the sequence.
💡 Hint

Think about how longer sequences affect the amount of data processed per training step.
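The hint can be made concrete with a back-of-the-envelope cost model, assuming (as a simplification) that the RNN unrolls once per token and that every timestep's hidden state must be stored for backpropagation through time:

```python
# Back-of-the-envelope cost model for one RNN training step.
# Assumption (illustrative, not a real profiler): one recurrent cell
# evaluation per token, and every hidden state kept for BPTT.
def steps_per_example(seq_len):
    return seq_len  # forward-pass work grows linearly with sequence length

def stored_activations(seq_len, hidden_size):
    return seq_len * hidden_size  # memory for BPTT also grows linearly

print(steps_per_example(200) / steps_per_example(100))              # 2.0
print(stored_activations(200, 128) / stored_activations(100, 128))  # 2.0
```

Doubling the sequence length roughly doubles both the compute and the activation memory per training step, which is the effect the question is after.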

Metrics (advanced)
Choosing the right metric for text generation model evaluation

Which metric is most appropriate to evaluate the quality of a text generation RNN model during training?

A. F1 Score
B. Mean Squared Error (MSE)
C. Perplexity
D. Accuracy
💡 Hint

Consider a metric that measures how well the model predicts the next word probabilities.
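Perplexity is the exponential of the average per-token cross-entropy (negative log-likelihood). A minimal standard-library sketch, with made-up probabilities purely for illustration:

```python
import math

# Model-assigned probabilities for the correct next token at each position
# (illustrative values, not from a real model).
p_correct = [0.5, 0.25, 0.125, 0.5]

# Average cross-entropy (negative log-likelihood) per token, in nats.
avg_nll = -sum(math.log(p) for p in p_correct) / len(p_correct)

# Perplexity: exp of the average NLL. Lower is better; a perplexity of k
# means the model is, on average, as uncertain as a uniform choice over
# k next tokens.
perplexity = math.exp(avg_nll)
print(round(perplexity, 4))
```

This "effective branching factor" interpretation is what makes perplexity the natural metric for next-token prediction, unlike accuracy or MSE.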

🔧 Debug (expert)
Debugging vanishing gradient in RNN text generation model

You trained an RNN text generation model, but it fails to learn long-term dependencies. Which of the following is the most likely cause?

A. Using SimpleRNN layers without gating mechanisms, causing vanishing gradients
B. Setting the learning rate too high, causing unstable training
C. Using too large a batch size, causing overfitting
D. Using LSTM layers instead of SimpleRNN layers
💡 Hint

Think about which RNN type struggles with remembering information over many steps.
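The vanishing-gradient intuition behind the hint can be shown with plain arithmetic: backpropagation through an ungated RNN multiplies the gradient by a (typically < 1) factor at every timestep, so the signal from distant steps decays geometrically. A toy sketch, assuming a constant per-step factor of 0.5 (real Jacobians vary, but the mechanism is the same):

```python
# Toy illustration of the vanishing gradient in an ungated RNN.
# Assumption: each backprop step through time scales the gradient by a
# constant factor < 1 (here 0.5).
factor = 0.5
gradient = 1.0
for _ in range(50):   # backpropagate through 50 timesteps
    gradient *= factor

print(gradient)       # roughly 8.9e-16: the signal from 50 steps back is gone
```

LSTM and GRU gating mechanisms let the cell state carry gradients with factors near 1 across many steps, which is why gated layers learn long-range dependencies where a plain SimpleRNN does not.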