TensorFlow · ~10 mins

Optimizers (SGD, Adam, RMSprop) in TensorFlow - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (Easy)

Complete the code to create a Stochastic Gradient Descent optimizer with a learning rate of 0.01.

TensorFlow
optimizer = tf.keras.optimizers.[1](learning_rate=0.01)
A. SGD
B. Adam
C. RMSprop
D. Adagrad
Common Mistakes
Using Adam or RMSprop instead of SGD for this task.
Misspelling the optimizer name.
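For reference, a minimal sketch of the completed line applied to one training step, assuming TensorFlow 2.x; the toy quadratic loss and variable names are illustrative, not part of the task.

```python
import tensorflow as tf

# SGD is the intended answer: plain gradient descent, no adaptive scaling.
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

# Illustrative single update: minimize (w - 3)^2 for one step.
w = tf.Variable(5.0)
with tf.GradientTape() as tape:
    loss = (w - 3.0) ** 2
grads = tape.gradient(loss, [w])
optimizer.apply_gradients(zip(grads, [w]))
# One SGD step: w = 5.0 - 0.01 * 2 * (5.0 - 3.0) = 4.96
```

The update is exactly `learning_rate * gradient`, which is what distinguishes SGD from the adaptive choices in the other options.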
Task 2: Fill in the blank (Medium)

Complete the code to create an Adam optimizer with a learning rate of 0.001.

TensorFlow
optimizer = tf.keras.optimizers.[1](learning_rate=0.001)
A. SGD
B. RMSprop
C. Adam
D. Adadelta
Common Mistakes
Choosing SGD or RMSprop instead of Adam.
Confusing learning rate parameter names.
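Likewise, a sketch of the completed line for this task with a one-step sanity check, assuming TensorFlow 2.x; the toy loss is illustrative.

```python
import tensorflow as tf

# Adam is the intended answer; the keyword is learning_rate, not lr.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

w = tf.Variable(5.0)
with tf.GradientTape() as tape:
    loss = (w - 3.0) ** 2
grads = tape.gradient(loss, [w])
optimizer.apply_gradients(zip(grads, [w]))
# Adam's first step has magnitude close to the learning rate itself,
# so w moves only slightly below 5.0.
```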
Task 3: Fill in the blank (Hard)

Complete the code to create an RMSprop optimizer with a learning rate of 0.0005.

TensorFlow
optimizer = tf.keras.optimizers.[1](learning_rate=0.0005)
A. SGD
B. Adam
C. Adagrad
D. RMSprop
Common Mistakes
Using Adam or SGD instead of RMSprop.
Incorrect capitalization of the optimizer name.
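Note the class name's exact capitalization: `tf.keras.optimizers.RMSprop`, with a lowercase "prop". A minimal sketch of the completed line, assuming TensorFlow 2.x:

```python
import tensorflow as tf

# RMSprop is the intended answer. "RMSProp" or "rmsprop" as a class
# name would raise an AttributeError here.
optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.0005)
config = optimizer.get_config()
```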
Task 4: Fill in the blanks (Hard)

Fill both blanks to create an Adam optimizer with beta_1 set to 0.9 and beta_2 set to 0.999.

TensorFlow
optimizer = tf.keras.optimizers.Adam(beta_1=[1], beta_2=[2])
A. 0.9
B. 0.8
C. 0.999
D. 0.99
Common Mistakes
Swapping beta_1 and beta_2 values.
Using values far from the typical defaults.
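The values asked for here (beta_1=0.9, beta_2=0.999) are Adam's documented defaults. A sketch of the completed line, assuming TensorFlow 2.x:

```python
import tensorflow as tf

# beta_1 decays the first-moment (momentum) average; beta_2 decays the
# second-moment (squared-gradient) average. Order matters: swapping
# them gives a valid but badly behaved optimizer.
optimizer = tf.keras.optimizers.Adam(beta_1=0.9, beta_2=0.999)
config = optimizer.get_config()
```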
Task 5: Fill in the blanks (Hard)

Fill all three blanks to create an RMSprop optimizer with learning rate 0.001, rho 0.9, and momentum 0.0.

TensorFlow
optimizer = tf.keras.optimizers.RMSprop(learning_rate=[1], rho=[2], momentum=[3])
A. 0.01
B. 0.001
C. 0.9
D. 0.0
Common Mistakes
Using 0.01 instead of 0.001 for learning rate.
Setting momentum to a non-zero value when not needed.
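A sketch of the completed line with all three arguments, assuming TensorFlow 2.x; the comments explain what each parameter controls.

```python
import tensorflow as tf

# rho is the decay rate of the running average of squared gradients;
# momentum=0.0 disables the optional momentum term (the default).
optimizer = tf.keras.optimizers.RMSprop(
    learning_rate=0.001,
    rho=0.9,
    momentum=0.0,
)
config = optimizer.get_config()
```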