Recall & Review
beginner
What is the main role of an optimizer in machine learning?
An optimizer helps the model learn by adjusting its internal settings (weights) to reduce errors and improve predictions.
beginner
How does the SGD optimizer update model weights?
SGD (Stochastic Gradient Descent) updates each weight by taking a small step in the direction opposite its gradient, scaled by a fixed learning rate.
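The update rule on this card can be sketched in a few lines of NumPy. This is an illustrative sketch, not any framework's actual API; the weights, gradients, and learning rate of 0.1 are arbitrary example values.

```python
import numpy as np

def sgd_step(weights, grads, lr=0.01):
    """One SGD update: step each weight against its gradient, scaled by lr."""
    return weights - lr * grads

# Toy example: two weights, their gradients, and a learning rate of 0.1.
w = np.array([1.0, -2.0])
g = np.array([0.5, -0.5])
w_new = sgd_step(w, g, lr=0.1)  # -> array([ 0.95, -1.95])
```

Note that the same `lr` is applied to every weight, which is exactly what the adaptive optimizers on the later cards change.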
intermediate
What makes the Adam optimizer different from SGD?
Adam combines ideas from momentum and RMSprop, adapting the learning rate for each weight individually, which typically makes learning faster and more stable.
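A minimal sketch of one Adam step, assuming the commonly used defaults (beta1 = 0.9, beta2 = 0.999); the function and variable names are illustrative, not a real library's API:

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: momentum (m) plus RMSprop-style scaling (v), bias-corrected."""
    m = beta1 * m + (1 - beta1) * g        # first moment: running mean of gradients
    v = beta2 * v + (1 - beta2) * g ** 2   # second moment: running mean of squared gradients
    m_hat = m / (1 - beta1 ** t)           # bias correction for early steps (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-weight adaptive step
    return w, m, v

# One step from scratch: m and v start at zero, t = 1.
w, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
w, m, v = adam_step(w, np.array([0.5]), m, v, t=1)
```

Because the step divides by `sqrt(v_hat)`, each weight gets its own effective learning rate, which is the key difference from SGD's single fixed rate.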
intermediate
Why is RMSprop useful for training neural networks?
RMSprop adjusts the learning rate for each weight based on recent gradients, helping the model learn well even when gradients vary a lot.
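The RMSprop idea on this card, sketched in NumPy under an assumed decay of 0.9 (a common choice); the names here are illustrative, not a library's API:

```python
import numpy as np

def rmsprop_step(w, g, cache, lr=0.01, decay=0.9, eps=1e-8):
    """One RMSprop update: scale the step by a moving average of squared gradients."""
    cache = decay * cache + (1 - decay) * g ** 2   # moving average of g^2
    w = w - lr * g / (np.sqrt(cache) + eps)        # large recent gradients -> smaller step
    return w, cache

# First step: the cache starts at zero and accumulates squared gradients.
w, cache = np.array([1.0]), np.zeros(1)
w, cache = rmsprop_step(w, np.array([0.5]), cache)
```

Dividing by `sqrt(cache)` is what keeps the effective step size stable even when gradient magnitudes vary a lot between weights or over time.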
beginner
Which optimizer would you choose for a simple linear model and why?
SGD is often chosen for simple models because it is straightforward and effective when the learning rate is well tuned.
Which optimizer adapts the learning rate for each parameter individually?
Adam adjusts the learning rate for each parameter based on past gradients, unlike SGD, which uses one fixed rate for all parameters.
What does SGD stand for?
SGD means Stochastic Gradient Descent, which updates weights using small random batches.
Which optimizer uses a moving average of squared gradients to adjust learning rates?
RMSprop uses a moving average of squared gradients to adapt learning rates for each parameter.
Why might Adam be preferred over SGD?
Adam adapts learning rates for each parameter and often converges faster than SGD.
Which optimizer is best described as 'simple and effective with a fixed learning rate'?
SGD uses a fixed learning rate and is simple and effective for many problems.
Explain how the Adam optimizer works and why it might be better than SGD for some problems.
Think about how Adam changes learning rates for each weight and uses past gradients.
Describe the differences between SGD, RMSprop, and Adam optimizers in simple terms.
Focus on how each optimizer changes learning rates and uses past information.
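To make the SGD/RMSprop/Adam comparison asked for above concrete, here is a hypothetical side-by-side run on a toy objective f(w) = w², minimized at w = 0. All hyperparameters and function names are illustrative choices for this sketch, not recommendations or a real API:

```python
import numpy as np

# Toy objective: f(w) = w**2, whose gradient is 2w and whose minimum is w = 0.
def grad(w):
    return 2.0 * w

def run(step_fn, w0=5.0, steps=200):
    """Apply one optimizer's step function repeatedly, threading its state."""
    w, state = w0, {}
    for t in range(1, steps + 1):
        w, state = step_fn(w, grad(w), state, t)
    return w

def sgd_step(w, g, state, t, lr=0.1):
    # One fixed learning rate, no per-parameter adaptation, no state.
    return w - lr * g, state

def rmsprop_step(w, g, state, t, lr=0.1, decay=0.9, eps=1e-8):
    # State: a moving average of squared gradients that scales the step.
    cache = decay * state.get("cache", 0.0) + (1 - decay) * g ** 2
    return w - lr * g / (np.sqrt(cache) + eps), {"cache": cache}

def adam_step(w, g, state, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # State: momentum (m) plus RMSprop-style scaling (v), with bias correction.
    m = b1 * state.get("m", 0.0) + (1 - b1) * g
    v = b2 * state.get("v", 0.0) + (1 - b2) * g ** 2
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), {"m": m, "v": v}

for name, fn in [("SGD", sgd_step), ("RMSprop", rmsprop_step), ("Adam", adam_step)]:
    print(f"{name}: final w = {run(fn):.4f}")
```

All three reach the neighborhood of the minimum here; the differences the cards describe show up in what state each one carries: SGD carries none, RMSprop tracks squared gradients, and Adam tracks both a gradient average and squared gradients.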