TensorFlow · ~5 mins

Why regularization prevents overfitting in TensorFlow - Quick Recap

Recall & Review
beginner
What is overfitting in machine learning?
Overfitting happens when a model learns the training data too well, including noise and details that don't apply to new data. This makes the model perform poorly on unseen data.
beginner
What does regularization do in a machine learning model?
Regularization adds a penalty to the model's complexity, encouraging it to keep weights small and simple. This helps the model focus on the main patterns and avoid fitting noise.
intermediate
How does L2 regularization (weight decay) work?
L2 regularization adds the sum of squared weights, scaled by a strength coefficient, to the loss function. This pushes the model to keep weights small, which reduces overfitting by making the model simpler.
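The penalty described above can be sketched without any framework. A minimal example, where the strength value 0.01 and the weight values are hypothetical choices for illustration:

```python
# Minimal sketch of an L2 (weight decay) penalty, framework-free.
# In TensorFlow this role is played by tf.keras.regularizers.l2(lam).

def l2_penalty(weights, lam):
    """Return lam * sum of squared weights."""
    return lam * sum(w * w for w in weights)

def regularized_loss(data_loss, weights, lam):
    """Total training loss = task loss + L2 penalty on the weights."""
    return data_loss + l2_penalty(weights, lam)

weights = [3.0, -2.0, 0.5]  # example weights
# The penalty grows with the squared magnitude of each weight,
# so gradient descent on the total loss is pulled toward smaller weights.
total = regularized_loss(1.0, weights, lam=0.01)
```

Because the penalty is added to the data loss, minimizing the total loss trades a little training accuracy for smaller weights.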
intermediate
Why does keeping model weights small help prevent overfitting?
Small weights mean the model's output changes less in response to small input changes, making it less sensitive to noise and more focused on general patterns.
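This sensitivity argument can be seen with a single linear unit y = w * x: the output change caused by an input perturbation eps is exactly w * eps. A tiny sketch (the weight and noise values are hypothetical):

```python
# Why small weights reduce sensitivity to input noise.
# For y = w * x, perturbing x by eps changes y by exactly w * eps.

def output_change(w, eps):
    """Change in y = w * x when the input x is perturbed by eps."""
    return w * eps

eps = 0.1                         # small input noise
big = output_change(10.0, eps)    # large weight -> large output swing
small = output_change(0.1, eps)   # small weight -> barely reacts
```

The same logic extends to deep networks: small weights keep the network's response to noisy perturbations small.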
beginner
Name two common types of regularization used in TensorFlow.
Two common types are L1 regularization (which encourages sparsity) and L2 regularization (which encourages small weights). Both help reduce overfitting.
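In TensorFlow, both penalties are attached to a layer via its regularizer arguments. A short sketch (requires the tensorflow package; the 0.01 strengths are example values):

```python
# Attaching L1 and L2 regularizers in TensorFlow/Keras.
import tensorflow as tf

l1 = tf.keras.regularizers.l1(0.01)  # penalty: 0.01 * sum(|w|)  -> sparsity
l2 = tf.keras.regularizers.l2(0.01)  # penalty: 0.01 * sum(w^2)  -> small weights

# Keras adds the layer's penalty to the training loss automatically.
layer = tf.keras.layers.Dense(
    64,
    activation="relu",
    kernel_regularizer=l2,  # could equally be l1, or l1_l2 to combine both
)

# A regularizer is callable: it maps a weight tensor to its scalar penalty.
w = tf.constant([[1.0, -2.0], [0.0, 3.0]])
l1_pen = float(l1(w))  # 0.01 * (1 + 2 + 0 + 3)
l2_pen = float(l2(w))  # 0.01 * (1 + 4 + 0 + 9)
```

Calling the regularizer directly, as above, is a handy way to check what penalty a given weight matrix would contribute.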
What problem does regularization mainly help to solve?
A. Model deployment
B. Underfitting
C. Data collection
D. Overfitting
Which of these adds a penalty on weight magnitudes to the loss function?
A. L2 regularization
B. Batch normalization
C. Dropout
D. Data augmentation
What effect does L2 regularization have on model weights?
A. Makes weights larger
B. Removes weights
C. Makes weights smaller
D. Duplicates weights
Why is a simpler model less likely to overfit?
A. It ignores all data
B. It focuses on main patterns, not noise
C. It memorizes training data
D. It uses more layers
Which regularization method encourages sparsity in weights?
A. L1 regularization
B. L2 regularization
C. Dropout
D. Early stopping
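The sparsity question above has a mechanical explanation worth sketching: one gradient step on an L1 penalty moves every weight a fixed amount toward zero, so small weights land exactly at zero, while L2's pull shrinks proportionally and never quite reaches zero. A minimal sketch with hypothetical strength and learning-rate values:

```python
# Why L1 produces exact zeros (sparsity) while L2 only shrinks weights.

def l1_step(w, lam, lr):
    """One (sub)gradient step on lam * |w|; clamp at zero if it would cross."""
    step = lr * lam
    if abs(w) <= step:
        return 0.0  # small weights are snapped to exactly zero
    return w - step if w > 0 else w + step

def l2_step(w, lam, lr):
    """One gradient step on lam * w^2: a proportional shrink toward zero."""
    return w - lr * 2 * lam * w

w = 0.05
after_l1 = l1_step(w, lam=1.0, lr=0.1)  # exact zero: sparsity
after_l2 = l2_step(w, lam=1.0, lr=0.1)  # smaller, but still nonzero
```

This is why L1-regularized models can be read as feature selectors: weights that contribute little are driven all the way to zero.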
Explain in your own words why regularization helps prevent overfitting in machine learning models.
Think about how penalizing large weights changes the model's behavior.
Describe the difference between L1 and L2 regularization and how each affects the model.
Consider how each penalty changes the weights during training.