Recall & Review
beginner
What is the main purpose of regularization in machine learning?
Regularization helps prevent overfitting by adding a penalty on model complexity to the loss function, encouraging simpler models that generalize better to unseen data.
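A quick illustrative sketch of this idea (the synthetic data, polynomial degree, and scikit-learn models are assumptions for the demo, not part of the card): an unregularized fit on flexible features chases the noise, while a Ridge penalty keeps the coefficients small.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.3, size=30)

# High-degree polynomial features make the unregularized model flexible
# enough to overfit the noise in only 30 samples.
X_poly = PolynomialFeatures(degree=12, include_bias=False).fit_transform(X)

ols = LinearRegression().fit(X_poly, y)
ridge = Ridge(alpha=1.0).fit(X_poly, y)

# The penalty pulls the weight vector toward zero, so the regularized
# coefficients are far smaller in norm than the unregularized ones.
print(np.linalg.norm(ols.coef_), np.linalg.norm(ridge.coef_))
```

The smaller coefficient norm is the "simpler model" the card refers to: less extreme weights mean a smoother function that tends to generalize better.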
beginner
What is Ridge regularization also known as?
Ridge regularization is also called L2 regularization because it adds the sum of the squared coefficients as a penalty term to the loss function.
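For concreteness, the L2-penalized objective has a closed-form solution that can be checked against scikit-learn (the data here is a made-up example): Ridge minimizes ||y − Xw||² + alpha·||w||², whose minimizer is w = (XᵀX + alpha·I)⁻¹Xᵀy.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, size=50)

alpha = 0.7
# Closed-form ridge solution: w = (X^T X + alpha*I)^-1 X^T y,
# i.e. least squares plus the squared-coefficient (L2) penalty.
w_closed = np.linalg.solve(X.T @ X + alpha * np.eye(3), X.T @ y)

# fit_intercept=False so sklearn solves exactly the same objective.
ridge = Ridge(alpha=alpha, fit_intercept=False).fit(X, y)
print(np.allclose(ridge.coef_, w_closed, atol=1e-6))  # → True
```

The added alpha·I term is also why Ridge is numerically well behaved: it keeps the matrix being inverted well conditioned even when features are correlated.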
intermediate
How does Lasso regularization differ from Ridge regularization?
Lasso regularization (L1) adds the sum of the absolute values of the coefficients as a penalty, which can shrink some coefficients exactly to zero, effectively performing feature selection.
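A small sketch of the exact-zero behavior (the data and alpha value are illustrative assumptions): when only a couple of features actually drive the target, the L1 penalty zeroes out the rest.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
# Only the first two features actually drive the target;
# the other three are pure noise.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 0.1, size=100)

lasso = Lasso(alpha=0.5).fit(X, y)
# The L1 penalty drives the irrelevant coefficients exactly to 0.0,
# not merely close to it, while the informative ones stay nonzero.
print(lasso.coef_)
```

Contrast this with Ridge, whose L2 penalty shrinks irrelevant coefficients toward zero but (generically) never makes them exactly zero.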
intermediate
What effect does increasing the regularization parameter (alpha) have on a Ridge or Lasso model?
Increasing alpha strengthens the penalty on the coefficients, shrinking them further toward zero; this reduces model complexity and variance, but can increase bias and lead to underfitting if alpha is too large.
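The shrinkage effect is easy to see numerically (synthetic data and the specific alpha values are assumptions for illustration): as alpha grows, the fitted Ridge coefficient vector gets steadily smaller.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 4))
y = X @ np.array([1.5, -2.0, 0.8, 0.0]) + rng.normal(0, 0.2, size=60)

# As alpha grows the penalty term dominates the data-fit term,
# so the coefficient norm shrinks toward zero.
norms = [np.linalg.norm(Ridge(alpha=a).fit(X, y).coef_)
         for a in (0.01, 1.0, 100.0)]
print(norms)
```

The printed norms decrease as alpha increases, which is the bias side of the trade-off: stronger shrinkage means the model can no longer recover the true coefficients exactly.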
intermediate
Why might you choose Lasso over Ridge regularization?
Choose Lasso when you want automatic feature selection because it can zero out less important features, making the model simpler and easier to interpret.
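A side-by-side sketch of this choice (the single-useful-feature setup and alpha values are illustrative assumptions): on the same data, Ridge keeps every coefficient nonzero while Lasso keeps only the useful one.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 6))
y = 2.0 * X[:, 0] + rng.normal(0, 0.1, size=100)  # one useful feature

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.3).fit(X, y)

# Ridge shrinks the five irrelevant coefficients but leaves them nonzero;
# Lasso sets them exactly to zero, selecting the single useful feature.
print(np.count_nonzero(ridge.coef_), np.count_nonzero(lasso.coef_))
```

If interpretability or a sparse model matters, the Lasso fit is the natural pick; if all features carry some signal (or are strongly correlated), Ridge's smoother shrinkage is usually preferred.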
What type of penalty does Ridge regularization add to the loss function?
Which regularization method can shrink some coefficients exactly to zero?
What happens if you set the regularization parameter alpha to zero?
Which regularization method is better for reducing multicollinearity in features?
What is a common effect of too much regularization?
Explain in your own words how Ridge and Lasso regularization help improve a machine learning model.
Describe a situation where you would prefer Lasso regularization over Ridge.