ML Python programming · ~20 mins

Bias-variance tradeoff in ML Python - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
Understanding Bias and Variance

Which statement best describes the relationship between bias and variance in a machine learning model?

A. Low bias always leads to low variance in a model.
B. High bias means the model fits the training data perfectly, while high variance means the model cannot fit the training data well.
C. Bias and variance both measure how well the model performs on the training data only.
D. High bias means the model is too simple and underfits, while high variance means the model is too complex and overfits.
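The underfitting/overfitting behavior this question probes can be demonstrated numerically with plain NumPy. A minimal sketch, assuming an illustrative sine target, noise level, and polynomial degrees (none of these come from the problem itself): a low-degree fit leaves large error on the training data, while a high-degree fit drives training error down at the cost of flexibility.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=x.shape)

# Split into train/test halves by alternating points.
x_tr, y_tr = x[::2], y[::2]
x_te, y_te = x[1::2], y[1::2]

def poly_mse(degree):
    # Fit a polynomial of the given degree on the training half
    # and return (train MSE, test MSE).
    coeffs = np.polyfit(x_tr, y_tr, degree)
    mse = lambda xs, ys: np.mean((np.polyval(coeffs, xs) - ys) ** 2)
    return mse(x_tr, y_tr), mse(x_te, y_te)

tr1, te1 = poly_mse(1)  # too simple for a sine: large training error
tr9, te9 = poly_mse(9)  # flexible: training error shrinks sharply
print(tr1, te1)
print(tr9, te9)
```

The degree-1 model cannot reduce its training error no matter how well it is fit (bias); the degree-9 model fits the training half closely but its test error is dominated by sensitivity to the noise (variance).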
Predict Output · intermediate
Effect of Model Complexity on Error

Consider the following Python code that simulates training and test errors for models of increasing complexity. Which statement best describes the printed errors?

Python:
import numpy as np
complexity = np.array([1, 2, 3, 4, 5])
train_error = 1 / complexity
test_error = 1 / complexity + 0.1 * complexity
print(f"Train error: {train_error}")
print(f"Test error: {test_error}")
A. Train error decreases as complexity increases; test error decreases then increases.
B. Both train and test errors increase as complexity increases.
C. Train error increases as complexity increases; test error decreases as complexity increases.
D. Train error stays constant; test error decreases as complexity increases.
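You can check your prediction by rerunning the snippet and inspecting the rounded values; the `1/complexity` term models training error shrinking with capacity, and the added `0.1 * complexity` penalty models growing generalization error.

```python
import numpy as np

complexity = np.array([1, 2, 3, 4, 5])
train_error = 1 / complexity
test_error = 1 / complexity + 0.1 * complexity

# Round for readability before comparing against the options.
print(np.round(train_error, 3))
print(np.round(test_error, 3))
```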
Hyperparameter · advanced
Choosing Regularization Strength

You train a linear regression model with L2 regularization (Ridge). Increasing the regularization parameter lambda will most likely:

A. Decrease bias and increase variance, leading to more complex models.
B. Increase bias and decrease variance, leading to simpler models.
C. Increase both bias and variance, making the model unstable.
D. Have no effect on bias or variance.
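The effect of lambda can be sketched with the closed-form ridge solution w = (XᵀX + λI)⁻¹Xᵀy in plain NumPy (the synthetic data, true coefficients, and λ values below are illustrative assumptions, not from the question). Watch how the coefficient norm shrinks and the training error rises as λ grows:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, size=50)

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: w = (X^T X + lam * I)^(-1) X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

for lam in (0.0, 10.0, 1000.0):
    w = ridge_fit(X, y, lam)
    train_mse = np.mean((X @ w - y) ** 2)
    print(lam, np.linalg.norm(w), train_mse)
```

Larger λ pulls the weights toward zero, constraining the model (more bias) while making the fitted coefficients less sensitive to the particular training sample (less variance).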
Metrics · advanced
Interpreting Learning Curves

You observe the following behavior in learning curves: training error is high and stable, but test error is high and does not improve with more data. What does this indicate about the model?

A. The model has data leakage causing artificially low training error.
B. The model has high variance and is overfitting the training data.
C. The model has high bias and is underfitting the data.
D. The model is perfectly balanced between bias and variance.
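The flat, high-error learning curve described here can be reproduced with a deliberately too-simple predictor. A sketch under illustrative assumptions (sine data, a "model" that always predicts the training mean): no matter how much data the model sees, both errors stay high because its capacity, not the sample size, is the bottleneck.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, size=400)
y = np.sin(x) + rng.normal(0, 0.1, size=400)
y_te = y[300:]  # held-out test targets

for n in (25, 100, 300):
    # Extreme high-bias "model": always predict the training mean.
    pred = y[:n].mean()
    train_mse = np.mean((y[:n] - pred) ** 2)
    test_mse = np.mean((y_te - pred) ** 2)
    print(n, round(train_mse, 3), round(test_mse, 3))
```

Both MSEs hover near the variance of the target itself and barely move as n grows, which is the learning-curve signature of underfitting.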
🔧 Debug · expert
Diagnosing Model Behavior from Code

Given the following code snippet training a decision tree, what is the most likely cause of the model having high training error and high test error?

# Assumes X_train, y_train, X_test, y_test are already defined.
from sklearn.tree import DecisionTreeClassifier
model = DecisionTreeClassifier(max_depth=1)
model.fit(X_train, y_train)
train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)
print(f"Train accuracy: {train_acc}")
print(f"Test accuracy: {test_acc}")
A. The max_depth=1 is too low, causing high bias and underfitting.
B. The max_depth=1 is too high, causing high variance and overfitting.
C. The model is overfitting because max_depth is not set, so it grows fully.
D. The training data is too small, causing unstable accuracy scores.
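Why a depth-1 tree can score poorly on both splits can be sketched without scikit-learn, using a hand-rolled decision stump on XOR-style data, where no single axis-aligned split separates the classes (the dataset and the stump search below are illustrative assumptions, not the question's code):

```python
import numpy as np

rng = np.random.default_rng(3)
# XOR-style labels: class depends on the sign pattern of both features.
X = rng.uniform(-1, 1, size=(400, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)
X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

def best_stump(X, y):
    # Depth-1 "tree": try every (feature, threshold, orientation)
    # split and keep the one with the best training accuracy.
    best = (0.0, 0, 0.0, 0, 1)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for left, right in ((0, 1), (1, 0)):
                labels = np.where(X[:, j] > t, right, left)
                acc = np.mean(labels == y)
                if acc > best[0]:
                    best = (acc, j, t, left, right)
    return best

train_acc, j, t, left, right = best_stump(X_tr, y_tr)
test_acc = np.mean(np.where(X_te[:, j] > t, right, left) == y_te)
print(train_acc, test_acc)  # both stay close to chance level
```

Even the best single split cannot capture the interaction between the two features, so training accuracy itself stays near 50%: high training error plus high test error is the bias/underfitting signature, not overfitting.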