Complete the code to create a simple AdaBoost classifier using scikit-learn.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

model = AdaBoostClassifier(base_estimator=[1], n_estimators=50, random_state=42)
model.fit(X_train, y_train)
AdaBoost commonly uses a shallow decision tree with max_depth=1 as its base estimator, known as a decision stump.
Complete the code to calculate the weighted error of a weak learner in boosting.
weighted_error = sum(sample_weights * (predictions != y_true)) / [1]
The weighted error divides the weighted sum of wrong predictions by the total sum of weights.
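A small numeric check of that formula, with made-up weights and labels chosen so the result is easy to verify by hand:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1])
predictions = np.array([1, 1, 1, 0])          # mistakes at indices 1 and 3
sample_weights = np.array([0.1, 0.2, 0.3, 0.4])

# Weighted sum of wrong predictions over the total weight
weighted_error = np.sum(sample_weights * (predictions != y_true)) / np.sum(sample_weights)
print(weighted_error)  # (0.2 + 0.4) / 1.0 = 0.6
```

The boolean mask `(predictions != y_true)` multiplies as 0/1, so only the weights of misclassified samples survive the sum.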
Fix the error in updating sample weights after a boosting iteration.
sample_weights = sample_weights * np.exp([1] * (predictions != y_true))
sample_weights /= sample_weights.sum()
Weights must increase for misclassified samples, so the blank is positive alpha: the exponent alpha * (predictions != y_true) is nonzero exactly for the misclassified samples, inflating their weights before renormalization. A negative exponent would shrink those weights instead, which is the bug to fix.
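The corrected update, run on toy values so the effect is visible (one misclassified sample out of four, alpha chosen arbitrarily as 0.5):

```python
import numpy as np

y_true = np.array([1, 0, 1, 1])
predictions = np.array([1, 1, 1, 1])   # one mistake, at index 1
sample_weights = np.full(4, 0.25)
alpha = 0.5

# Positive exponent for misclassified samples boosts their weight
sample_weights = sample_weights * np.exp(alpha * (predictions != y_true))
sample_weights /= sample_weights.sum()   # renormalize to sum to 1
print(sample_weights)
```

After normalization the misclassified sample carries more weight than each correctly classified one, so the next weak learner focuses on it.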
Fill both blanks to compute the alpha (learner weight) in AdaBoost.
alpha = 0.5 * np.log((1 - [1]) / [2])
Alpha is calculated as 0.5 * log((1 - error) / error) where error is the weighted error.
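The filled-in formula as a tiny helper (the function name is just for illustration), checked at two easy points: a weighted error of 0.5 is a coin flip and yields alpha = 0, while a small error yields a large positive alpha.

```python
import numpy as np

def learner_alpha(weighted_error):
    # alpha = 0.5 * log((1 - error) / error)
    return 0.5 * np.log((1 - weighted_error) / weighted_error)

print(learner_alpha(0.1))   # accurate learner -> large positive alpha (~1.099)
print(learner_alpha(0.5))   # coin-flip learner -> alpha of 0.0
```

Learners worse than chance (error > 0.5) get negative alpha, effectively voting with their predictions inverted.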
Fill all three blanks to create a dictionary comprehension that maps each weak learner to its alpha weight if alpha is positive.
learner_weights = [1]: [2] for learner, [3] in zip(learners, alphas) if [2] > 0}
The dictionary maps each learner to its alpha weight, filtering only positive alphas.
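The completed comprehension on placeholder data (strings stand in for fitted weak learners, and the alpha values are made up):

```python
learners = ["stump_a", "stump_b", "stump_c"]   # stand-ins for fitted weak learners
alphas = [0.8, -0.1, 0.3]

# Map each learner to its alpha, keeping only positive alphas
learner_weights = {learner: alpha for learner, alpha in zip(learners, alphas) if alpha > 0}
print(learner_weights)  # {'stump_a': 0.8, 'stump_c': 0.3}
```

The `if alpha > 0` clause drops learners that are no better than chance, so `stump_b` never enters the dictionary.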