ML Python · ~20 mins

Why ensembles outperform single models in ML Python - Challenge Your Understanding

Challenge - 5 Problems
🧠 Conceptual · intermediate
Why do ensemble models usually perform better than single models?

Imagine you ask several friends for advice instead of just one. Why might their combined advice be better? Similarly, why do ensemble models often outperform a single model?

A. Because ensembles ignore the predictions of weaker models.
B. Because ensembles always use deeper neural networks than single models.
C. Because ensembles train on more data than single models.
D. Because ensembles combine multiple models, reducing errors from individual models and improving overall accuracy.
💡 Hint

Think about how combining different opinions can reduce mistakes.
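As an illustration of the hint (not part of the challenge itself), here is a toy simulation: three hypothetical classifiers, each independently correct 70% of the time, voting by majority. The 70% accuracy and the trial count are assumptions chosen just for the demo.

```python
import random

random.seed(0)

# Three independent "friends", each right 70% of the time,
# combined by majority vote over many trials.
trials = 10_000
single_correct = 0
ensemble_correct = 0
for _ in range(trials):
    votes = [random.random() < 0.7 for _ in range(3)]  # True = a correct vote
    single_correct += votes[0]                          # track one friend alone
    ensemble_correct += sum(votes) >= 2                 # majority of the three
print(f"single accuracy:        {single_correct / trials:.3f}")
print(f"majority-vote accuracy: {ensemble_correct / trials:.3f}")
```

The majority vote lands near 0.78 (the theoretical 0.7³ + 3·0.7²·0.3 ≈ 0.784), above any single voter's 0.7, because independent mistakes rarely line up.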

Metrics · intermediate
Effect of ensemble on model variance

You have 5 models each with variance 0.04 and zero covariance between them. What is the variance of the average prediction from these 5 models?

A. 0.008
B. 0.04
C. 0.2
D. 0.0008
💡 Hint

The variance of the average of n independent variables is the individual variance divided by n.
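A quick empirical check of the hint's formula (illustrative; the normal distribution and sample size are assumptions for the demo, only the variance 0.04 comes from the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
# Five independent model outputs, each with variance 0.04.
n_models, n_samples = 5, 200_000
outputs = rng.normal(0.0, np.sqrt(0.04), size=(n_models, n_samples))
avg_pred = outputs.mean(axis=0)   # average the five models per sample
print(round(avg_pred.var(), 4))   # empirically close to 0.04 / 5
```

With zero covariance between models, Var(mean) = σ²/n, so averaging uncorrelated models directly shrinks prediction variance.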

Predict Output · advanced
Output of ensemble prediction averaging

What is the output of the following Python code that averages predictions from three models?

Python
predictions = [[0.2, 0.8], [0.3, 0.7], [0.1, 0.9]]
avg_pred = [sum(x)/len(x) for x in zip(*predictions)]
print([round(p, 2) for p in avg_pred])
A. [0.3, 0.7]
B. [0.15, 0.85]
C. [0.2, 0.8]
D. [0.5, 0.5]
💡 Hint

Calculate average for each class probability across models.
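To see the mechanics without giving away the answer, here is the same pattern on different (made-up) numbers: `zip(*predictions)` groups the probabilities position by position, and each group is averaged.

```python
# Two hypothetical models' class probabilities (numbers differ from the challenge).
predictions = [[0.4, 0.6], [0.6, 0.4]]
# zip(*predictions) yields (0.4, 0.6) for class 0 and (0.6, 0.4) for class 1.
avg_pred = [sum(col) / len(col) for col in zip(*predictions)]
print(avg_pred)  # -> [0.5, 0.5]
```

Apply the same position-wise averaging to the three models in the challenge.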

Model Choice · advanced
Choosing ensemble type for reducing bias and variance

You want to reduce both bias and variance in your model predictions. Which ensemble method is best suited for this?

A. Simple averaging of identical models
B. Bagging (e.g., Random Forest)
C. Boosting (e.g., Gradient Boosting Machines)
D. Using a single deep neural network
💡 Hint

Think about which method focuses on correcting errors and improving weak learners.
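A minimal sketch of the mechanism the hint points at: sequentially fitting weak learners to the current residuals, so each stage corrects the errors of the ensemble so far. Everything here (the sine target, depth-1 "stumps", the 0.5 learning rate) is an assumption chosen for illustration, not a production implementation.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)  # target the weak learners must approximate

def fit_stump(x, residual):
    """Pick the single threshold split that best fits the residual (squared loss)."""
    best = None
    for t in np.linspace(0.05, 0.95, 19):
        left, right = residual[x < t].mean(), residual[x >= t].mean()
        err = ((residual - np.where(x < t, left, right)) ** 2).mean()
        if best is None or err < best[0]:
            best = (err, t, left, right)
    _, t, left, right = best
    return lambda xs: np.where(xs < t, left, right)

pred = np.zeros_like(y)
mse = []
for _ in range(50):
    stump = fit_stump(x, y - pred)   # each stage targets the remaining error
    pred = pred + 0.5 * stump(x)     # shrunken update (learning rate 0.5)
    mse.append(((y - pred) ** 2).mean())
print(f"MSE after  1 stage : {mse[0]:.4f}")
print(f"MSE after 50 stages: {mse[-1]:.4f}")
```

A single stump has high bias (it can only draw a step function), but the accumulated stages keep driving the error down; that sequential error-correction is what distinguishes boosting from parallel averaging schemes like bagging.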

🔧 Debug · expert
Why does this ensemble code produce wrong predictions?

Consider this code that tries to ensemble predictions by majority vote. What is the bug causing incorrect output?

Python
import numpy as np
preds = [[1,0,1],[0,1,1],[1,1,0]]
ensemble_pred = [np.argmax(np.bincount(preds)) for i in range(len(preds[0]))]
print(ensemble_pred)
A. np.bincount is called on the whole list instead of per position, causing wrong counts.
B. The list comprehension incorrectly uses i in range(len(preds[0])), causing index errors.
C. np.bincount is called on a list of predictions per position correctly; there is no bug.
D. np.argmax is used incorrectly; it should be np.argmin to get the majority vote.
💡 Hint

Check what data np.bincount receives inside the list comprehension.
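For reference after attempting the challenge, one possible fix (illustrative, assuming the intent is a per-position majority vote across the three models): feed np.bincount a 1-D list of the votes at each position, so the loop index i is actually used.

```python
import numpy as np

preds = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
ensemble_pred = [
    int(np.argmax(np.bincount([p[i] for p in preds])))  # count votes at position i
    for i in range(len(preds[0]))
]
print(ensemble_pred)  # -> [1, 1, 1]
```

Each position now gets the label that the majority of models predicted (1 wins 2-to-1 at every position here), instead of np.bincount receiving the whole 2-D list at once.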