
Stacking and blending in ML Python - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
Difference between stacking and blending

Which statement correctly describes the main difference between stacking and blending in ensemble learning?

A. Stacking combines models by averaging predictions; blending uses majority voting.
B. Blending requires no separate data for meta-model training; stacking does.
C. Blending trains base models sequentially; stacking trains them in parallel.
D. Stacking uses cross-validation to train the meta-model, while blending uses a holdout validation set.
💡 Hint

Think about how the meta-model is trained in each method.
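
The distinction the hint points at can be sketched with scikit-learn. This is an illustrative single-base-model setup, not code from the quiz; the names `meta_stack` and `meta_blend` are made up for the example:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Stacking: out-of-fold predictions from cross-validation become the
# meta-features, so every row is predicted by a model that never saw it.
base = LogisticRegression(random_state=0)
oof_preds = cross_val_predict(base, X, y, cv=5, method="predict_proba")[:, 1]
meta_stack = LogisticRegression(random_state=0).fit(oof_preds.reshape(-1, 1), y)

# Blending: the base model is fit on one split, and the meta-model is fit
# on its predictions for a separate holdout split.
X_fit, X_hold, y_fit, y_hold = train_test_split(X, y, test_size=0.3, random_state=0)
base_blend = LogisticRegression(random_state=0).fit(X_fit, y_fit)
hold_preds = base_blend.predict_proba(X_hold)[:, 1]
meta_blend = LogisticRegression(random_state=0).fit(hold_preds.reshape(-1, 1), y_hold)
```

The key contrast: stacking reuses all of the training data via cross-validation, while blending sacrifices a holdout slice that only the meta-model learns from.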

Predict Output · intermediate
Output of stacking predictions code

What is the output of the following Python code snippet that performs stacking predictions?

Python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=100, n_features=4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Base models
model1 = LogisticRegression(random_state=42)
model2 = DecisionTreeClassifier(random_state=42)

# Train base models
model1.fit(X_train, y_train)
model2.fit(X_train, y_train)

# Generate base predictions for test set
preds1 = model1.predict_proba(X_test)[:, 1]
preds2 = model2.predict_proba(X_test)[:, 1]

# Stack predictions as features
stacked_features = np.column_stack((preds1, preds2))

# Meta-model
meta_model = LogisticRegression(random_state=42)
meta_model.fit(stacked_features, y_test)

# Final predictions
final_preds = meta_model.predict(stacked_features)

print(sum(final_preds))
A. 18
B. 20
C. 12
D. 16
💡 Hint

Count how many final predictions are positive (1) in the test set.

Hyperparameter · advanced
Choosing meta-model for stacking

Which meta-model choice is generally best when stacking base models with diverse prediction scales and distributions?

A. A linear regression model without regularization
B. A logistic regression model with L2 regularization
C. A decision tree with max_depth=1
D. A k-nearest neighbors model with k=1
💡 Hint

Consider a model that handles different scales and avoids overfitting.
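
A short sketch of the idea behind the hint, on synthetic data (the feature names and scales here are invented for illustration): when base-model outputs live on different scales, standardising the stacked features and using an L2-penalised logistic regression keeps any one base model from dominating by scale alone.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical base-model outputs on very different scales:
# probabilities in [0, 1] and raw margins with a much wider range.
rng = np.random.default_rng(0)
probs = rng.random(100)            # e.g. a predict_proba output
margins = rng.normal(0, 10, 100)   # e.g. a decision_function output
y = (probs + margins / 20 > 0.5).astype(int)

stacked = np.column_stack((probs, margins))

# Standardise the meta-features, then fit a logistic regression with the
# default L2 penalty (C=1.0) as the meta-model.
meta = make_pipeline(StandardScaler(), LogisticRegression(C=1.0, random_state=0))
meta.fit(stacked, y)
```

Without the scaler and the penalty, the coefficient on the wide-range feature would be driven by its scale rather than its predictive value.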

Metrics · advanced
Evaluating blending ensemble performance

You trained a blending ensemble with three base classifiers and a meta-model on a holdout set. Which metric best reflects the meta-model's ability to improve over base models?

A. Training loss of base models
B. Mean squared error of base models on training data
C. F1-score of the meta-model on the holdout set
D. Accuracy of the meta-model on the holdout set
💡 Hint

Think about a metric that balances precision and recall for classification.
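
As a rough sketch of the comparison the question describes (illustrative data and model names; for brevity the meta-model is scored on the same holdout it was trained on, whereas an unbiased estimate would need a third split):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=6, random_state=1)
X_fit, X_hold, y_fit, y_hold = train_test_split(X, y, test_size=0.3, random_state=1)

# Two base classifiers fit on the training split.
base1 = LogisticRegression(random_state=1).fit(X_fit, y_fit)
base2 = DecisionTreeClassifier(random_state=1).fit(X_fit, y_fit)

# Blending: meta-model trained on the base models' holdout predictions.
hold_feats = np.column_stack((base1.predict_proba(X_hold)[:, 1],
                              base2.predict_proba(X_hold)[:, 1]))
meta = LogisticRegression(random_state=1).fit(hold_feats, y_hold)

# Compare F1 of the meta-model against a base model on the holdout set.
meta_f1 = f1_score(y_hold, meta.predict(hold_feats))
base_f1 = f1_score(y_hold, base1.predict(X_hold))
print(meta_f1, base_f1)
```

F1 balances precision and recall, which is why it reflects the ensemble's improvement better than training-set losses of the base models.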

🔧 Debug · expert
Debugging stacking code with data leakage

Consider this stacking code snippet. What is the main issue causing data leakage?

from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

base_model = LogisticRegression(random_state=0)
base_model.fit(X_train, y_train)

# Using base model predictions on training data to train meta-model
train_preds = base_model.predict_proba(X_train)[:, 1]
meta_model = LogisticRegression(random_state=0)
meta_model.fit(train_preds.reshape(-1, 1), y_train)

# Predict on test data
test_preds = base_model.predict_proba(X_test)[:, 1]
final_preds = meta_model.predict(test_preds.reshape(-1, 1))

print(sum(final_preds))
A. The meta-model is trained on base model predictions from the same training data, causing overfitting.
B. The base model is not trained before generating predictions.
C. The test data is used to train the meta-model.
D. The base model predictions are not reshaped correctly before meta-model training.
💡 Hint

Think about how the meta-model training data is generated.
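
One standard way to repair the leakage in the snippet above, sketched here with scikit-learn's `cross_val_predict` (a common fix, not the only one): generate the meta-model's training features out-of-fold, so no row is scored by a model that was fit on it.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

base_model = LogisticRegression(random_state=0)

# Out-of-fold predictions: each training row is predicted by a fold-model
# that never saw it, so the meta-model trains on honest scores.
train_preds = cross_val_predict(base_model, X_train, y_train, cv=5,
                                method="predict_proba")[:, 1]

meta_model = LogisticRegression(random_state=0)
meta_model.fit(train_preds.reshape(-1, 1), y_train)

# Refit the base model on the full training set before producing
# the test-time features for the meta-model.
base_model.fit(X_train, y_train)
test_preds = base_model.predict_proba(X_test)[:, 1]
final_preds = meta_model.predict(test_preds.reshape(-1, 1))
```

The original snippet fed the meta-model in-sample predictions from a base model fit on the very same rows, which is exactly the leakage option A describes.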