ML Python · ~10 mins

Feature importance explanation in ML Python - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to import the function that calculates feature importance from a trained tree model.

ML Python
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier()
model.fit(X_train, y_train)
importance = model.[1]
A. fit
B. predict
C. score
D. feature_importances_
Common Mistakes
Using the method 'predict' instead of the attribute for importance.
Trying to call 'fit' again instead of accessing importance.
Using 'score' which returns accuracy, not feature importance.
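To see the correct pattern in context, here is a minimal runnable sketch. The training data is a hypothetical toy set generated with `make_classification` (not part of the task); the key point is that `feature_importances_` is an attribute, accessed without parentheses, after `fit` has been called.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical toy data standing in for X_train, y_train
X_train, y_train = make_classification(
    n_samples=100, n_features=3, n_informative=2, n_redundant=1, random_state=0
)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

# Attribute access, not a method call: one value per input feature
importance = model.feature_importances_
print(importance)
```

The importances of a fitted `RandomForestClassifier` form an array with one entry per feature, and the entries sum to 1.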
Task 2: Fill in the blank (medium)

Complete the code to plot the feature importances using matplotlib.

ML Python
import matplotlib.pyplot as plt
features = ['age', 'income', 'score']
importances = model.feature_importances_
plt.bar(features, [1])
plt.show()
A. importances
B. range(len(features))
C. model
D. features
Common Mistakes
Using feature names as bar heights instead of importance values.
Passing the model object instead of importance values.
Using range indices instead of actual importance values.
Task 3: Fill in the blank (hard)

Fix the error in the code to get feature importance from a trained linear model.

ML Python
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train, y_train)
importance = model.[1]
A. intercept_
B. coef_
C. predict_proba
D. feature_importances_
Common Mistakes
Trying to use feature_importances_ which is for tree models only.
Using predict_proba which is a method, not importance.
Using intercept_ which is the bias term, not feature importance.
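A runnable sketch of the linear-model case, again using hypothetical `make_classification` data in place of `X_train` and `y_train`. Linear models such as `LogisticRegression` expose per-feature coefficients via `coef_`; `feature_importances_` does not exist on them.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data standing in for X_train, y_train
X_train, y_train = make_classification(
    n_samples=100, n_features=3, n_informative=2, n_redundant=1, random_state=0
)

model = LogisticRegression()
model.fit(X_train, y_train)

# Coefficients act as importance for linear models; shape is (1, n_features)
# for binary classification
importance = model.coef_
print(importance.shape)
```

Unlike tree importances, coefficients can be negative; their magnitude (on comparably scaled features) indicates strength of influence, and their sign indicates direction.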
Task 4: Fill in the blank (hard)

Fill both blanks to create a dictionary mapping features to their importance scores, filtering only those with importance greater than 0.1.

ML Python
important_features = {feature: importance for feature, importance in zip([1], [2]) if importance > 0.1}
A. features
B. importances
C. model
D. X_train.columns
Common Mistakes
Using the model object instead of feature names or importance values.
Swapping the order of features and importances in zip.
Using dataset columns without matching importance values.
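The filled-in comprehension, runnable with hypothetical names and scores standing in for `features` and `model.feature_importances_`. `zip` pairs each name with its score in order, which is why the two sequences must line up.

```python
# Hypothetical names and scores standing in for features / model.feature_importances_
features = ['age', 'income', 'score']
importances = [0.62, 0.08, 0.30]

# zip pairs each name with its score; the condition keeps only strong features
important_features = {feature: importance
                      for feature, importance in zip(features, importances)
                      if importance > 0.1}
print(important_features)  # {'age': 0.62, 'score': 0.3}
```

Swapping the two arguments to `zip` would silently key the dictionary by importance values instead of names, so order matters even though the code still runs.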
Task 5: Fill in the blank (hard)

Fill all three blanks to compute and print the top 3 features by importance from a trained RandomForest model.

ML Python
import numpy as np
indices = np.argsort(model.[1])[::-1]
top_features = [features[i] for i in indices[:[2]]]
top_importances = [model.feature_importances_[i] for i in indices[:[3]]]
for f, imp in zip(top_features, top_importances):
    print(f"Feature: {f}, Importance: {imp:.2f}")
A. feature_importances_
B. 3
D. coef_
Common Mistakes
Using coef_ which is not available for RandomForest.
Using different numbers for second and third blanks causing mismatch.
Not reversing the sorted indices to get descending order.
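The same top-3 pattern as a self-contained sketch, with a hypothetical importance array replacing `model.feature_importances_`. `np.argsort` sorts ascending, so the `[::-1]` reversal is what puts the most important features first; the same slice bound must be used for both lists so names and values stay paired.

```python
import numpy as np

# Hypothetical importances standing in for model.feature_importances_
features = ['age', 'income', 'score', 'tenure']
importances = np.array([0.1, 0.4, 0.2, 0.3])

indices = np.argsort(importances)[::-1]  # reverse ascending sort -> descending
top_features = [features[i] for i in indices[:3]]
top_importances = [importances[i] for i in indices[:3]]
for f, imp in zip(top_features, top_importances):
    print(f"Feature: {f}, Importance: {imp:.2f}")
```

With the values above this prints `income`, `tenure`, then `score`; dropping `[::-1]` would instead report the three least important features.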