Complete the code to retrieve the feature importances from a trained tree model.
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier()
model.fit(X_train, y_train)
importance = model.[1]
The feature_importances_ attribute of a trained RandomForestClassifier gives the importance scores of each feature.
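A minimal sketch of the completed answer, trained on synthetic data from make_classification (the dataset and parameters here are illustrative, not from the question):

```python
# Sketch: fit a RandomForest on synthetic data and read feature_importances_
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X_train, y_train = make_classification(
    n_samples=200, n_features=3, n_informative=2, n_redundant=1, random_state=0
)
model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

# The blank [1] is filled with the attribute feature_importances_
importance = model.feature_importances_  # one score per feature; scores sum to 1.0
print(importance)
```

Note that feature_importances_ is an attribute computed during fit, not a method, so it is accessed without parentheses.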
Complete the code to plot the feature importances using matplotlib.
import matplotlib.pyplot as plt

features = ['age', 'income', 'score']
importances = model.feature_importances_
plt.bar(features, [1])
plt.show()
The bar heights should be the feature importance values stored in importances.
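A filled-in sketch, with hard-coded stand-in values in place of a fitted model's feature_importances_ (the numbers are illustrative):

```python
# Sketch: blank [1] is filled with `importances`, which sets the bar heights
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

features = ['age', 'income', 'score']
importances = [0.2, 0.5, 0.3]  # stand-in for model.feature_importances_

bars = plt.bar(features, importances)
plt.ylabel("Importance")
plt.savefig("importances.png")
```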
Fix the error in the code to get feature importance from a trained linear model.
from sklearn.linear_model import LogisticRegression

model = LogisticRegression()
model.fit(X_train, y_train)
importance = model.[1]
Why the other options are wrong:
feature_importances_ is for tree models only.
predict_proba is a method, not an importance attribute.
intercept_ is the bias term, not a feature importance.
Linear models like LogisticRegression use coef_ to represent feature importance as coefficients.
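A minimal sketch of the corrected answer on synthetic binary-classification data (the dataset is illustrative):

```python
# Sketch: the blank is filled with coef_, the coefficient array of a linear model
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X_train, y_train = make_classification(
    n_samples=200, n_features=3, n_informative=3, n_redundant=0, random_state=0
)
model = LogisticRegression()
model.fit(X_train, y_train)

importance = model.coef_  # shape (1, n_features) for binary classification
print(importance)
```

Unlike feature_importances_, coefficients can be negative; their magnitude (on comparably scaled features) is what indicates importance.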
Fill both blanks to create a dictionary mapping features to their importance scores, filtering only those with importance greater than 0.1.
important_features = {feature: importance
                      for feature, importance in zip([1], [2])
                      if importance > 0.1}

We zip feature names with their importance values and filter by importance > 0.1.
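A filled-in sketch with blank [1] as the feature names and blank [2] as the importance values (the example data is illustrative):

```python
# Sketch: [1] -> features, [2] -> importances; the comprehension keeps scores > 0.1
features = ['age', 'income', 'score']
importances = [0.05, 0.6, 0.35]  # stand-in for model.feature_importances_

important_features = {feature: importance
                      for feature, importance in zip(features, importances)
                      if importance > 0.1}
print(important_features)  # 'age' is dropped because 0.05 <= 0.1
```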
Fill all three blanks to compute and print the top 3 features by importance from a trained RandomForest model.
import numpy as np

indices = np.argsort(model.[1])[::-1]
top_features = [features[i] for i in indices[:[2]]]
top_importances = [model.feature_importances_[i] for i in indices[:[3]]]
for f, imp in zip(top_features, top_importances):
    print(f"Feature: {f}, Importance: {imp:.2f}")
Why the distractor is wrong: coef_ is not available for RandomForest.
We sort the feature_importances_ in descending order, then select the top 3 features and their importances to print, so [1] is feature_importances_ and [2] and [3] are both 3.
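A filled-in sketch using a hard-coded array in place of a fitted model's feature_importances_ (the feature names and values are illustrative):

```python
# Sketch: [1] -> feature_importances_, [2] and [3] -> 3
import numpy as np

features = ['age', 'income', 'score', 'tenure']
feature_importances_ = np.array([0.1, 0.4, 0.2, 0.3])  # stand-in for model.feature_importances_

indices = np.argsort(feature_importances_)[::-1]  # indices sorted by importance, descending
top_features = [features[i] for i in indices[:3]]
top_importances = [feature_importances_[i] for i in indices[:3]]
for f, imp in zip(top_features, top_importances):
    print(f"Feature: {f}, Importance: {imp:.2f}")
```

np.argsort sorts ascending, so the [::-1] slice reverses it to put the largest importances first.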