Complete the code to create a simple feature pipeline step using scikit-learn.
from sklearn.preprocessing import [1]

scaler = [1]()
StandardScaler is a common feature scaling step in pipelines: it standardizes each feature by removing its mean and scaling it to unit variance.
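For reference, a minimal sketch of the completed exercise, using a small hypothetical array with two differently scaled features:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical sample data: two features on very different scales
X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# After scaling, each column has mean ~0 and unit variance
print(X_scaled.mean(axis=0))
print(X_scaled.std(axis=0))
```

StandardScaler uses the population standard deviation (ddof=0), so the scaled columns have a standard deviation of exactly 1.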
Complete the code to combine two feature transformers into a pipeline.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

pipeline = Pipeline(steps=[
    ('scaler', StandardScaler()),
    ('[1]', PCA(n_components=2)),
])
The name 'pca' is a common step name for the PCA transformer in pipelines.
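A completed sketch of this pipeline, run on hypothetical random data to show that the two steps chain together:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical data: 10 samples, 4 features
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))

pipeline = Pipeline(steps=[
    ('scaler', StandardScaler()),   # step 1: standardize features
    ('pca', PCA(n_components=2)),   # step 2: reduce to 2 components
])

X_2d = pipeline.fit_transform(X)
print(X_2d.shape)  # (10, 2)
```

Each step is a `(name, transformer)` tuple; the names ('scaler', 'pca') are arbitrary labels used to address steps, e.g. in grid-search parameter grids.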
Fix the error in the pipeline code by completing the missing method call.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

pipeline = Pipeline(steps=[
    ('scaler', StandardScaler()),
    ('pca', PCA(n_components=2)),
])

X_transformed = pipeline.[1](X_train)
The fit_transform method fits every step of the pipeline to the data and then returns the transformed data in a single call, equivalent to calling fit followed by transform.
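A completed sketch with hypothetical training data, also checking that fit_transform matches a separate fit-then-transform:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical training data: 20 samples, 5 features
rng = np.random.default_rng(42)
X_train = rng.normal(size=(20, 5))

pipeline = Pipeline(steps=[
    ('scaler', StandardScaler()),
    ('pca', PCA(n_components=2)),
])

# Fits all steps on X_train, then returns the transformed data
X_transformed = pipeline.fit_transform(X_train)

# Equivalent to fitting first and transforming afterwards
X_two_step = pipeline.fit(X_train).transform(X_train)
print(np.allclose(X_transformed, X_two_step))
```

On new data (e.g. a test set) you would call only `pipeline.transform`, reusing the statistics and components learned from the training data.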
Fill both blanks to create a ColumnTransformer that applies scaling to numeric and encoding to categorical features.
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

preprocessor = ColumnTransformer(transformers=[
    ('num', [1], numeric_features),
    ('cat', [2], categorical_features),
])
Numeric features are scaled with StandardScaler, and categorical features are encoded with OneHotEncoder.
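A completed sketch using a hypothetical DataFrame with one numeric and one categorical column:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

# Hypothetical toy data
df = pd.DataFrame({
    'age': [25.0, 32.0, 47.0],
    'city': ['NY', 'LA', 'NY'],
})
numeric_features = ['age']
categorical_features = ['city']

preprocessor = ColumnTransformer(transformers=[
    ('num', StandardScaler(), numeric_features),   # scale numeric columns
    ('cat', OneHotEncoder(), categorical_features),  # one-hot encode categoricals
])

X = preprocessor.fit_transform(df)
print(X.shape)  # (3, 3): 1 scaled column + 2 one-hot columns
```

Each transformer tuple is `(name, transformer, columns)`, so different column subsets can receive different preprocessing in one object.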
Fill all three blanks to build a pipeline that preprocesses data and fits a logistic regression model.
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression

pipeline = Pipeline(steps=[
    ('preprocessor', [1]),
    ('classifier', LogisticRegression([2]=[3])),
])
The pipeline chains a preprocessor step with a LogisticRegression classifier, where max_iter=100 caps the number of solver iterations.
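A completed end-to-end sketch, reusing the ColumnTransformer from the previous exercise as the preprocessor and fitting on a small hypothetical dataset:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression

# Hypothetical toy dataset with binary labels
df = pd.DataFrame({
    'age': [25.0, 32.0, 47.0, 51.0, 38.0, 29.0],
    'city': ['NY', 'LA', 'NY', 'LA', 'NY', 'LA'],
})
y = np.array([0, 1, 0, 1, 0, 1])

preprocessor = ColumnTransformer(transformers=[
    ('num', StandardScaler(), ['age']),
    ('cat', OneHotEncoder(), ['city']),
])

pipeline = Pipeline(steps=[
    ('preprocessor', preprocessor),
    ('classifier', LogisticRegression(max_iter=100)),
])

# fit() runs preprocessing and model training in one call
pipeline.fit(df, y)
preds = pipeline.predict(df)
print(preds.shape)  # (6,)
```

Because preprocessing lives inside the pipeline, calling `predict` on new data automatically applies the same scaling and encoding learned during `fit`.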