
Feature engineering pipelines in MLOps - Commands & Configuration

Introduction
Feature engineering pipelines automate the process of transforming raw data into useful features for machine learning models. They ensure the same steps are applied consistently during training and prediction, reducing errors and saving time.
When you want to clean and transform data before training a machine learning model.
When you need to apply the same data transformations to new data during model prediction.
When you want to organize multiple feature transformations into a single reusable workflow.
When you want to avoid repeating manual data processing steps and reduce mistakes.
When you want to track and reproduce feature transformations as part of your ML workflow.
Commands
Install the scikit-learn library, which provides the tools used to build feature engineering pipelines.
Terminal
pip install scikit-learn
Expected Output
Collecting scikit-learn
  Downloading scikit_learn-1.2.2-cp39-cp39-manylinux_2_17_x86_64.whl (23.3 MB)
Installing collected packages: scikit-learn
Successfully installed scikit-learn-1.2.2
Run the Python script that creates and applies a feature engineering pipeline to sample data.
Terminal
python feature_pipeline.py
Expected Output
Original data:
   age  salary city
0   25   50000   NY
1   32   60000   SF
2   40   80000   LA

Transformed features:
[[-1.19664224 -1.06904497  0.          1.          0.        ]
 [-0.05439283 -0.26726124  0.          0.          1.        ]
 [ 1.25103508  1.33630621  1.          0.          0.        ]]
Key Concept

If you remember nothing else from this pattern, remember: pipelines automate and standardize feature transformations to keep data consistent and reproducible.

Code Example
Python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
import pandas as pd

# Sample raw data
raw_data = pd.DataFrame({
    'age': [25, 32, 40],
    'salary': [50000, 60000, 80000],
    'city': ['NY', 'SF', 'LA']
})

print("Original data:")
print(raw_data)

# Define which columns are numeric and which are categorical
numeric_features = ['age', 'salary']
categorical_features = ['city']

# Create transformers for numeric and categorical data
numeric_transformer = StandardScaler()
categorical_transformer = OneHotEncoder()

# Combine transformers into a preprocessor
preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numeric_features),
        ('cat', categorical_transformer, categorical_features)
    ])

# Create a pipeline that applies the preprocessor
feature_pipeline = Pipeline(steps=[('preprocessor', preprocessor)])

# Fit the pipeline on raw data and transform it
transformed_features = feature_pipeline.fit_transform(raw_data)

print("\nTransformed features:")
print(transformed_features.toarray() if hasattr(transformed_features, 'toarray') else transformed_features)
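The fitted pipeline can then be reused at prediction time. A minimal sketch, assuming the same setup as the example above (`new_data` is a hypothetical serving-time row; `handle_unknown='ignore'` is added so transform does not fail on cities unseen during training):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
import pandas as pd

# Training data from the example above
train = pd.DataFrame({
    'age': [25, 32, 40],
    'salary': [50000, 60000, 80000],
    'city': ['NY', 'SF', 'LA']
})

preprocessor = ColumnTransformer(transformers=[
    ('num', StandardScaler(), ['age', 'salary']),
    ('cat', OneHotEncoder(handle_unknown='ignore'), ['city'])
])
feature_pipeline = Pipeline(steps=[('preprocessor', preprocessor)])

# Fit once on the training data: the scaler learns the mean and
# standard deviation, the encoder learns the category list.
feature_pipeline.fit(train)

# At prediction time, call transform (not fit_transform) so new rows
# are scaled and encoded with the statistics learned from training.
new_data = pd.DataFrame({'age': [29], 'salary': [55000], 'city': ['SF']})
print(feature_pipeline.transform(new_data))
```

Because the pipeline was fitted on the training data, the new row comes out as one row of five features: two z-scored numeric columns followed by the one-hot city indicators.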
Common Mistakes
Applying feature transformations separately during training and prediction.
This causes inconsistent data processing and can lead to poor model performance or errors.
Use a pipeline object that bundles all transformations and apply it both during training and prediction.
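A small sketch of why this matters, using a single numeric column (the values here are illustrative, not from the example above). Refitting a scaler on the serving batch silently learns that batch's statistics instead of the training statistics:

```python
from sklearn.preprocessing import StandardScaler
import pandas as pd

train = pd.DataFrame({'age': [25, 32, 40]})
serve = pd.DataFrame({'age': [30]})  # a single row arriving at prediction time

# Wrong: refitting on the serving batch learns that batch's statistics.
# With one row the batch mean is 30, so the "scaled" value collapses to 0.0.
wrong = StandardScaler().fit_transform(serve)

# Right: reuse the scaler fitted on the training data, so the serving
# row is scaled with the training mean and standard deviation.
scaler = StandardScaler().fit(train)
right = scaler.transform(serve)

print(wrong)   # [[0.]]
print(right)   # roughly [[-0.38]]
```

The same row produces two different feature values; a model trained on the right-hand convention will misread the left-hand one.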
Not fitting the pipeline on training data before transforming new data.
Transformers like scalers need to learn parameters from training data; skipping fit causes errors or wrong results.
Always call fit or fit_transform on training data before transforming new data.
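scikit-learn enforces this: transforming with an unfitted transformer raises `NotFittedError`. A minimal demonstration:

```python
from sklearn.preprocessing import StandardScaler
from sklearn.exceptions import NotFittedError
import pandas as pd

data = pd.DataFrame({'age': [25, 32, 40]})
scaler = StandardScaler()

# Transforming before fitting raises NotFittedError, because the
# scaler has not yet learned a mean and standard deviation.
try:
    scaler.transform(data)
except NotFittedError as e:
    print("Not fitted yet:", e)

# Fit first (or use fit_transform), then transform works.
scaler.fit(data)
print(scaler.transform(data))
```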
Summary
Install scikit-learn to access pipeline and transformer tools.
Create a pipeline combining numeric scaling and categorical encoding.
Fit the pipeline on training data and transform it to get consistent features.