
Feature extraction approach in TensorFlow

Introduction

Feature extraction pulls useful, pre-learned representations out of data so a model can learn better and faster. It is a good fit in situations such as:

When you want to use a pre-trained model to obtain useful features without training from scratch.
When your dataset is small and training a full model from scratch is impractical.
When you want to speed up training by reusing learned features.
When you want to improve model accuracy by building on strong, well-tested features.
When you want to understand which parts of the data matter most for predictions.
Syntax
TensorFlow
base_model = tf.keras.applications.MobileNetV2(input_shape=(224,224,3), include_top=False, weights='imagenet')
base_model.trainable = False

inputs = tf.keras.Input(shape=(224,224,3))
x = base_model(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inputs, outputs)

Set include_top=False to remove the final classification layer and get features.

Freeze the base model weights by setting trainable = False to avoid changing pre-trained features.
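The effect of these two settings can be checked directly. The minimal sketch below uses weights=None only so it runs without downloading the pretrained weights; in practice you would pass weights='imagenet' as in the syntax above.

```python
import tensorflow as tf

# include_top=False returns a 4-D feature map instead of class scores.
# weights=None avoids downloading pretrained weights for this check;
# use weights='imagenet' in real use.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
base_model.trainable = False

print(base_model.output_shape)            # (None, 7, 7, 1280) feature map
print(len(base_model.trainable_weights))  # 0 after freezing
```

With the top removed, the model outputs a 7x7x1280 feature map for 224x224 inputs, and freezing leaves no trainable weights in the base.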

Examples
Using ResNet50 as the feature extractor without its top layer.
TensorFlow
base_model = tf.keras.applications.ResNet50(input_shape=(224,224,3), include_top=False, weights='imagenet')
base_model.trainable = False
Extract features and add new layers for a 10-class classification task.
TensorFlow
inputs = tf.keras.Input(shape=(224,224,3))
x = base_model(inputs, training=False)
x = tf.keras.layers.Flatten()(x)
outputs = tf.keras.layers.Dense(10, activation='softmax')(x)
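The two snippets above can be combined into one complete 10-class model, and the frozen base can also be used on its own to extract features in a single pass. In this sketch, weights=None is substituted only so it runs without downloading the pretrained weights; use weights='imagenet' in practice.

```python
import numpy as np
import tensorflow as tf

# Frozen ResNet50 backbone (weights=None here; weights='imagenet' in practice)
base_model = tf.keras.applications.ResNet50(
    input_shape=(224, 224, 3), include_top=False, weights=None)
base_model.trainable = False

# New classification head for 10 classes
inputs = tf.keras.Input(shape=(224, 224, 3))
x = base_model(inputs, training=False)
x = tf.keras.layers.Flatten()(x)
outputs = tf.keras.layers.Dense(10, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)

# The frozen base can also extract features directly, without a head:
features = base_model.predict(
    np.random.random((2, 224, 224, 3)).astype('float32'))
print(features.shape)  # (2, 7, 7, 2048)
```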
Sample Model

This example shows how to use MobileNetV2 as a feature extractor. We freeze its weights, add new layers, train on dummy data, and make predictions.

TensorFlow
import numpy as np
import tensorflow as tf

# Load pre-trained MobileNetV2 without its top classification layer
base_model = tf.keras.applications.MobileNetV2(input_shape=(224,224,3), include_top=False, weights='imagenet')
base_model.trainable = False

# Build a new model on top of the frozen feature extractor
inputs = tf.keras.Input(shape=(224,224,3))
x = base_model(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inputs, outputs)

# Compile the model for binary classification
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Create dummy data (10 images, 224x224 RGB) and binary labels
data = np.random.random((10,224,224,3)).astype('float32')
labels = np.random.randint(0, 2, size=(10,1))

# Train model for 2 epochs
history = model.fit(data, labels, epochs=2, verbose=2)

# Predict on new dummy data
predictions = model.predict(data[:2])
print('Predictions:', predictions.flatten())
Important Notes

Feature extraction saves time by reusing knowledge learned from large datasets such as ImageNet.

Always preprocess input images the same way the pre-trained model expects.
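Each model under tf.keras.applications ships a matching preprocess_input helper. For MobileNetV2 it scales pixel values from [0, 255] to [-1, 1], which is what the pretrained weights expect:

```python
import numpy as np
import tensorflow as tf

# Raw images with pixel values in [0, 255]
images = np.random.randint(0, 256, size=(2, 224, 224, 3)).astype('float32')

# MobileNetV2's helper rescales to the [-1, 1] range the model was trained on
scaled = tf.keras.applications.mobilenet_v2.preprocess_input(images)
print(float(scaled.min()), float(scaled.max()))  # both within [-1, 1]
```

Feeding unscaled images to a pretrained backbone is a common cause of poor accuracy, so apply the helper to both training and prediction inputs.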

Freezing the base model prevents losing the useful features it learned.
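You can confirm what freezing leaves trainable by counting the trainable weights of the assembled model. In this sketch (weights=None stands in for weights='imagenet' so it runs offline), only the new Dense head's kernel and bias remain trainable:

```python
import tensorflow as tf

# Frozen backbone (weights=None here; weights='imagenet' in practice)
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
base.trainable = False

# Same head as the Sample Model above
inputs = tf.keras.Input(shape=(224, 224, 3))
x = base(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inputs, outputs)

# Only the Dense layer's kernel and bias are updated during training
print(len(model.trainable_weights))  # 2
```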

Summary

Feature extraction uses pre-trained models to get useful data features.

It helps when you have little data or want faster training.

Freeze the base model and add your own layers for your task.