Computer Vision · ML · ~12 mins

Fine-tuning approach in Computer Vision - Model Pipeline Trace

Model Pipeline - Fine-tuning approach

This pipeline shows how a pre-trained computer vision model is adapted to a new task by fine-tuning. It starts with input images, processes them through a pre-trained model, then retrains some layers on new data to improve accuracy for the new task.

Data Flow - 5 Stages
Stage 1: Input images (1000 × 224 × 224 × 3 → 1000 × 224 × 224 × 3)
Raw images are loaded and resized to 224×224 pixels with 3 color channels (RGB).
Example: an image of a cat resized to 224×224 pixels.

Stage 2: Preprocessing (1000 × 224 × 224 × 3 → 1000 × 224 × 224 × 3)
Pixel values are normalized to the range 0–1.
Example: pixel values scaled from 0–255 to 0–1.

Stage 3: Feature extraction with pre-trained model (1000 × 224 × 224 × 3 → 1000 × 7 × 7 × 512)
Images pass through the pre-trained convolutional layers (frozen weights).
Example: a feature map representing edges and textures.

Stage 4: Fine-tuning layers (1000 × 7 × 7 × 512 → 1000 × 7 × 7 × 512, updated weights)
The last convolutional block is unfrozen and retrained on the new data.
Example: the model adjusts its filters to better detect features of the new task.

Stage 5: Classification head (1000 × 7 × 7 × 512 → 1000 × 10 classes)
Features are flattened and passed through dense layers to output class probabilities.
Example: output probabilities for 10 object categories.
Training Trace - Epoch by Epoch

Epoch 1: ************ (loss=1.2)
Epoch 2: *********    (loss=0.9)
Epoch 3: *******      (loss=0.7)
Epoch 4: ******       (loss=0.6)
Epoch 5: *****        (loss=0.55)
Epoch | Loss ↓ | Accuracy ↑ | Observation
1     | 1.2    | 0.55       | Initial fine-tuning starts with moderate accuracy and high loss
2     | 0.9    | 0.68       | Loss decreases and accuracy improves as the model adapts
3     | 0.7    | 0.75       | Continued improvement shows effective fine-tuning
4     | 0.6    | 0.80       | Model learns task-specific features better
5     | 0.55   | 0.83       | Training converges with good accuracy and low loss
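The loop that produces an epoch-by-epoch trace like the one above can be sketched as follows. To keep the example self-contained, a tiny stand-in model replaces the full pipeline, and its first linear layer plays the role of the frozen backbone; the key point is that only the unfrozen parameters are handed to the optimizer.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the fine-tuning pipeline: pretend model[1] is a frozen backbone.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 8 * 8, 16),  # "frozen backbone" layer
    nn.ReLU(),
    nn.Linear(16, 10),         # trainable classification head
)
model[1].requires_grad_(False)

# Fine-tuning detail: pass only the trainable (unfrozen) parameters to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(32, 3, 8, 8)        # toy batch of images
y = torch.randint(0, 10, (32,))    # toy labels for 10 classes

losses = []
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

With the same batch each epoch the loss shrinks step by step, mirroring the decreasing loss column in the table (though the exact values there come from the module's own run, not this toy).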
Prediction Trace - 5 Layers
Layer 1: Input image
Layer 2: Pre-trained convolutional layers
Layer 3: Fine-tuned convolutional block
Layer 4: Flatten and dense layers
Layer 5: Final prediction
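A single forward pass walks through these five layers. The sketch below uses a small stand-in network (hypothetical layer sizes, 8×8 input) rather than the full VGG-sized model, so it runs instantly; the structure — input, convolutional layers, flatten plus dense layers, softmax probabilities — matches the trace.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Layers 2-3: convolutional feature extractor (pre-trained + fine-tuned block).
conv = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),           # 8x8 -> 4x4 feature maps
)
# Layer 4: flatten and dense layers.
head = nn.Sequential(nn.Flatten(), nn.Linear(8 * 4 * 4, 10))

image = torch.rand(1, 3, 8, 8)            # Layer 1: input image
features = conv(image)                    # Layers 2-3: feature maps
logits = head(features)                   # Layer 4: class scores
probs = torch.softmax(logits, dim=1)      # Layer 5: class probabilities
predicted_class = probs.argmax(dim=1)     # highest-probability category
```

The softmax at the end converts raw scores into probabilities that sum to 1, which is what the classification head in Stage 5 reports.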
Model Quiz - 3 Questions
Test your understanding
Why do we freeze most layers of the pre-trained model during fine-tuning?
A. To speed up training by skipping all layers
B. To keep learned general features and only adapt specific layers
C. Because frozen layers improve accuracy automatically
D. To prevent the model from making any changes
Key Insight
Fine-tuning leverages existing knowledge from a pre-trained model and adapts it to a new task by retraining only some layers. This approach saves time and data while improving accuracy for the new problem.