Computer Vision · ~12 min

Privacy considerations in Computer Vision - Model Pipeline Trace


This pipeline shows how a computer vision model processes images while respecting privacy. It blurs faces and removes sensitive information before training and prediction.

Data Flow - 5 Stages
1. Data Collection
Input: 1000 images × 256×256 pixels × 3 channels
Action: Collect raw images from cameras
Output: 1000 images × 256×256 pixels × 3 channels
Example: A street scene with people and cars
2. Privacy Filtering
Input: 1000 images × 256×256 pixels × 3 channels
Action: Detect and blur faces to protect identity
Output: 1000 images × 256×256 pixels × 3 channels
Example: The same street scene, but with faces blurred
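Stage 2 is the privacy-critical step. Below is a minimal NumPy sketch of the blurring half, assuming the face bounding boxes have already been produced by a detector; `blur_regions`, the box coordinates, and the kernel size are all illustrative, not the pipeline's actual code.

```python
import numpy as np

def blur_regions(image, boxes, kernel=15):
    """Mean-blur rectangular regions (e.g. detected faces) of an H x W x 3 image.

    `boxes` holds (y, x, h, w) tuples; in a real pipeline they would come
    from a face detector. This sketch assumes they are already known.
    """
    out = image.copy()
    pad = kernel // 2
    for y, x, h, w in boxes:
        region = out[y:y + h, x:x + w].astype(float)
        padded = np.pad(region, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
        blurred = np.empty_like(region)
        for i in range(h):
            for j in range(w):
                # Replace each pixel with the mean of its kernel x kernel
                # neighbourhood: crude, but it destroys identifying detail.
                blurred[i, j] = padded[i:i + kernel, j:j + kernel].mean(axis=(0, 1))
        out[y:y + h, x:x + w] = blurred.astype(image.dtype)
    return out

# Toy 32 x 32 image with one assumed "face" box at rows/cols 8..19.
img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
anon = blur_regions(img, [(8, 8, 12, 12)])
```

Pixels outside the boxes are untouched, so the rest of the scene (cars, street) stays usable for training.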
3. Feature Extraction
Input: 1000 images × 256×256 pixels × 3 channels
Action: Extract features using CNN layers
Output: 1000 samples × 128 features
Example: [0.12, 0.45, ..., 0.33] (feature vector for one image)
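Stage 3's reduction from 256×256×3 pixels to 128 features can be sketched with a stand-in extractor: average pooling as a crude convolution-and-pooling step, then a fixed random projection. A real pipeline would use learned CNN filters; `extract_features` and its parameters are illustrative.

```python
import numpy as np

def extract_features(images, n_features=128, seed=0):
    """Reduce N x H x W x 3 images to N x n_features vectors.

    Stand-in for the CNN extractor: 4x4 average pooling (a crude
    convolution-and-pooling step) followed by a fixed random linear
    projection. A real pipeline would use learned convolutional filters.
    """
    rng = np.random.default_rng(seed)
    n, h, w, c = images.shape
    x = images.astype(float) / 255.0
    # Average-pool 4x4 blocks: (N, H, W, C) -> (N, H//4, W//4, C).
    pooled = x.reshape(n, h // 4, 4, w // 4, 4, c).mean(axis=(2, 4))
    flat = pooled.reshape(n, -1)
    proj = rng.standard_normal((flat.shape[1], n_features)) / np.sqrt(flat.shape[1])
    return flat @ proj

# 10 stand-in images (the pipeline uses 1000) -> a (10, 128) feature matrix.
imgs = np.random.randint(0, 256, (10, 256, 256, 3), dtype=np.uint8)
feats = extract_features(imgs)
```

Because the extractor runs on already-blurred images, the 128-dimensional features cannot encode the facial detail that was removed in stage 2.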
4. Model Training
Input: 800 samples × 128 features
Action: Train the model on the privacy-filtered training set
Output: Trained model
Note: The model learns to classify objects without revealing identities
5. Model Evaluation
Input: 200 samples × 128 features
Action: Evaluate the model on the held-out test set
Output: Accuracy and loss metrics
Example: Accuracy: 85%, Loss: 0.35
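Stages 4 and 5 split the 1000 feature vectors 800/200 and report test accuracy. Here is a sketch on synthetic data, with a nearest-centroid classifier standing in for the trained model; the labels and the classifier are illustrative, not the pipeline's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 privacy-filtered samples x 128 features, with synthetic binary
# labels driven by feature 0 (purely illustrative data).
X = rng.standard_normal((1000, 128))
y = (X[:, 0] > 0).astype(int)

# Shuffle, then take the 800/200 train/test split used in the trace.
order = rng.permutation(1000)
train, test = order[:800], order[800:]

# Nearest-centroid classifier: a stand-in for the trained model.
centroids = np.stack([X[train][y[train] == k].mean(axis=0) for k in (0, 1)])
dists = ((X[test][:, None, :] - centroids[None]) ** 2).sum(axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y[test]).mean()
```

The key property to notice: accuracy is computed only on the 200 held-out samples, never on the data the model was fit to.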
Training Trace - Epoch by Epoch
Loss
1.0 | *       
0.8 |  *      
0.6 |   *     
0.4 |    *    
0.2 |     *   
0.0 +---------
      1 2 3 4 5 Epochs
Epoch | Loss ↓ | Accuracy ↑ | Observation
------|--------|------------|------------
1     | 0.85   | 0.60       | Model starts learning: loss high, accuracy low
2     | 0.65   | 0.72       | Loss decreases, accuracy improves
3     | 0.50   | 0.80       | Model learns important features while preserving privacy
4     | 0.40   | 0.83       | Training converges; privacy filters do not harm learning
5     | 0.35   | 0.85       | Final epoch with good accuracy and low loss
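The table's convergence claim can be checked directly: loss strictly decreases and accuracy never drops across the five epochs. The values below are copied from the trace; only the checking code is new.

```python
# Loss / accuracy per epoch, copied from the trace table above.
trace = [(1, 0.85, 0.60), (2, 0.65, 0.72), (3, 0.50, 0.80),
         (4, 0.40, 0.83), (5, 0.35, 0.85)]

losses = [loss for _, loss, _ in trace]
accs = [acc for _, _, acc in trace]

# Convergence check: loss strictly decreases, accuracy never drops.
assert all(a > b for a, b in zip(losses, losses[1:]))
assert all(a <= b for a, b in zip(accs, accs[1:]))
for epoch, loss, acc in trace:
    print(f"epoch {epoch}: loss={loss:.2f}, accuracy={acc:.2f}")
```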
Prediction Trace - 4 Layers
Layer 1: Input Image
Layer 2: Feature Extraction (CNN)
Layer 3: Classification Layer
Layer 4: Prediction
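The four layers map onto a simple forward pass. Below is a NumPy sketch with random (untrained) weights and an assumed 10-class output; the image is downscaled from 256×256 to 64×64 to keep the sketch small. None of the weights here come from the document's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Layer 1: input image (privacy-filtered). 64 x 64 x 3 here, standing in
# for the 256 x 256 x 3 images used in the pipeline.
image = rng.random((64, 64, 3))

# Layer 2: feature extraction. Stand-in for the CNN: a fixed random
# projection of the flattened image down to the 128-dim feature space.
W_feat = rng.standard_normal((image.size, 128)) / np.sqrt(image.size)
features = image.ravel() @ W_feat        # shape (128,)

# Layer 3: classification layer over an assumed 10 object classes.
W_cls = rng.standard_normal((128, 10))
logits = features @ W_cls                # shape (10,)

# Layer 4: prediction = the class with the highest softmax probability.
probs = softmax(logits)
label = int(np.argmax(probs))
```

At prediction time the same privacy filtering from stage 2 must be applied before layer 1, so the model never sees unblurred faces.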
Model Quiz - 3 Questions
Test your understanding
Why is face blurring applied before training?
A. To increase image size
B. To protect people's identity in images
C. To improve model accuracy
D. To add color to images
Key Insight
Applying privacy filters like face blurring before training helps protect sensitive information without significantly harming model performance. The model learns useful features from privacy-safe data, balancing accuracy and privacy.