Computer Vision · ~20 mins

Model optimization (pruning, quantization) in Computer Vision - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️
Model Optimization Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual
intermediate
2:00 remaining
Understanding Model Pruning Effects

What is the primary effect of pruning a convolutional neural network model?

A. It increases the model size by adding redundant neurons to improve accuracy.
B. It converts the model weights from floating-point to integer format to save memory.
C. It reduces the number of parameters by removing less important weights, which can speed up inference.
D. It changes the model architecture by adding more convolutional layers.
Attempts:
2 left
💡 Hint

Think about how pruning affects model size and speed.
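To make the hint concrete, here is a minimal magnitude-pruning sketch (the weight matrix and threshold are made up purely for illustration): zeroing out small weights shrinks the nonzero parameter count while leaving layer shapes and architecture untouched.

```python
import numpy as np

# Hypothetical weight matrix, chosen only for illustration
weights = np.array([[0.05, -0.8, 0.02],
                    [1.3, -0.01, 0.4]])

# Magnitude pruning: zero out weights whose absolute value is below a threshold
threshold = 0.1
pruned = np.where(np.abs(weights) > threshold, weights, 0.0)

# The nonzero parameter count shrinks; the tensor shape stays the same
print(int(np.count_nonzero(weights)))  # 6
print(int(np.count_nonzero(pruned)))   # 3
```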

Predict Output
intermediate
2:00 remaining
Output of Quantization on Model Weights

Given a floating-point weight tensor [0.9, -1.2, 0.3, 2.7], what is the output after 8-bit symmetric quantization with scale 0.1 and zero-point 0?

import numpy as np
weights = np.array([0.9, -1.2, 0.3, 2.7])
scale = 0.1
zero_point = 0
# Quantize: divide by scale, round to nearest integer, add zero-point, cast to int8
quantized = np.round(weights / scale) + zero_point
quantized = quantized.astype(np.int8)
print(quantized.tolist())
A. [90, -120, 30, 270]
B. [9, -12, 3, 27]
C. [8, -11, 2, 26]
D. [10, -13, 4, 28]
Attempts:
2 left
💡 Hint

Divide each weight by the scale and round to nearest integer.
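As a worked illustration of the same recipe on different numbers (the tensor and scale below are arbitrary, not the problem's values), symmetric quantization with zero-point 0 reduces to dividing by the scale, rounding, and casting:

```python
import numpy as np

# Illustrative values only; zero-point is 0, so q = round(w / scale)
w = np.array([0.8, -1.5, 2.3])
scale = 0.1
q = np.round(w / scale).astype(np.int8)
print(q.tolist())  # [8, -15, 23]
```

In a full int8 pipeline the rounded values would also be clipped to the representable range before the cast; these examples all fit within it.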

Hyperparameter
advanced
2:00 remaining
Choosing Pruning Percentage

You want to prune a deep CNN model to reduce size but keep accuracy loss under 2%. Which pruning percentage is most reasonable to start with?

A. Prune 0% of weights, since pruning always reduces accuracy.
B. Prune 50% of weights to balance size and accuracy.
C. Prune 90% of weights to maximize compression.
D. Prune 10% of weights to keep accuracy mostly intact.
Attempts:
2 left
💡 Hint

Start with a small pruning amount to avoid big accuracy drops.
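One way to act on this hint is a sparsity sweep: accept a pruning ratio only while the measured accuracy loss stays within budget. The `evaluate` function below is a hypothetical stand-in for a real validation run, with a made-up accuracy curve:

```python
# Hypothetical accuracy-vs-sparsity curve, standing in for a real validation run
def evaluate(sparsity):
    return 0.92 - 0.0005 * sparsity ** 1.5

baseline = evaluate(0)
budget = 0.02  # tolerate at most a 2-percentage-point drop
best = 0
for sparsity in range(10, 100, 10):
    if baseline - evaluate(sparsity) <= budget:
        best = sparsity
print(best)  # 10: only the smallest sweep point stays within budget here
```

With a real model the curve is measured, not assumed, but the pattern is the same: start small and only increase sparsity while accuracy holds.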

Metrics
advanced
2:00 remaining
Evaluating Quantized Model Accuracy

After quantizing a model, you observe the accuracy dropped from 92% to 89%. What metric best describes this change?

A. Absolute accuracy drop of 3 percentage points.
B. Relative accuracy increase of 3%.
C. Accuracy remains unchanged.
D. Accuracy improved by 3 percentage points.
Attempts:
2 left
💡 Hint

Calculate the difference between original and quantized accuracy.
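Using the figures from the problem, the two ways of reporting the change (percentage points vs. relative percent) can be computed directly, and they are not the same number:

```python
# Accuracy before and after quantization, from the problem statement
original, quantized = 0.92, 0.89

absolute_drop = original - quantized               # difference in percentage points
relative_drop = (original - quantized) / original  # drop relative to the baseline

print(round(absolute_drop, 2))           # 0.03  -> 3 percentage points
print(round(relative_drop * 100, 2))     # 3.26  -> ~3.26% relative drop
```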

🔧 Debug
expert
2:00 remaining
Debugging Pruning Code Causing Runtime Error

Consider this pruning code snippet for a PyTorch model:

for name, param in model.named_parameters():
    if 'weight' in name:
        mask = (param.abs() > threshold)
        param.data = param.data * mask

What is the cause of the runtime error?

A. The threshold variable is not defined before use.
B. The condition 'weight' in name is incorrect and never true.
C. The loop modifies param.data in place, which is not allowed in PyTorch.
D. The mask is a boolean tensor, and multiplying it by param.data causes a type mismatch error.
Attempts:
2 left
💡 Hint

Look for undefined variables in the code.
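If the hint leads you to the fix, one corrected sketch is to define threshold before the loop and apply the mask under torch.no_grad() (the tiny nn.Linear below is a stand-in model added purely for illustration):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in model for illustration
threshold = 0.1          # the original snippet never defined this

with torch.no_grad():  # avoid tracking the masking op in autograd
    for name, param in model.named_parameters():
        if 'weight' in name:
            mask = (param.abs() > threshold).float()
            param.mul_(mask)  # in-place masked update of the weights

# Every surviving (nonzero) weight now exceeds the threshold in magnitude
assert bool((model.weight.abs()[model.weight != 0] > threshold).all())
```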