PyTorch · ~20 mins

Model optimization (quantization, pruning) in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
Understanding Quantization Impact on Model Size

Which of the following statements best describes the main effect of applying post-training quantization to a PyTorch model?

A. It increases the model size by adding extra layers to improve accuracy after training.
B. It reduces the model size by converting weights from 32-bit floats to lower-bit integers, often 8-bit, with minimal accuracy loss.
C. It removes neurons from the model to reduce size but always causes significant accuracy loss.
D. It converts the model to use 64-bit floating-point weights for higher precision.
💡 Hint

Think about how quantization changes the data type of weights and its effect on storage.
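To see this effect directly, here is a minimal sketch, with a hypothetical toy model and layer sizes that are illustrative rather than taken from the question. It applies PyTorch's dynamic post-training quantization and compares serialized sizes:

```python
import io

import torch
import torch.nn as nn

# Hypothetical toy model; any model with nn.Linear layers works the same way.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Post-training dynamic quantization: Linear weights are stored as int8
# instead of float32, so the serialized model shrinks roughly 4x.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def serialized_size(m: nn.Module) -> int:
    """Size in bytes of the module's saved state_dict."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

original_bytes = serialized_size(model)
quantized_bytes = serialized_size(quantized)
print(quantized_bytes < original_bytes)  # True: int8 weights need less storage
```

The layers themselves are unchanged in count; only the weight storage shrinks, which is why accuracy loss is typically small.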

Predict Output · intermediate
Output of Pruning Code on Model Parameters

Consider the following PyTorch code, which applies pruning to a linear layer. How many zero weights will model.fc.weight contain after pruning?

PyTorch
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 5)

model = SimpleModel()
prune.l1_unstructured(model.fc, name='weight', amount=0.4)

zero_weights = torch.sum(model.fc.weight == 0).item()
print(zero_weights)
A. 20
B. 40
C. 12
D. 0
💡 Hint

Calculate total weights and 40% of them.
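As a sanity check on that arithmetic, here is a small sketch using a deliberately different layer shape than the question, so it illustrates the mechanic without giving the answer away:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical layer, not the one from the question:
layer = nn.Linear(8, 4)  # 8 * 4 = 32 weights in total
prune.l1_unstructured(layer, name='weight', amount=0.25)

# l1_unstructured masks the fraction `amount` of weights with the
# smallest absolute values, so 32 * 0.25 = 8 weights become zero.
zero_count = torch.sum(layer.weight == 0).item()
print(zero_count)  # 8
```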

Hyperparameter · advanced
Choosing Pruning Amount for Accuracy Preservation

You want to prune a PyTorch model to reduce size but keep accuracy loss under 2%. Which pruning amount is most likely to meet this goal?

A. Pruning 5% of weights using L1 unstructured pruning
B. Pruning 10% of weights using structured pruning on entire channels
C. Pruning 50% of weights randomly without considering importance
D. Pruning 80% of weights using unstructured pruning
💡 Hint

Smaller pruning amounts usually preserve accuracy better.
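One way to reason about this choice is to sweep the pruning amounts from the answer choices and inspect the resulting sparsity. The layer shape below is hypothetical, and in practice you would measure validation accuracy at each amount rather than just sparsity:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical sweep over the pruning amounts from the answer choices.
# Each amount is applied to a fresh copy of the layer; accuracy impact
# would then be measured on a held-out validation set (omitted here).
for amount in (0.05, 0.10, 0.50, 0.80):
    layer = nn.Linear(100, 10)
    prune.l1_unstructured(layer, name='weight', amount=amount)
    sparsity = torch.sum(layer.weight == 0).item() / layer.weight.numel()
    print(f"amount={amount:.2f} -> sparsity={sparsity:.2f}")
```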

Metrics · advanced
Evaluating Quantized Model Accuracy

After applying dynamic quantization to a PyTorch LSTM model, you observe the following accuracies on the test set:

  • Original model accuracy: 92%
  • Quantized model accuracy: 89%

What is the best interpretation of this result?

A. The quantized model should have the same accuracy; the difference means the test data changed.
B. Dynamic quantization always improves accuracy, so this result is unexpected and indicates a bug.
C. Quantization caused the model to overfit, increasing accuracy on training but decreasing on test.
D. Dynamic quantization caused a small accuracy drop but improved inference speed and reduced model size.
💡 Hint

Quantization trades off some accuracy for efficiency.
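The trade-off can be observed directly: dynamically quantizing an LSTM changes its outputs slightly (the source of the small accuracy drop) while keeping the interface identical. The tiny model below is a hypothetical sketch, not the one from the question:

```python
import torch
import torch.nn as nn

class TinyLSTM(nn.Module):
    """Hypothetical minimal LSTM model for illustration."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=32, hidden_size=64)

    def forward(self, x):
        out, _ = self.lstm(x)
        return out

model = TinyLSTM()
# Dynamic quantization: LSTM weights are stored as int8, and activations
# are quantized on the fly at inference time.
q_model = torch.ao.quantization.quantize_dynamic(
    model, {nn.LSTM}, dtype=torch.qint8
)

x = torch.randn(5, 3, 32)  # (seq_len, batch, features)
out_fp32 = model(x)
out_int8 = q_model(x)

# Same shape, slightly different values: int8 arithmetic introduces small
# numerical error, which is why test accuracy can dip a little.
max_diff = (out_fp32 - out_int8).abs().max().item()
print(out_fp32.shape == out_int8.shape)  # True
```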

🔧 Debug · expert
Identifying Error in Pruning Application Code

What error will the following PyTorch code raise when trying to prune a model's convolutional layer?

PyTorch
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class ConvModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, bias=False)

model = ConvModel()
prune.l1_unstructured(model.conv, name='bias', amount=0.3)
A. No error; pruning applied successfully
B. AttributeError: 'Conv2d' object has no attribute 'bias_mask'
C. ValueError: Cannot prune parameter 'bias' because it does not exist or is not a tensor
D. RuntimeError: Pruning amount must be between 0 and 1
💡 Hint

Check if the layer has a bias parameter by default.
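A quick way to check the hint's premise, in a standalone sketch separate from the question's code:

```python
import torch.nn as nn

# With bias=False, Conv2d registers `bias` as None rather than as a
# Parameter, so a pruning call targeting name='bias' has nothing to mask.
conv = nn.Conv2d(3, 16, 3, bias=False)
print(conv.bias is None)  # True

# By default (bias=True), the bias parameter exists and could be pruned.
conv_with_bias = nn.Conv2d(3, 16, 3)
print(conv_with_bias.bias is not None)  # True
```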