PyTorch · ~10 mins

Model optimization (quantization, pruning) in PyTorch - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to apply dynamic quantization to a PyTorch model.

PyTorch
import torch
import torch.nn as nn

model = nn.Linear(10, 5)
quantized_model = torch.quantization.[1](model, {nn.Linear})
A. quantize_static
B. quantize_dynamic
C. quantize_per_tensor
D. quantize_per_channel
Common Mistakes
Using static quantization function instead of dynamic.
Passing wrong module types to the quantization function.
Task 2: Fill in the blank (medium)

Complete the code to prune 20% of the weights in the first linear layer using L1 unstructured pruning.

PyTorch
import torch.nn.utils.prune as prune

prune.l1_unstructured(model.[1], name='weight', amount=0.2)
A. bias
B. linear1
C. layer1
D. weight
Common Mistakes
Trying to prune the 'bias' parameter instead of 'weight'.
Using the layer name instead of the parameter name.
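A runnable sketch of the completed answer, assuming a model with a `linear1` attribute as in the exercise (the surrounding model definition is hypothetical):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(10, 5)

model = Net()

# Prune 20% of linear1's weights, selected by smallest L1 magnitude.
# Note: `name` is the parameter name ('weight'), not the layer name.
prune.l1_unstructured(model.linear1, name='weight', amount=0.2)

# Pruning reparameterizes the layer: 'weight_orig' holds the raw values
# and the buffer 'weight_mask' zeros out the pruned entries.
zeros = int((model.linear1.weight == 0).sum())
print(zeros, "of", model.linear1.weight.numel(), "weights pruned")
```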
Task 3: Fill in the blank (hard)

Fix the error in the code to remove pruning reparameterization from the linear layer.

PyTorch
prune.[1](model.linear, 'weight')
A. remove
B. unprune
C. clear
D. reset
Common Mistakes
Using non-existent functions like 'unprune' or 'clear'.
Trying to delete the layer instead of removing pruning.
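A sketch of the full cycle: `prune.remove` makes the sparsity permanent by folding `weight_orig * weight_mask` back into a plain `weight` parameter (the standalone layer here is illustrative):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(10, 5)
prune.l1_unstructured(layer, name='weight', amount=0.5)
# While pruned, the layer is reparameterized via 'weight_orig'.
assert 'weight_orig' in dict(layer.named_parameters())

# Strip the reparameterization; 'weight' becomes a regular parameter
# again, but the pruned zeros are kept.
prune.remove(layer, 'weight')
print('weight_orig' in dict(layer.named_parameters()))
```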
Task 4: Fill in the blank (hard)

Fill both blanks to apply global unstructured pruning to two layers with 30% sparsity.

PyTorch
parameters_to_prune = [(model.fc1, 'weight'), (model.fc2, 'weight')]
prune.[1](parameters_to_prune, pruning_method=prune.[2], amount=0.3)
A. global_unstructured
B. l1_unstructured
C. random_unstructured
D. structured
Common Mistakes
Using structured pruning method with global pruning function.
Mixing pruning methods and functions incorrectly.
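A runnable sketch, assuming a model with `fc1`/`fc2` as in the exercise. One detail worth noting: `global_unstructured` takes the pruning method *class* (`prune.L1Unstructured`), not the per-module convenience function `prune.l1_unstructured`:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 10)  # 100 weights
        self.fc2 = nn.Linear(10, 5)   # 50 weights

model = Net()
parameters_to_prune = [(model.fc1, 'weight'), (model.fc2, 'weight')]

# Global pruning ranks all listed weights together, so sparsity may be
# distributed unevenly across layers while totaling 30% overall.
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.3,
)

total_zeros = int((model.fc1.weight == 0).sum() + (model.fc2.weight == 0).sum())
print(total_zeros, "of 150 weights pruned globally")
```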
Task 5: Fill in the blank (hard)

Fill all three blanks to set the model's qconfig, prepare it for static quantization, and convert it.

PyTorch
model.[1] = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.[2](model, inplace=True)
quantized_model = torch.quantization.[3](model)
A. qconfig
B. prepare
C. convert
D. calibrate
Common Mistakes
Confusing 'calibrate' with 'prepare' or 'convert'.
Skipping the prepare step before convert.
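A sketch of the full static-quantization flow (qconfig → prepare → calibrate → convert). The model definition and backend fallback are illustrative: `QuantStub`/`DeQuantStub` mark where tensors enter and leave the quantized region, and the `'fbgemm'` backend is only available on x86, so this falls back to `'qnnpack'` elsewhere:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.fc = nn.Linear(10, 5)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

model = Net().eval()

# Pick a backend this machine supports ('fbgemm' on x86 servers).
engine = 'fbgemm' if 'fbgemm' in torch.backends.quantized.supported_engines else 'qnnpack'
torch.backends.quantized.engine = engine

model.qconfig = torch.quantization.get_default_qconfig(engine)
torch.quantization.prepare(model, inplace=True)   # insert observers

# Calibration: run representative data so observers record value ranges.
with torch.no_grad():
    model(torch.randn(8, 10))

quantized_model = torch.quantization.convert(model)  # swap in int8 modules
out = quantized_model(torch.randn(1, 10))
print(type(quantized_model.fc), out.shape)
```

Skipping the calibration pass between `prepare` and `convert` leaves the observers with no recorded ranges, which is exactly the mistake the task warns about.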