PyTorch · ~20 mins

Custom transforms in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
Problem 1 · Predict Output (intermediate)
Output of a custom PyTorch transform on a tensor
Given the following custom transform class and input tensor, what is the output tensor after applying the transform?
PyTorch
import torch
class AddScalar:
    def __init__(self, scalar):
        self.scalar = scalar
    def __call__(self, x):
        return x + self.scalar

transform = AddScalar(3)
tensor = torch.tensor([1, 2, 3])
output = transform(tensor)
print(output)
A. tensor([4, 5, 6])
B. tensor([3, 3, 3])
C. tensor([1, 2, 3])
D. tensor([6, 7, 8])
💡 Hint
Remember the transform adds the scalar 3 to each element of the tensor.
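The mechanics can be checked with a torch-free sketch of the same callable-class pattern (a plain Python list stands in for the tensor; with torch, `+` broadcasts elementwise the same way):

```python
class AddScalar:
    """Callable transform: stores the scalar in __init__, applies it in __call__."""
    def __init__(self, scalar):
        self.scalar = scalar

    def __call__(self, x):
        # Elementwise addition; torch.Tensor + int broadcasts the same way.
        return [v + self.scalar for v in x]

transform = AddScalar(3)
print(transform([1, 2, 3]))  # [4, 5, 6]
```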
Problem 2 · Model Choice (intermediate)
Choosing the correct custom transform for image normalization
You want to create a custom PyTorch transform that normalizes an image tensor by subtracting the mean and dividing by the standard deviation. Which class correctly implements this?
A.
class Normalize:
    def __init__(self, mean, std):
        self.mean = mean
        self.std = std
    def __call__(self, x):
        return (x - self.mean) / self.std
B.
class Normalize:
    def __init__(self, mean, std):
        self.mean = mean
        self.std = std
    def __call__(self, x):
        return x + self.mean - self.std
C.
class Normalize:
    def __init__(self, mean, std):
        self.mean = mean
        self.std = std
    def __call__(self, x):
        return x * self.std + self.mean
D.
class Normalize:
    def __init__(self, mean, std):
        self.mean = mean
        self.std = std
    def __call__(self, x):
        return (x + self.mean) / self.std
💡 Hint
Normalization means subtracting the mean and dividing by the standard deviation.
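A quick torch-free check of the normalization pattern (plain floats stand in for a tensor; the mean and std values here are hypothetical):

```python
class Normalize:
    """Subtract the mean, then divide by the standard deviation."""
    def __init__(self, mean, std):
        self.mean = mean
        self.std = std

    def __call__(self, x):
        return [(v - self.mean) / self.std for v in x]

# With mean=2.0 and std=2.0, values are centered at 0 and rescaled.
norm = Normalize(2.0, 2.0)
print(norm([0.0, 2.0, 4.0]))  # [-1.0, 0.0, 1.0]
```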
Problem 3 · Hyperparameter (advanced)
Effect of changing scalar in a custom transform
You have a custom transform that multiplies input tensors by a scalar value. If you increase the scalar from 2 to 5, what is the expected effect on model training when this transform is applied to the input data?
A. The input values become negative, causing the model to fail training.
B. The input values become smaller, which slows down training and reduces model accuracy.
C. The input values remain unchanged, so training is unaffected.
D. The input values become larger, which may cause the model to learn faster but risks numerical instability.
💡 Hint
Multiplying inputs by a larger scalar increases their magnitude.
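The hint can be made concrete with a torch-free sketch: the same input scaled by 2 versus 5 (larger input magnitudes generally mean larger activations and gradients downstream):

```python
class Multiply:
    """Scale every element by a fixed factor."""
    def __init__(self, factor):
        self.factor = factor

    def __call__(self, x):
        return [v * self.factor for v in x]

data = [1, 2, 3]
print(Multiply(2)(data))  # [2, 4, 6]
print(Multiply(5)(data))  # [5, 10, 15] -- larger inputs, larger gradients
```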
Problem 4 · 🔧 Debug (advanced)
Identify the error in this custom transform code
What error will this custom transform code raise when applied to a tensor?
PyTorch
import torch
class Multiply:
    def __init__(self, factor):
        self.factor = factor
    def __call__(self, x):
        return x * self.factor

transform = Multiply(4)
tensor = torch.tensor([1, 2, 3])
output = transform(tensor)
print(output)
A. No error; the output is tensor([4, 8, 12])
B. NameError: name 'factor' is not defined
C. AttributeError: 'Multiply' object has no attribute 'factor'
D. TypeError: unsupported operand type(s) for *: 'int' and 'Multiply'
💡 Hint
Check variable usage inside the __call__ method.
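One way to check the name resolution yourself: the factor is stored as an instance attribute in `__init__` and read back through `self` in `__call__`, so every name resolves (torch-free sketch, with a list in place of the tensor):

```python
class Multiply:
    def __init__(self, factor):
        self.factor = factor  # stored on the instance

    def __call__(self, x):
        # self.factor resolves via the instance; no NameError or AttributeError.
        return [v * self.factor for v in x]

transform = Multiply(4)
print(hasattr(transform, "factor"))  # True
print(transform([1, 2, 3]))          # [4, 8, 12]
```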
Problem 5 · 🧠 Conceptual (expert)
Why use custom transforms instead of built-in ones?
Which of the following is the best reason to create a custom transform in PyTorch instead of using built-in transforms?
A. To avoid using PyTorch's DataLoader class.
B. Because built-in transforms are slower and less efficient.
C. To implement a unique data preprocessing step not available in built-in transforms.
D. Because custom transforms automatically improve model accuracy.
💡 Hint
Think about when you need something special that built-in tools don't offer.
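A minimal sketch of the usual motivation: a preprocessing step with no built-in equivalent, written as a callable class so it composes with other transforms. `Compose` here is a hypothetical torch-free stand-in for torchvision.transforms.Compose, and `ClipOutliers` is an invented custom step:

```python
class Compose:
    """Chain transforms left to right (stand-in for torchvision.transforms.Compose)."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, x):
        for t in self.transforms:
            x = t(x)
        return x

class ClipOutliers:
    """Hypothetical custom step: clamp every value into [lo, hi]."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __call__(self, x):
        return [min(max(v, self.lo), self.hi) for v in x]

class AddScalar:
    """Simple built-in-style step for comparison."""
    def __init__(self, scalar):
        self.scalar = scalar

    def __call__(self, x):
        return [v + self.scalar for v in x]

# Custom and generic steps compose interchangeably because both are callables.
pipeline = Compose([ClipOutliers(0, 10), AddScalar(1)])
print(pipeline([-5, 3, 42]))  # [1, 4, 11]
```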