What is the main reason PyTorch uses nn.Module to organize model code?
Think about what nn.Module helps manage inside a model.
nn.Module helps organize model layers and parameters, making it easy to move the model between devices and switch modes (training/evaluation).
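The answer above can be sketched with a minimal, hypothetical model (the module names here are illustrative, not from the question):

```python
import torch
import torch.nn as nn

# Because nn.Module registers every layer, one call affects all of them.
model = nn.Sequential(nn.Linear(4, 8), nn.Dropout(0.5), nn.Linear(8, 2))

model.train()          # enables training-mode behavior (e.g. dropout active)
model.eval()           # disables dropout for evaluation
model.to("cpu")        # moves all parameters and buffers in one call
print(model.training)  # False after eval()
```

Without nn.Module, each layer's mode and device would have to be managed by hand.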
What will be the output of the following code?
import torch
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(10, 5)
        self.linear2 = nn.Linear(5, 2)

    def forward(self, x):
        x = self.linear1(x)
        return self.linear2(x)

model = SimpleModel()
print(sum(p.numel() for p in model.parameters()))
For each nn.Linear layer, count weights and biases: weights = input_dim * output_dim, biases = output_dim.
First layer: 10*5=50 weights + 5 biases = 55; second layer: 5*2=10 weights + 2 biases = 12; total = 67.
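The per-layer breakdown can be verified directly with named_parameters(); note that nn.Linear stores its weight with shape (output_dim, input_dim):

```python
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(10, 5)
        self.linear2 = nn.Linear(5, 2)

model = SimpleModel()
for name, p in model.named_parameters():
    # linear1.weight (5, 10) -> 50, linear1.bias (5,) -> 5
    # linear2.weight (2, 5) -> 10, linear2.bias (2,) -> 2
    print(name, tuple(p.shape), p.numel())

total = sum(p.numel() for p in model.parameters())
print(total)  # 67
```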
Which of the following is the best reason to use nn.Module when building a complex neural network?
Think about how complex models are built from smaller parts.
nn.Module supports nesting submodules and tracks all parameters, which is essential for complex models.
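A minimal sketch of that nesting (the Block and Net names are illustrative): submodules assigned in __init__, including those inside an nn.ModuleList, are registered automatically, so parameters() recurses through the whole tree.

```python
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.fc(x))

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ModuleList (unlike a plain Python list) registers each Block.
        self.blocks = nn.ModuleList([Block(8) for _ in range(3)])
        self.head = nn.Linear(8, 2)

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return self.head(x)

net = Net()
# 3 blocks * (weight + bias) + head's (weight + bias) = 8 tensors
print(len(list(net.parameters())))  # 8
```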
How does organizing model code with nn.Module affect training metrics like loss and accuracy?
Consider what nn.Module controls versus what training algorithms do.
nn.Module organizes code and parameters but does not by itself change loss or accuracy; those depend on the data, loss function, and optimizer.
What error will occur if you forget to inherit from nn.Module in a PyTorch model class?
import torch.nn as nn

class MyModel:
    def __init__(self):
        self.linear = nn.Linear(10, 5)

    def forward(self, x):
        return self.linear(x)

model = MyModel()
print(list(model.parameters()))
Think about which class defines the parameters() method.
Without inheriting from nn.Module, the class never gains the parameters() method, so calling model.parameters() raises an AttributeError.
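A corrected sketch of the class above: inheriting from nn.Module (and calling super().__init__() before assigning submodules) is what registers the layer and provides parameters().

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):  # inherit so submodules are registered
    def __init__(self):
        super().__init__()  # must run before assigning submodules
        self.linear = nn.Linear(10, 5)

    def forward(self, x):
        return self.linear(x)

model = MyModel()
print(len(list(model.parameters())))  # 2: the layer's weight and bias
```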