Complete the code to add L2 regularization (weight decay) to the optimizer.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=[1])
L2 regularization is added via the weight_decay parameter in PyTorch optimizers. A small positive value like 0.001 helps control overfitting by penalizing large weights.
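The effect of weight_decay can be sketched in plain Python: SGD with (coupled) weight decay adds weight_decay * w to each parameter's gradient before the update. The weights, gradients, and learning rate below are made-up values for illustration only.

```python
# Hypothetical parameter values and gradients, for illustration only.
weights = [0.5, -1.2, 0.3]
grads = [0.1, -0.2, 0.05]
lr, weight_decay = 0.01, 0.001

# Weight decay adds weight_decay * w to each gradient before the
# SGD step, which shrinks large weights toward zero over time.
new_weights = [w - lr * (g + weight_decay * w)
               for w, g in zip(weights, grads)]
print(new_weights)
```

Note that this penalizes large weights in both directions: a large negative weight is pushed up toward zero just as a large positive one is pushed down.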
Complete the code to apply dropout with 50% probability during training.
dropout_layer = torch.nn.Dropout(p=[1])
Dropout randomly disables neurons during training to prevent overfitting. A dropout probability of 0.5 means each neuron is dropped with probability 0.5, so on average half the neurons are disabled each forward pass.
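The masking idea behind dropout can be sketched with Python's random module (a toy stand-in, not the real torch API). Like torch.nn.Dropout, this uses "inverted" dropout: survivors are scaled up so the expected magnitude is unchanged.

```python
import random

def dropout(values, p=0.5, seed=0):
    # Each value is zeroed with probability p; survivors are scaled
    # by 1 / (1 - p) so the expected magnitude is unchanged
    # (the same "inverted dropout" scaling torch.nn.Dropout uses).
    rng = random.Random(seed)
    return [0.0 if rng.random() < p else v / (1 - p) for v in values]

out = dropout([1.0, 2.0, 3.0, 4.0], p=0.5)
print(out)  # with seed 0: [2.0, 4.0, 0.0, 0.0]
```

Because of the 1 / (1 - p) scaling during training, no rescaling is needed at inference time.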
Fix the error in the training loop to correctly apply dropout only during training.
model.train()
for data, target in train_loader:
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
model.[1]()
After training, calling model.eval() sets the model to evaluation mode, disabling dropout and batch norm effects for consistent predictions.
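The train/eval switch can be sketched with a toy layer holding a training flag (a simplified stand-in for the real PyTorch mechanism): in eval mode, dropout becomes an identity function, so predictions are deterministic.

```python
import random

class ToyDropout:
    # Mimics the train/eval switch: dropout is active only in
    # training mode; in eval mode the layer passes inputs through.
    def __init__(self, p=0.5):
        self.p = p
        self.training = True

    def eval(self):
        self.training = False

    def __call__(self, values, seed=0):
        if not self.training:
            return list(values)  # eval mode: identity, no randomness
        rng = random.Random(seed)
        return [0.0 if rng.random() < self.p else v / (1 - self.p)
                for v in values]

layer = ToyDropout(p=0.5)
layer.eval()
print(layer([1.0, 2.0, 3.0]))  # [1.0, 2.0, 3.0] -- dropout disabled
```

Forgetting model.eval() before inference leaves dropout active, producing noisy and irreproducible predictions.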
Fill both blanks to create a dictionary comprehension that stores squared values for even numbers only.
squares = {x: x[1]2 for x in range(1, 11) if x [2] 2 == 0}
Common wrong answers: // instead of % for modulus; + instead of ** for exponentiation.
The ** operator squares the number. The % operator checks whether the number is even by testing for a remainder of 0 when divided by 2.
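With both blanks filled in (** for the first, % for the second), the completed comprehension runs as follows:

```python
# ** squares x; x % 2 == 0 keeps only the even numbers in 1..10.
squares = {x: x ** 2 for x in range(1, 11) if x % 2 == 0}
print(squares)  # {2: 4, 4: 16, 6: 36, 8: 64, 10: 100}
```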
Fill all three blanks to create a filtered dictionary with uppercase keys and values greater than 5.
filtered = {[1]: [2] for k, v in data.items() if v [3] 5}
Common wrong answers: k.lower() instead of k.upper() for uppercase keys; < instead of > for filtering.
Keys are converted to uppercase with k.upper(), values are kept as v, and the filter keeps items whose value is greater than 5 using >.
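With all three blanks filled in (k.upper(), v, and >), the comprehension works as shown below; the data dict here is a made-up sample for demonstration.

```python
# Sample input (hypothetical, for demonstration only).
data = {"apples": 3, "bananas": 8, "cherries": 12}

# Uppercase each key, keep the value, keep only items with value > 5.
filtered = {k.upper(): v for k, v in data.items() if v > 5}
print(filtered)  # {'BANANAS': 8, 'CHERRIES': 12}
```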