Complete the code to set the model to evaluation mode before validation.
model.[1]()

Calling model.eval() sets the model to evaluation mode, which disables dropout and stops batch normalization from updating its running statistics during validation.
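As a quick sketch of the answer in action (the layer sizes and dropout rate here are arbitrary, chosen only to illustrate the mode switch):

```python
import torch.nn as nn

# Illustrative model containing a dropout layer, which behaves
# differently in training mode vs. evaluation mode.
model = nn.Sequential(nn.Linear(4, 2), nn.Dropout(p=0.5))

model.eval()           # recursively puts every submodule in eval mode
print(model.training)  # False: dropout now passes inputs through unchanged
```

Note that model.eval() toggles a flag on every submodule, so the single call covers the whole network.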
Complete the code to disable gradient calculation during validation.
with torch.[1]():
    # validation code here
Using torch.no_grad() disables gradient calculation, which saves memory and computation during validation.
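A minimal sketch of the effect (the tensor here is a stand-in for any validation computation):

```python
import torch

x = torch.ones(3, requires_grad=True)
with torch.no_grad():
    y = x * 2              # no autograd graph is recorded inside the block
print(y.requires_grad)     # False: y is detached from the graph
```

Because no graph is built, intermediate activations are not kept around for backward, which is where the memory savings come from.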
Fix the error in the validation loop to correctly accumulate the total loss.
total_loss = 0
for inputs, labels in val_loader:
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    total_loss [1] loss.item()
Use total_loss += loss.item() to add each batch's loss to the total loss.
Fill both blanks to compute the average validation loss after the loop.
avg_loss = total_loss [1] len([2])
Divide the total loss by the number of batches in val_loader to get the average loss.
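Putting the last two answers together, a self-contained sketch (the hardcoded batches below are hypothetical stand-ins for the (outputs, labels) pairs a real val_loader would yield):

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()
# Hypothetical stand-ins for (model(inputs), labels) pairs from val_loader.
batches = [(torch.zeros(2), torch.ones(2)), (torch.ones(2), torch.ones(2))]

total_loss = 0.0
for outputs, labels in batches:
    loss = criterion(outputs, labels)
    total_loss += loss.item()   # .item() converts the 0-dim loss tensor to a float

avg_loss = total_loss / len(batches)   # len(val_loader) in the real loop
print(avg_loss)  # 0.5: (1.0 + 0.0) / 2
```

Using loss.item() rather than the tensor itself keeps the running sum as a plain Python float, avoiding an ever-growing autograd history.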
Fill all three blanks to complete the validation loop with accuracy calculation.
correct = 0
total = 0
with torch.no_grad():
    for inputs, labels in [1]:
        outputs = model(inputs)
        _, predicted = torch.max(outputs.data, [2])
        total += labels.size([3])
        correct += (predicted == labels).sum().item()
Use val_loader to loop over the validation data, 1 as the dimension for the max over class scores, and 0 to select the batch dimension from labels.size().
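A complete runnable version of the filled-in loop, using a synthetic dataset in place of a real validation set (the model, feature count, and class count are arbitrary assumptions for the sketch):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical setup: a linear classifier over 10 features and 3 classes.
torch.manual_seed(0)
model = nn.Linear(10, 3)
features = torch.randn(20, 10)
labels = torch.randint(0, 3, (20,))
val_loader = DataLoader(TensorDataset(features, labels), batch_size=5)

model.eval()
correct = 0
total = 0
with torch.no_grad():
    for inputs, targets in val_loader:
        outputs = model(inputs)
        _, predicted = torch.max(outputs, 1)   # argmax over class scores
        total += targets.size(0)               # batch size
        correct += (predicted == targets).sum().item()

accuracy = correct / total
```

With an untrained model the accuracy will hover around chance; the point is the loop structure, which combines model.eval(), torch.no_grad(), and the per-batch accuracy bookkeeping from the earlier items.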