PyTorch | ML | ~10 mins

Training and validation loss tracking in PyTorch - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to initialize the training loss list.

PyTorch
train_losses = [1]
A. {}
B. []
C. 0
D. None
Common Mistakes
Using a dictionary {} instead of a list.
Initializing with 0 or None, which cannot store multiple values.
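With blank 1 filled in as [], the line creates an empty list that can collect one average loss per epoch. A minimal sketch (the appended values are made up for illustration):

```python
# Blank 1 filled in: an empty list, one slot per epoch
train_losses = []
validation_losses = []  # same pattern for validation

# A list can accumulate a new average loss each epoch:
train_losses.append(0.9)
train_losses.append(0.7)
print(train_losses)  # [0.9, 0.7]
```

A dict {} would need keys, and 0 or None can only hold a single value, so neither can accumulate a per-epoch history.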
Task 2: Fill in the blank (medium)

Complete the code to calculate the average training loss for an epoch.

PyTorch
epoch_train_loss = sum(train_losses) / [1]
A. len(train_losses)
B. max(train_losses)
C. min(train_losses)
D. train_losses
Common Mistakes
Dividing by the maximum or minimum loss instead of the count.
Dividing by the list itself, which raises a TypeError.
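With blank 1 filled in as len(train_losses), the line computes a mean: the sum of all recorded losses divided by how many there are. A small sketch with made-up values:

```python
# Made-up per-batch losses for one epoch
train_losses = [1.0, 2.0, 6.0]

# Blank 1 filled in: divide the sum by the count to get the mean
epoch_train_loss = sum(train_losses) / len(train_losses)
print(epoch_train_loss)  # 3.0

# Why the other options fail:
# sum(train_losses) / max(train_losses)  -> 1.5, a ratio, not a mean
# sum(train_losses) / train_losses       -> TypeError: can't divide by a list
```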
Task 3: Fill in the blank (hard)

Fix the error in the code to append validation loss after each epoch.

PyTorch
validation_losses.[1](val_loss)
A. add
B. insert
C. append
D. extend
Common Mistakes
Using add, which is a set method, not a list method.
Using extend, which expects an iterable, not a single value.
Using insert, which requires an index before the value.
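With blank 1 filled in as append, the line adds one validation loss to the end of the list each epoch. A minimal sketch (the 0.42 value is illustrative):

```python
validation_losses = []
val_loss = 0.42  # made-up epoch value

validation_losses.append(val_loss)  # blank filled in: append adds one item
print(validation_losses)  # [0.42]

# Why the other options fail:
# validation_losses.add(val_loss)     -> AttributeError (.add belongs to set)
# validation_losses.extend(val_loss)  -> TypeError (extend needs an iterable)
# validation_losses.insert(val_loss)  -> TypeError (insert needs an index first)
```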
Task 4: Fill in the blanks (hard)

Fill both blanks to compute and store average training and validation losses.

PyTorch
avg_train_loss = sum(train_losses) [1] len(train_losses)
avg_val_loss = sum(validation_losses) [2] len(validation_losses)
A. /
B. *
C. -
D. +
Common Mistakes
Using multiplication or addition instead of division.
Using subtraction, which gives incorrect results.
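Both blanks take the division operator /, since each line computes a mean over its list. A sketch with made-up per-epoch values:

```python
# Made-up per-epoch losses for illustration
train_losses = [2.0, 4.0, 6.0]
validation_losses = [3.0, 5.0, 7.0]

avg_train_loss = sum(train_losses) / len(train_losses)          # blank 1: /
avg_val_loss = sum(validation_losses) / len(validation_losses)  # blank 2: /
print(avg_train_loss, avg_val_loss)  # 4.0 5.0
```

Any other operator (*, -, +) would run without error but produce a number that is not an average, which is why this mistake can be hard to spot.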
Task 5: Fill in the blanks (hard)

Fill all three blanks to track losses during training and validation phases.

PyTorch
for epoch in range(num_epochs):
    model.train()
    train_loss = 0
    for data, target in train_loader:
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.[1]()
        optimizer.step()
        train_loss += loss.item()
    train_losses.[2](train_loss / len(train_loader))

    model.eval()
    val_loss = 0
    with torch.no_grad():
        for data, target in val_loader:
            output = model(data)
            loss = criterion(output, target)
            val_loss += loss.item()
    validation_losses.[3](val_loss / len(val_loader))
A. backward
B. append
C. step
Common Mistakes
Using step instead of backward to compute gradients.
Using step or other non-list methods instead of append to record losses.
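Here is a runnable sketch of the completed loop. The model, optimizer, and data loaders are illustrative stand-ins (a toy linear model with random tensors), not part of the original task; only the three filled-in blanks (backward, append, append) are the actual answers:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical setup so the loop runs end to end:
model = nn.Linear(4, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
train_loader = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(5)]
val_loader = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(2)]

num_epochs = 3
train_losses, validation_losses = [], []

for epoch in range(num_epochs):
    model.train()
    train_loss = 0.0
    for data, target in train_loader:
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()                 # blank 1: compute gradients
        optimizer.step()                # then update weights
        train_loss += loss.item()
    train_losses.append(train_loss / len(train_loader))   # blank 2

    model.eval()
    val_loss = 0.0
    with torch.no_grad():               # no gradients during validation
        for data, target in val_loader:
            output = model(data)
            loss = criterion(output, target)
            val_loss += loss.item()
    validation_losses.append(val_loss / len(val_loader))  # blank 3
```

After training, both lists hold one average loss per epoch, ready for plotting or early-stopping checks.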