Complete the code to create a DataLoader for the dataset with batch size 4.
from torch.utils.data import DataLoader
dataloader = DataLoader(dataset, batch_size=[1])
The batch size controls how many samples are loaded per batch. Here, batch_size=4 means each batch has 4 samples.
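A minimal runnable sketch of the filled-in answer, using a hypothetical toy TensorDataset (the features, labels, and sizes here are illustrative, not from the exercise):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy dataset: 12 samples, 3 features each, with integer labels.
features = torch.randn(12, 3)
labels = torch.arange(12)
dataset = TensorDataset(features, labels)

# batch_size=4 -> each batch holds 4 samples, so 12 samples yield 3 batches.
dataloader = DataLoader(dataset, batch_size=4)

batches = list(dataloader)
print(len(batches))                  # 3
print(tuple(batches[0][0].shape))    # (4, 3): 4 samples per batch, 3 features
```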
Complete the code to shuffle the dataset in the DataLoader.
dataloader = DataLoader(dataset, batch_size=4, shuffle=[1])
Setting shuffle=True randomizes the order of samples at the start of each epoch, which reduces ordering bias and generally improves training.
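A small sketch of the shuffling behavior, assuming a toy single-tensor TensorDataset; the seeded generator is an illustrative extra so the shuffled order is reproducible:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10))

# Seed the shuffle so this sketch is reproducible run to run.
g = torch.Generator().manual_seed(0)
loader = DataLoader(dataset, batch_size=10, shuffle=True, generator=g)

# One batch of all 10 samples, in shuffled order.
(order,) = next(iter(loader))
print(order.tolist())  # a permutation of 0..9
```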
Fix the error in the code to correctly iterate over the DataLoader batches.
for [1] in dataloader:
    inputs, labels = batch
    print(inputs.shape, labels.shape)
Each iteration yields one batch, so the loop variable should be 'batch'; it can then be unpacked into inputs and labels.
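The corrected loop in context, using a hypothetical dataset of 8 samples with 2 features each so the printed shapes are concrete:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

inputs_all = torch.randn(8, 2)
labels_all = torch.zeros(8, dtype=torch.long)
dataloader = DataLoader(TensorDataset(inputs_all, labels_all), batch_size=4)

for batch in dataloader:        # each batch is an (inputs, labels) pair
    inputs, labels = batch      # unpack the pair
    print(inputs.shape, labels.shape)  # torch.Size([4, 2]) torch.Size([4])
```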
Fill both blanks to create a DataLoader with batch size 8 and no shuffling.
dataloader = DataLoader(dataset, batch_size=[1], shuffle=[2])
Batch size 8 loads 8 samples per batch. shuffle=False keeps the samples in their original order every epoch.
Fill all three blanks to create a DataLoader with batch size 16, shuffling enabled, and 2 worker processes.
dataloader = DataLoader(dataset, batch_size=[1], shuffle=[2], num_workers=[3])
Batch size 16 loads 16 samples per batch. shuffle=True randomizes data order. num_workers=2 uses two background worker processes (not threads) to load batches in parallel, which can speed up data loading.
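The fully filled-in call, sketched with a hypothetical 32-sample dataset (calling len on the loader does not start the worker processes, so it is a cheap sanity check):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(32, 5), torch.randint(0, 2, (32,)))

# num_workers=2 loads batches in two background worker processes;
# num_workers=0 would load everything in the main process (handy for debugging).
dataloader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=2)

print(len(dataloader))  # 2: 32 samples split into batches of 16
```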