Experiment - num_workers for parallel data loading
Problem: You are training a PyTorch model on image data. Data loading is slow, leaving the GPU idle while it waits for batches. Currently the DataLoader uses num_workers=0, meaning all loading happens synchronously in the main process.
Current Metrics: Training time per epoch: 120 seconds; GPU utilization: 40%; Validation accuracy: 85%
Issue: Data loading is the bottleneck, throttling training throughput and keeping GPU utilization low.
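A minimal sketch of the proposed change: raising num_workers so batches are prepared in parallel worker processes while the GPU trains. The dataset, batch size, and worker count below are illustrative placeholders, not values from the experiment; a common starting heuristic is num_workers equal to the number of CPU cores (or a small multiple of the number of GPUs), then tuning empirically.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset: 256 fake "images" of shape 3x32x32 with integer labels
# (hypothetical sizes for illustration only).
images = torch.randn(256, 3, 32, 32)
labels = torch.randint(0, 10, (256,))
dataset = TensorDataset(images, labels)

loader = DataLoader(
    dataset,
    batch_size=32,
    shuffle=True,
    num_workers=2,            # worker processes load/transform batches in parallel
    pin_memory=True,          # page-locked host memory speeds host-to-GPU copies
    persistent_workers=True,  # keep workers alive across epochs (needs num_workers > 0)
)

# Pull one batch to confirm the pipeline works end to end.
batch_images, batch_labels = next(iter(loader))
print(tuple(batch_images.shape))  # (32, 3, 32, 32)
```

Note that on platforms using the spawn start method (Windows, macOS), DataLoader code with num_workers > 0 should live under an `if __name__ == "__main__":` guard, and each worker re-imports the script; the epoch-time and GPU-utilization metrics above would be re-measured after this change to quantify the speedup.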