Experiment - Why DataLoader handles batching and shuffling
Problem: You load training data for a neural network one sample at a time, with no shuffling or batching. This slows training and hurts model generalization.
Current Metrics: Training loss decreases slowly, and validation accuracy is low at 60%.
Issue: Because the data is neither batched nor shuffled, training is inefficient and the model overfits to the sample order.
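A minimal sketch of the fix, assuming PyTorch and a hypothetical toy dataset of 100 samples with 10 features each: wrapping the tensors in a `TensorDataset` and iterating through a `DataLoader` with `batch_size` and `shuffle=True` replaces the one-sample-at-a-time loop with shuffled mini-batches.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Hypothetical toy dataset: 100 samples, 10 features, binary labels
features = torch.randn(100, 10)
labels = torch.randint(0, 2, (100,))
dataset = TensorDataset(features, labels)

# Batched, shuffled loading instead of one ordered sample at a time
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for batch_features, batch_labels in loader:
    # Each iteration yields up to 32 samples; order is reshuffled each epoch
    pass
```

With 100 samples and `batch_size=32`, each epoch yields four batches (three of 32 and a final partial batch of 4), and `shuffle=True` draws a fresh permutation every epoch, which breaks any ordering in the stored data.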