Experiment - no_grad context manager
Problem: You have a PyTorch model that you want to use for predictions on new data without updating its weights. Currently, your code computes gradients during prediction, which wastes memory and slows down inference.
Current Metrics: During prediction, memory usage is high and inference is slow because gradients are computed unnecessarily.
Issue: The model builds an autograd graph during inference, causing inefficient memory use and slower prediction speed.
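A minimal sketch of the fix: wrap the forward pass in `torch.no_grad()` so PyTorch skips building the autograd graph. The model and input shapes below are hypothetical placeholders for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical model standing in for the real one
model = nn.Linear(10, 2)
model.eval()  # also disable training-only layers (dropout, batchnorm updates)

x = torch.randn(4, 10)

# Without no_grad: autograd tracks every operation, storing intermediates
y_tracked = model(x)
print(y_tracked.requires_grad)  # True - a graph was built

# With no_grad: no graph is built, saving memory and speeding up inference
with torch.no_grad():
    y = model(x)
print(y.requires_grad)  # False - safe for pure prediction
```

Calling `model.eval()` and using `torch.no_grad()` address different things: `eval()` changes layer behavior (e.g. dropout), while `no_grad()` disables gradient tracking; inference code typically needs both.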