Which statement best describes the main difference in how TensorFlow and PyTorch execute operations?
Think about when the computation graph is built and executed in each framework.
TensorFlow traditionally builds a static graph before running computations (and still does when tf.function is used, even though TensorFlow 2.x executes eagerly by default), while PyTorch builds the graph dynamically during execution.
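The define-then-run versus define-by-run distinction can be illustrated with a toy sketch in plain Python. The StaticGraph class below is hypothetical, not a real TensorFlow API; it only mimics the idea that a static graph records operations first and executes them later, while define-by-run code executes each operation immediately.

```python
# Toy sketch: "define-then-run" (static graph) vs "define-by-run" (dynamic).
# StaticGraph is a hypothetical illustration, not a real framework class.

class StaticGraph:
    """Build the whole graph first, execute it later (TF 1.x style)."""
    def __init__(self):
        self.ops = []

    def add(self, fn):
        self.ops.append(fn)      # nothing runs yet; the op is only recorded

    def run(self, value):
        for fn in self.ops:      # execution happens only here
            value = fn(value)
        return value

g = StaticGraph()
g.add(lambda v: v + 1)
g.add(lambda v: v * 2)
print(g.run(3))                  # 8 — ops run only when the graph executes

# Define-by-run (PyTorch style): each operation executes immediately,
# so intermediate results are available for inspection right away.
x = 3
x = x + 1
x = x * 2
print(x)                         # 8
```

Both paths compute the same result; the difference is *when* the work happens, which is exactly what separates the two frameworks' execution models.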
What is the shape of the tensor y after running this PyTorch code?
import torch
x = torch.randn(3, 4)
y = x.view(-1, 2)
print(y.shape)
Remember that view(-1, 2) reshapes the tensor to have 2 columns and infers the number of rows.
The original tensor has 3*4=12 elements. Reshaping to (-1, 2) means 12/2=6 rows and 2 columns, so y.shape is torch.Size([6, 2]).
You are a researcher who needs to frequently change and debug your neural network models. Which framework is generally better suited for this purpose?
Consider which framework builds the computation graph on the fly.
PyTorch's dynamic graph lets you change the model structure easily and debug line-by-line, which is helpful in research.
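A minimal sketch of what "dynamic" means in practice: ordinary Python control flow can change the model's computation on every call, and each line can be stepped through in a debugger. The forward function and its use_extra_layer flag below are illustrative names, not part of any real model.

```python
import torch

def forward(x, use_extra_layer):
    # Ordinary Python control flow shapes the computation at run time.
    h = torch.relu(x)
    if use_extra_layer:          # branch decided per call, not baked into a graph
        h = h * 2.0
    return h.sum()

x = torch.ones(3)
print(forward(x, False).item())  # 3.0
print(forward(x, True).item())   # 6.0
```

Because each call traces a fresh graph, a breakpoint or print inside forward shows real tensor values, which is what makes iterative research debugging comfortable.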
When switching between training and evaluation modes, which statement correctly describes the default behavior difference between TensorFlow and PyTorch?
Think about how each framework handles layers like dropout during training and evaluation.
PyTorch requires explicit calls to model.train() or model.eval() to switch modes. TensorFlow's Keras API manages this internally during model.fit() and model.predict() calls.
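The mode switch is easy to see with a single dropout layer: in train mode, dropout zeroes elements at random (scaling the survivors by 1/(1-p)); in eval mode it is a no-op. This is a minimal sketch using only nn.Dropout rather than a full model.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Dropout(p=0.5)
x = torch.ones(8)

model.train()        # dropout active: elements are zeroed or scaled to 2.0
print(model(x))

model.eval()         # dropout disabled: input passes through unchanged
print(model(x))      # tensor of ones
```

Forgetting model.eval() before validation is a classic PyTorch bug; Keras avoids it by toggling the training flag for you inside fit() and predict().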
You train a classification model in both TensorFlow and PyTorch using the same dataset and architecture. After training, you want to compare accuracy metrics. Which statement is true about how accuracy is typically computed and reported in these frameworks?
Consider how metrics are implemented and used in each framework's training loops.
TensorFlow's Keras API provides built-in accuracy metrics that handle averaging internally. PyTorch users often compute accuracy manually across batches.
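The manual averaging a PyTorch loop typically does can be sketched without any framework at all: accumulate correct predictions and sample counts across batches, then divide once at the end. The toy labels and predictions below are made-up data; in a real loop the predictions would come from an argmax over model outputs.

```python
# Manual batch-wise accuracy, as a hand-written PyTorch loop often computes it.
# Toy data: each batch is a (labels, predictions) pair.
batches = [
    ([0, 1, 1, 0], [0, 1, 0, 0]),
    ([1, 1, 0],    [1, 0, 0]),
]

correct = total = 0
for labels, preds in batches:
    correct += sum(l == p for l, p in zip(labels, preds))
    total += len(labels)

accuracy = correct / total
print(accuracy)   # 5 correct out of 7 -> about 0.714
```

Accumulating counts rather than averaging per-batch accuracies matters when batch sizes differ; Keras's built-in accuracy metric performs this weighted accumulation internally.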