Complete the code to apply the ReLU activation function to the tensor.
import torch
import torch.nn.functional as F

x = torch.tensor([-1.0, 0.0, 1.0, 2.0])
output = F.[1](x)
print(output)
The ReLU function replaces negative values with zero and keeps positive values unchanged. Using F.relu applies this activation.
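A worked version of the exercise above, with the blank filled in with `relu`, looks like this:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-1.0, 0.0, 1.0, 2.0])
output = F.relu(x)  # element-wise max(0, x): negatives become zero
print(output)  # tensor([0., 0., 1., 2.])
```

The positive entries 1.0 and 2.0 pass through unchanged, while -1.0 is clamped to zero.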
Complete the code to apply the Sigmoid activation function to the tensor.
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, 0.0, 2.0])
output = F.[1](x)
print(output)
The Sigmoid function outputs values between 0 and 1, useful for probabilities. Using F.sigmoid applies this activation.
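Filling the blank with `sigmoid` gives a complete solution. Note that sigmoid maps 0 exactly to 0.5, and values symmetric about zero map to outputs symmetric about 0.5:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, 0.0, 2.0])
output = F.sigmoid(x)  # 1 / (1 + exp(-x)), applied element-wise
print(output)  # approximately [0.119, 0.500, 0.881]
```

Because sigmoid(-2) + sigmoid(2) = 1, the first and last outputs sum to one.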
Fix the error in applying Softmax activation along the correct dimension.
import torch
import torch.nn.functional as F

x = torch.tensor([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]])
output = F.softmax(x, dim=[1])
print(output)
Softmax is usually applied along the feature dimension (dim=1) to convert scores into probabilities per row.
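To see why the dimension matters, compare softmax along dim=1 (per row, the usual choice for class scores) with dim=0 (per column, usually not what is wanted here):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]])

row_probs = F.softmax(x, dim=1)  # each row sums to 1
col_probs = F.softmax(x, dim=0)  # each column sums to 1

print(row_probs)  # each row is approximately [0.0900, 0.2447, 0.6652]
```

With dim=0, the two identical rows would each get probability 0.5 in every column, which destroys the per-example score distribution.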
Fill both blanks to create a tensor and apply ReLU activation.
import torch
import torch.nn.functional as F

x = torch.tensor([1])
output = F.[2](x)
print(output)
The tensor should contain both positive and negative values. Applying relu sets the negatives to zero and keeps the positives unchanged.
Fill all three blanks to create a tensor, apply Softmax along the correct dimension, and print the result.
import torch
import torch.nn.functional as F

x = torch.tensor([1])
output = F.[2](x, dim=[3])
print(output)
The tensor should have two rows of scores. Applying softmax along dim=1 converts each row into a probability distribution.