Complete the code to apply the ReLU activation function to the input tensor.
import torch
input_tensor = torch.tensor([-1.0, 0.0, 2.0, -3.0])
output = torch.nn.functional.[1](input_tensor)
The ReLU function replaces negative values with zero, keeping positive values unchanged.
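A runnable sketch of this behavior, assuming the blank resolves to `relu` (illustration, not the answer key):

```python
import torch

# Assumption: the blank is torch.nn.functional.relu.
x = torch.tensor([-1.0, 0.0, 2.0, -3.0])
y = torch.nn.functional.relu(x)
print(y)  # tensor([0., 0., 2., 0.]) — negatives clamped to zero
```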
Complete the code to apply the sigmoid activation function to the input tensor.
import torch
input_tensor = torch.tensor([-2.0, 0.0, 2.0])
output = torch.nn.functional.[1](input_tensor)
The sigmoid function outputs values between 0 and 1, useful for probabilities.
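To see the squashing into (0, 1), here is a short sketch using `torch.sigmoid`, which is equivalent to the functional form assumed by the blank:

```python
import torch

x = torch.tensor([-2.0, 0.0, 2.0])
y = torch.sigmoid(x)  # equivalent to torch.nn.functional.sigmoid
print(y)  # middle value is exactly 0.5; all values lie strictly in (0, 1)
```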
Complete the code by choosing the correct activation function to apply softmax over dimension 1.
import torch
input_tensor = torch.tensor([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]])
output = torch.nn.functional.[1](input_tensor, dim=1)
Softmax converts logits to probabilities across the specified dimension.
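A quick check of that property, assuming the blank is `softmax`: with `dim=1`, each row of the output is a probability distribution that sums to 1.

```python
import torch

logits = torch.tensor([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]])
probs = torch.nn.functional.softmax(logits, dim=1)
print(probs.sum(dim=1))  # each row sums to 1
```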
Fill both blanks to define a custom activation function that applies tanh and then scales the output by 2.
import torch
input_tensor = torch.tensor([-1.0, 0.0, 1.0])
output = 2 * torch.nn.functional.[1](input_tensor) [2] 0
The tanh function outputs values between -1 and 1, so the scaled output lies between -2 and 2. Adding 0 leaves the scaled output unchanged.
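A sketch of the scaled form, assuming the blanks are `tanh` and `+` (illustration only):

```python
import torch

x = torch.tensor([-1.0, 0.0, 1.0])
y = 2 * torch.tanh(x) + 0  # adding 0 is a no-op; output range is (-2, 2)
print(y)  # y[1] is exactly 0, and |y| stays below 2
```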
Fill all three blanks to create a dictionary comprehension that maps each word to its sigmoid activation applied to its length.
import torch
words = ['hi', 'hello', 'hey']
result = {word: torch.nn.functional.[1](torch.tensor(float(len(word))))
          for word in words if len(word) [2] 2 and len(word) [3] 5}
This comprehension applies sigmoid to the lengths of words longer than 2 characters and shorter than 5.
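A runnable sketch, assuming the blanks resolve to `sigmoid`, `>`, and `<` (note the length is cast to float, since sigmoid is not defined for integer tensors):

```python
import torch

words = ['hi', 'hello', 'hey']
# Assumed completion: strict comparisons, i.e. keep words with 2 < len < 5.
result = {w: torch.sigmoid(torch.tensor(float(len(w))))
          for w in words if 2 < len(w) < 5}
print(result)  # only 'hey' (length 3) passes both length filters
```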