Complete the code to apply the ReLU activation function to the tensor.
import tensorflow as tf

x = tf.constant([-3.0, 0.0, 5.0])
relu_output = tf.nn.[1](x)
print(relu_output.numpy())
The ReLU activation function is applied using tf.nn.relu. It outputs zero for negative inputs and passes non-negative inputs through unchanged.
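A completed version of the snippet, with the blank filled in as the explanation suggests (tf.nn.relu), behaves like this:

```python
import tensorflow as tf

# ReLU clamps negative values to zero and leaves non-negatives unchanged.
x = tf.constant([-3.0, 0.0, 5.0])
relu_output = tf.nn.relu(x)
print(relu_output.numpy())  # [0. 0. 5.]
```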
Complete the code to apply the sigmoid activation function to the tensor.
import tensorflow as tf

x = tf.constant([-1.0, 0.0, 1.0])
sigmoid_output = tf.nn.[1](x)
print(sigmoid_output.numpy())
The sigmoid activation function is applied using tf.nn.sigmoid. It maps inputs to values between 0 and 1, which makes it useful for representing probabilities.
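With the blank filled in as tf.nn.sigmoid, the completed snippet shows the squashing behavior described above; note that sigmoid(0) is exactly 0.5:

```python
import tensorflow as tf

# Sigmoid maps any real input into the open interval (0, 1).
x = tf.constant([-1.0, 0.0, 1.0])
sigmoid_output = tf.nn.sigmoid(x)
print(sigmoid_output.numpy())
```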
Fix the error in the code to correctly apply the softmax activation function along the last axis.
import tensorflow as tf

logits = tf.constant([[1.0, 2.0, 3.0]])
softmax_output = tf.nn.softmax(logits, axis=[1])
print(softmax_output.numpy())
The softmax function should be applied along the last axis of the logits tensor. Using axis=-1 ensures this regardless of tensor shape.
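The corrected snippet, using axis=-1 as the explanation recommends, produces a row of probabilities that sums to 1:

```python
import tensorflow as tf

# axis=-1 targets the last axis, so this works for any batch shape.
logits = tf.constant([[1.0, 2.0, 3.0]])
softmax_output = tf.nn.softmax(logits, axis=-1)
print(softmax_output.numpy())  # each row sums to 1
```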
Fill both blanks to create a dictionary comprehension that maps each word to its sigmoid activation applied to its length.
import tensorflow as tf

words = ['hi', 'hello', 'hey']
result = {word: tf.nn.[1](tf.constant(len(word), dtype=tf.float32)) for word in [2]}
print(result)
The sigmoid activation is applied to each word's length, and the dictionary comprehension iterates over the list words.
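Filling the blanks as the explanation indicates (tf.nn.sigmoid and words) gives a runnable version; longer words map to sigmoid values closer to 1:

```python
import tensorflow as tf

words = ['hi', 'hello', 'hey']
# Map each word to the sigmoid of its length.
result = {word: tf.nn.sigmoid(tf.constant(len(word), dtype=tf.float32)) for word in words}
print({w: float(v) for w, v in result.items()})
```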
Fill all three blanks to create a dictionary comprehension that maps each word to its softmax probability over lengths greater than 2.
import tensorflow as tf

words = ['hi', 'hello', 'hey', 'greetings']
long_words = [word for word in words if len(word) [1] 2]
lengths_tensor = tf.constant([len(word) for word in long_words], dtype=tf.float32)
softmax_vals = tf.nn.[2](lengths_tensor, axis=[3])
result = {word: softmax_vals[i].numpy() for i, word in enumerate(long_words)}
print(result)
The dictionary comprehension maps words with length greater than 2 to their softmax probabilities. The softmax is applied along the last axis (-1).
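One completed, runnable version fills the blanks as >, softmax, and -1. Note that the filtered words must be captured once and reused, so the positions in the softmax tensor stay aligned with the dictionary keys:

```python
import tensorflow as tf

words = ['hi', 'hello', 'hey', 'greetings']
# Filter first, so tensor indices line up with the remaining words.
long_words = [word for word in words if len(word) > 2]
lengths_tensor = tf.constant([len(word) for word in long_words], dtype=tf.float32)
softmax_vals = tf.nn.softmax(lengths_tensor, axis=-1)
result = {word: softmax_vals[i].numpy() for i, word in enumerate(long_words)}
print(result)
```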