Complete the code to create a tensor on the GPU device.
import tensorflow as tf

with tf.device('[1]'):
    a = tf.constant([1.0, 2.0, 3.0])
The code places the tensor a on the first GPU device using tf.device('/GPU:0').
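A filled-in version of the exercise might look like the sketch below (assuming TensorFlow 2.x eager execution; soft device placement is enabled so the snippet also runs on a CPU-only machine, where the op silently falls back to CPU):

```python
import tensorflow as tf

# Assumption: allow fallback to CPU when no GPU is available
tf.config.set_soft_device_placement(True)

# Fill the blank with '/GPU:0' to request the first GPU
with tf.device('/GPU:0'):
    a = tf.constant([1.0, 2.0, 3.0])

print(a.device)  # full device name, e.g. '.../device:GPU:0' on a GPU machine
```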
Complete the code to check if a tensor is placed on GPU.
import tensorflow as tf

x = tf.constant([1, 2, 3])
print(x.device.endswith('[1]'))
The x.device attribute is the full name of the device where the tensor is placed. Checking whether it ends with 'GPU:0' confirms whether the tensor is on the first GPU.
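The completed check can be sketched like this; on a machine without a GPU the tensor lands on the CPU and the check prints False:

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])

# Fill the blank with 'GPU:0'; x.device is a full path such as
# '/job:localhost/replica:0/task:0/device:CPU:0'
on_first_gpu = x.device.endswith('GPU:0')
print(x.device, '-> on first GPU:', on_first_gpu)
```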
Complete the code to place a tensor on the CPU explicitly.
import tensorflow as tf

with tf.device('[1]'):
    b = tf.constant([4, 5, 6])
To place a tensor explicitly on the CPU, use tf.device('/CPU:0').
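Filled in, the exercise becomes the following sketch, which works on any machine since every TensorFlow installation has a CPU device:

```python
import tensorflow as tf

# Fill the blank with '/CPU:0' to pin the tensor to the CPU
with tf.device('/CPU:0'):
    b = tf.constant([4, 5, 6])

print(b.device)  # ends with 'CPU:0'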
Fill both blanks to create a tensor on GPU and then move it to CPU.
import tensorflow as tf

with tf.device('[1]'):
    c = tf.constant([7, 8, 9])

with tf.device('[2]'):
    d = tf.identity(c)
The tensor c is created on GPU using /GPU:0. Then d is created as a copy on CPU using /CPU:0.
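With both blanks filled, the exercise might look as follows (soft device placement is an assumption added here so the snippet still runs when no GPU is present):

```python
import tensorflow as tf

# Assumption: fall back to CPU if there is no GPU
tf.config.set_soft_device_placement(True)

# Blank [1]: '/GPU:0' — create c on the first GPU
with tf.device('/GPU:0'):
    c = tf.constant([7, 8, 9])

# Blank [2]: '/CPU:0' — tf.identity copies c's value onto the CPU
with tf.device('/CPU:0'):
    d = tf.identity(c)

print(d.device)
```

tf.identity is used here because assigning c to a new name does not move data; the op must execute under the target device scope to produce a copy there.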
Fill all three blanks to create a tensor on CPU, then move it to GPU, and finally check its device.
import tensorflow as tf

with tf.device('[1]'):
    e = tf.constant([10, 11, 12])

with tf.device('[2]'):
    f = tf.identity(e)

print(f.device.endswith('[3]'))
The tensor e is created on CPU (/CPU:0), then copied to GPU (/GPU:0) as f. The print statement checks if f is on GPU by checking if its device string ends with 'GPU:0'.
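All three blanks filled in, the exercise can be sketched as below (again assuming soft device placement, so on a CPU-only machine f stays on the CPU and the final check prints False):

```python
import tensorflow as tf

# Assumption: allow CPU fallback when no GPU exists
tf.config.set_soft_device_placement(True)

# Blank [1]: '/CPU:0' — create e on the CPU
with tf.device('/CPU:0'):
    e = tf.constant([10, 11, 12])

# Blank [2]: '/GPU:0' — copy e onto the first GPU
with tf.device('/GPU:0'):
    f = tf.identity(e)

# Blank [3]: 'GPU:0' — verify where f ended up
print(f.device.endswith('GPU:0'))
```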