TensorFlow runs calculations on data (tensors) on either the GPU or the CPU. Choosing where to place tensors can make programs faster and easier to reason about.
GPU vs CPU tensor placement in TensorFlow
Introduction
When training a deep learning model and you want it to run faster on a GPU.
When your computer does not have a GPU and you must use the CPU for calculations.
When you want to move data between CPU and GPU to optimize memory and speed.
When debugging and you want to check if tensors are on CPU or GPU.
When running small tasks that do not need GPU power, so you use CPU to save resources.
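Before deciding where to place tensors, it helps to know which devices TensorFlow can actually see. A minimal sketch, using tf.config.list_physical_devices:

```python
import tensorflow as tf

# List the physical devices TensorFlow detects.
# The GPU list is empty on CPU-only machines.
cpus = tf.config.list_physical_devices('CPU')
gpus = tf.config.list_physical_devices('GPU')

print('CPUs:', cpus)
print('GPUs:', gpus)
```

On a machine without a GPU, the second list is simply empty; your code can branch on that instead of failing.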
Syntax
TensorFlow
with tf.device('/CPU:0'):
    tensor_cpu = tf.constant([1.0, 2.0, 3.0])

with tf.device('/GPU:0'):
    tensor_gpu = tf.constant([1.0, 2.0, 3.0])
Use tf.device() to specify where tensors or operations run.
Device names like /CPU:0 and /GPU:0 tell TensorFlow to use CPU or GPU.
Examples
This creates a tensor on the CPU.
TensorFlow
with tf.device('/CPU:0'):
    a = tf.constant([1, 2, 3])
This creates a tensor on the first GPU if available.
TensorFlow
with tf.device('/GPU:0'):
    b = tf.constant([4, 5, 6])
This prints where each tensor is stored (CPU or GPU).
TensorFlow
print(a.device)
print(b.device)
Sample Model
This program checks if a GPU is available. It creates one tensor on the CPU and one on the GPU if possible. Then it prints where each tensor is stored.
TensorFlow
import tensorflow as tf

# Check if GPU is available
if tf.config.list_physical_devices('GPU'):
    print('GPU is available')
else:
    print('GPU is NOT available')

# Create tensor on CPU
with tf.device('/CPU:0'):
    tensor_cpu = tf.constant([10.0, 20.0, 30.0])

# Create tensor on GPU if available, else CPU
device_name = '/GPU:0' if tf.config.list_physical_devices('GPU') else '/CPU:0'
with tf.device(device_name):
    tensor_gpu = tf.constant([10.0, 20.0, 30.0])

print('Tensor on CPU device:', tensor_cpu.device)
print('Tensor on GPU device:', tensor_gpu.device)
Important Notes
Not all computers have GPUs. TensorFlow will use CPU if GPU is not found.
GPU is faster for big calculations but uses more power.
You can move tensors between CPU and GPU, but each transfer takes time, so avoid moving data back and forth unnecessarily.
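The notes above can be sketched in code: create a tensor on the CPU, then copy it to another device with tf.identity inside a device scope. The target device name here is picked at runtime so the sketch also runs on CPU-only machines.

```python
import tensorflow as tf

# Create a tensor on the CPU.
with tf.device('/CPU:0'):
    x = tf.constant([1.0, 2.0, 3.0])

# Copy it to the GPU if one exists, else stay on the CPU.
# Each such copy has a real cost, so keep data on one device
# where possible.
target = '/GPU:0' if tf.config.list_physical_devices('GPU') else '/CPU:0'
with tf.device(target):
    y = tf.identity(x)  # copies x to the target device

print(x.device)
print(y.device)
```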
Summary
Use tf.device() to control whether tensors and operations run on the CPU or the GPU.
A GPU speeds up large calculations; the CPU is always available.
Check tensor device with tensor.device to debug or optimize.