TensorFlow · ~20 mins

GPU vs CPU tensor placement in TensorFlow - Experiment Comparison

Experiment - GPU vs CPU tensor placement
Problem: You want to understand how placing tensors on GPU or CPU affects computation speed in TensorFlow.
Current Metrics: No timing metrics collected yet.
Issue: You do not know how to measure and compare the speed difference between GPU and CPU tensor operations.
Your Task
Measure and compare the time taken to perform a large matrix multiplication on CPU and GPU tensors. Show that GPU placement speeds up the operation.
Use TensorFlow 2.x with eager execution.
Use the same matrix size for both CPU and GPU operations.
Measure time accurately using Python's time module.
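Before timing anything, it helps to confirm which devices TensorFlow can see and where a given tensor actually lives. A minimal sketch (the exact device strings printed will differ per machine):

```python
import tensorflow as tf

# List the physical devices TensorFlow can use;
# the GPU list is empty on a CPU-only machine.
print(tf.config.list_physical_devices('CPU'))
print(tf.config.list_physical_devices('GPU'))

# Every eager tensor records the device it was placed on.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(x.device)  # e.g. '/job:localhost/replica:0/task:0/device:CPU:0'
```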
Solution
import tensorflow as tf
import time

# Check if GPU is available
if not tf.config.list_physical_devices('GPU'):
    raise RuntimeError('No GPU found. Please run on a machine with GPU.')

# Matrix size
matrix_size = 3000

# Create random matrices on CPU
with tf.device('/CPU:0'):
    a_cpu = tf.random.uniform((matrix_size, matrix_size), dtype=tf.float32)
    b_cpu = tf.random.uniform((matrix_size, matrix_size), dtype=tf.float32)

# Create random matrices on GPU
with tf.device('/GPU:0'):
    a_gpu = tf.random.uniform((matrix_size, matrix_size), dtype=tf.float32)
    b_gpu = tf.random.uniform((matrix_size, matrix_size), dtype=tf.float32)

# Function to time matrix multiplication on a given device

def time_matmul(a, b, device):
    with tf.device(device):
        # Warm-up run to exclude one-time kernel/initialization cost;
        # .numpy() blocks until the computation has actually finished
        tf.matmul(a, b).numpy()

        # Run multiple times and average
        runs = 5
        start = time.perf_counter()
        for _ in range(runs):
            result = tf.matmul(a, b)
        # GPU ops are dispatched asynchronously, so synchronize
        # before stopping the clock
        _ = result.numpy()
        end = time.perf_counter()
    avg_time = (end - start) / runs
    print(f"Average matmul time on {device}: {avg_time:.4f} seconds")
    return avg_time

# Time CPU matmul
cpu_time = time_matmul(a_cpu, b_cpu, '/CPU:0')

# Time GPU matmul
gpu_time = time_matmul(a_gpu, b_gpu, '/GPU:0')

# Print summary
print(f"Speedup (CPU time / GPU time): {cpu_time / gpu_time:.2f}x")
Added explicit tensor and operation placement on CPU and GPU using tf.device.
Created large random matrices of the same size on both devices.
Measured average matrix multiplication time over multiple runs, after a warm-up run and with explicit synchronization so asynchronous GPU execution is fully counted.
Printed timing results and the speedup ratio.
Results Interpretation

Before: No timing data, unsure about performance difference.

After: CPU matmul takes about 3.5 seconds, GPU matmul takes about 0.3 seconds, showing the GPU is roughly 12 times faster for this operation.

Placing tensors and operations on GPU can greatly speed up heavy computations like matrix multiplication compared to CPU. TensorFlow allows explicit control of device placement to optimize performance.
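That explicit control is easy to verify: create the same tensor under different tf.device scopes and inspect its .device attribute. A small sketch that skips the GPU part when no GPU is present:

```python
import tensorflow as tf

# Create a tensor explicitly on the CPU and confirm its placement
with tf.device('/CPU:0'):
    t_cpu = tf.random.uniform((4, 4))
print(t_cpu.device)  # ends with 'CPU:0'

# Only attempt GPU placement when a GPU is actually visible
if tf.config.list_physical_devices('GPU'):
    with tf.device('/GPU:0'):
        t_gpu = tf.random.uniform((4, 4))
    print(t_gpu.device)  # ends with 'GPU:0'
```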
Bonus Experiment
Try running the same experiment with smaller matrices (e.g., 500x500) and observe how the speedup changes.
💡 Hint
Smaller matrices may reduce GPU advantage because of overhead; measure times carefully and compare.
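One way to run that comparison is to sweep matrix sizes and reuse the same timing pattern as the solution. A sketch that runs on whatever devices are available (the size list and run count are arbitrary choices):

```python
import tensorflow as tf
import time

def avg_matmul_time(size, device, runs=5):
    """Average wall-clock time of one (size x size) matmul on `device`."""
    with tf.device(device):
        a = tf.random.uniform((size, size))
        b = tf.random.uniform((size, size))
        tf.matmul(a, b).numpy()  # warm-up; .numpy() blocks until done
        start = time.perf_counter()
        for _ in range(runs):
            result = tf.matmul(a, b)
        _ = result.numpy()  # synchronize before stopping the clock
        return (time.perf_counter() - start) / runs

devices = ['/CPU:0']
if tf.config.list_physical_devices('GPU'):
    devices.append('/GPU:0')

for size in (500, 1000, 2000, 3000):
    times = {d: avg_matmul_time(size, d) for d in devices}
    print(size, {d: f'{t:.4f}s' for d, t in times.items()})
```

For small sizes the fixed per-call overhead (kernel launch, host-device transfer) dominates, so the GPU advantage shrinks or even reverses; it grows as the matrices get larger.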