Challenge - 5 Problems
Device Mastery in TensorFlow
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
Intermediate · 2:00
Tensor device placement output
What device will the tensor be placed on and what will be the output of the following code snippet?
TensorFlow

import tensorflow as tf

with tf.device('/CPU:0'):
    a = tf.constant([1.0, 2.0, 3.0])

with tf.device('/GPU:0'):
    b = tf.constant([4.0, 5.0, 6.0])

print(a.device)
print(b.device)
Attempts: 2 left
💡 Hint
Tensors are placed on the device specified by the tf.device context manager.
✗ Incorrect
The tensor 'a' is created inside the CPU device context, so it is placed on the CPU; the tensor 'b' is created inside the GPU device context, so it is placed on the GPU. The printed device strings reflect this placement, typically of the form '/job:localhost/replica:0/task:0/device:CPU:0' and '/job:localhost/replica:0/task:0/device:GPU:0'.
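The placement rule can be verified on any machine by guarding the GPU branch; this is a sketch that degrades gracefully on a CPU-only box rather than the exact quiz snippet.

```python
import tensorflow as tf

# '/CPU:0' always exists, so this placement is guaranteed to succeed.
with tf.device('/CPU:0'):
    a = tf.constant([1.0, 2.0, 3.0])
print(a.device)  # e.g. /job:localhost/replica:0/task:0/device:CPU:0

# Only attempt GPU placement if a GPU is actually visible.
if tf.config.list_physical_devices('GPU'):
    with tf.device('/GPU:0'):
        b = tf.constant([4.0, 5.0, 6.0])
    print(b.device)
```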
🧠 Conceptual
Intermediate · 2:00
Tensor operations device fallback
If you create a tensor on GPU but perform an operation with a tensor on CPU, where will TensorFlow perform the operation?
Attempts: 2 left
💡 Hint
TensorFlow tries to perform operations on the same device by copying tensors if needed.
✗ Incorrect
TensorFlow automatically copies tensors to the same device before performing operations. So if tensors are on different devices, it copies one to the other's device and then performs the operation.
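This copy-then-compute behaviour can be seen directly: an op whose inputs were created under different device contexts still succeeds. On a CPU-only machine the second tensor falls back to the CPU, so the example degrades to a same-device add; that fallback choice is part of this sketch, not the quiz code.

```python
import tensorflow as tf

with tf.device('/CPU:0'):
    cpu_t = tf.constant([1.0, 2.0])

# Place the second tensor on the GPU if one is available; otherwise it
# lands on the CPU as well.
other = '/GPU:0' if tf.config.list_physical_devices('GPU') else '/CPU:0'
with tf.device(other):
    dev_t = tf.constant([3.0, 4.0])

# TensorFlow copies operands onto a common device before running the op,
# so this add succeeds even when the inputs start on different devices.
result = cpu_t + dev_t
print(result.numpy())  # [4. 6.]
```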
❓ Metrics
Advanced · 2:00
Comparing training speed on CPU vs GPU
You train the same neural network model on CPU and GPU. Which metric difference best indicates GPU acceleration?
Attempts: 2 left
💡 Hint
GPUs are designed to accelerate parallel computation, so training time per epoch is the key metric.
✗ Incorrect
GPU acceleration typically reduces training time per epoch compared to CPU. Loss or accuracy differences are not direct indicators of device speed.
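A minimal way to quantify that speed difference is to time the same computation on each available device. The matrix size, iteration count, and helper name below are arbitrary choices for illustration, not part of the quiz.

```python
import time
import tensorflow as tf

def time_matmul(device, n=500, iters=5):
    """Return mean seconds per matmul of two n x n matrices on `device`."""
    with tf.device(device):
        x = tf.random.uniform((n, n))
        # Warm-up run so one-time setup cost is excluded from the timing.
        tf.matmul(x, x)
        start = time.perf_counter()
        for _ in range(iters):
            r = tf.matmul(x, x)
        # GPU ops are dispatched asynchronously; fetching a value forces
        # pending work to finish before the clock stops.
        r.numpy()
        return (time.perf_counter() - start) / iters

cpu_time = time_matmul('/CPU:0')
print(f'CPU: {cpu_time:.4f} s per matmul')

if tf.config.list_physical_devices('GPU'):
    gpu_time = time_matmul('/GPU:0')
    print(f'GPU: {gpu_time:.4f} s per matmul')
```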
🔧 Debug
Advanced · 2:00
TensorFlow device placement error
What error will this code raise and why?
import tensorflow as tf
with tf.device('/GPU:1'):
    x = tf.constant([1, 2, 3])
Attempts: 2 left
💡 Hint
Check if the specified GPU device exists on your machine.
✗ Incorrect
If the machine has only one GPU (usually '/GPU:0'), specifying '/GPU:1' refers to a device that does not exist, which causes a runtime error. Note that if soft device placement is enabled, TensorFlow may instead fall back silently to an available device, so the exact behavior is configuration-dependent.
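A defensive pattern is to list the visible GPUs before hard-coding a device index; this sketch falls back to a device that always exists rather than relying on error-versus-fallback behavior.

```python
import tensorflow as tf

# Enumerate the GPUs TensorFlow can actually see on this machine.
gpus = tf.config.list_physical_devices('GPU')
print(f'{len(gpus)} GPU(s) visible')

# Only use '/GPU:1' when a second GPU exists; otherwise use the CPU,
# which is always present.
device = '/GPU:1' if len(gpus) > 1 else '/CPU:0'
with tf.device(device):
    x = tf.constant([1, 2, 3])
print(x.device)
```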
❓ Model Choice
Expert · 3:00
Choosing device placement for large model training
You have a large deep learning model that does not fit into GPU memory. Which strategy is best to train it efficiently?
Attempts: 2 left
💡 Hint
Splitting model across GPUs helps handle large models exceeding single GPU memory.
✗ Incorrect
Model parallelism splits the model across multiple GPUs, allowing training of large models that don't fit on one GPU. Placing on CPU is slower, reducing batch size may not be enough, and float16 on CPU doesn't solve memory or speed issues.
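A toy sketch of model parallelism: each layer is pinned to its own device, so the model's weights are split across them. With fewer than two GPUs visible both halves land on the same device, so the device assignments here are illustrative rather than a real multi-GPU setup.

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
# With two GPUs the halves would live on '/GPU:0' and '/GPU:1'; on this
# machine we fall back to whatever is available.
dev0 = '/GPU:0' if len(gpus) > 0 else '/CPU:0'
dev1 = '/GPU:1' if len(gpus) > 1 else dev0

with tf.device(dev0):
    layer1 = tf.keras.layers.Dense(64, activation='relu')
with tf.device(dev1):
    layer2 = tf.keras.layers.Dense(10)

x = tf.random.uniform((8, 32))
with tf.device(dev0):
    h = layer1(x)   # first half of the forward pass
with tf.device(dev1):
    y = layer2(h)   # second half, possibly on another device
print(y.shape)  # (8, 10)
```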