TensorFlow · ~20 mins

GPU vs CPU tensor placement in TensorFlow - Practice Questions

Challenge - 5 Problems
Predict Output
intermediate
Tensor device placement output
What device will the tensor be placed on and what will be the output of the following code snippet?
TensorFlow
import tensorflow as tf

with tf.device('/CPU:0'):
    a = tf.constant([1.0, 2.0, 3.0])

with tf.device('/GPU:0'):
    b = tf.constant([4.0, 5.0, 6.0])

print(a.device)
print(b.device)
A. a.device shows GPU device string, b.device shows CPU device string
B. Both a.device and b.device show CPU device string
C. a.device shows CPU device string, b.device shows GPU device string
D. Both a.device and b.device show GPU device string
💡 Hint
Tensors are placed on the device specified by the tf.device context manager.
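You can verify this yourself; a minimal sketch (CPU placement only, so it runs on any machine):

```python
import tensorflow as tf

with tf.device('/CPU:0'):
    a = tf.constant([1.0, 2.0, 3.0])

# .device returns the full placement string, e.g.
# '/job:localhost/replica:0/task:0/device:CPU:0'
print(a.device)
```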
🧠 Conceptual
intermediate
Tensor operations device fallback
If you create a tensor on GPU but perform an operation with a tensor on CPU, where will TensorFlow perform the operation?
A. On CPU, because one tensor is on CPU
B. On GPU, because one tensor is on GPU
C. TensorFlow raises an error due to device mismatch
D. TensorFlow automatically copies tensors to the same device and performs the operation there
💡 Hint
TensorFlow tries to perform operations on the same device by copying tensors if needed.
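A minimal sketch of this behavior. Note the assumption: with soft device placement (the eager-mode default), the '/GPU:0' context falls back to CPU when no GPU is visible, so the snippet also runs on CPU-only machines:

```python
import tensorflow as tf

# One operand pinned to CPU; the other requested on GPU.
with tf.device('/CPU:0'):
    a = tf.constant([1.0, 2.0, 3.0])
with tf.device('/GPU:0'):
    b = tf.constant([4.0, 5.0, 6.0])

# TensorFlow copies the operands onto a common device and runs the op
# there, rather than raising a device-mismatch error.
c = a + b
print(c.numpy())  # [5. 7. 9.]
```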
Metrics
advanced
Comparing training speed on CPU vs GPU
You train the same neural network model on CPU and GPU. Which metric difference best indicates GPU acceleration?
A. Lower training time per epoch on GPU than CPU
B. Higher memory usage on CPU than GPU
C. Lower validation accuracy on GPU than CPU
D. Higher training loss on GPU than CPU
💡 Hint
GPUs are designed to speed up computation, so training time per epoch is the key metric.
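A minimal timing sketch illustrating the comparison (the GPU branch runs only when a GPU is visible; `time_matmul` is a helper defined here, not a TensorFlow API):

```python
import time
import tensorflow as tf

x = tf.random.normal([1000, 1000])

def time_matmul(device, reps=10):
    """Time repeated matrix multiplications on the given device."""
    with tf.device(device):
        y = tf.identity(x)              # copy the input onto the device
        start = time.perf_counter()
        for _ in range(reps):
            z = tf.matmul(y, y)
        _ = z.numpy()                   # block until execution finishes
    return time.perf_counter() - start

print(f"CPU: {time_matmul('/CPU:0'):.3f}s")
if tf.config.list_physical_devices('GPU'):
    # On a GPU-equipped machine this is typically far lower than the CPU time.
    print(f"GPU: {time_matmul('/GPU:0'):.3f}s")
```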
🔧 Debug
advanced
TensorFlow device placement error
What error will this code raise and why?
TensorFlow
import tensorflow as tf

with tf.device('/GPU:1'):
    x = tf.constant([1, 2, 3])
A. No error; tensor placed on GPU:1
B. RuntimeError: GPU device '/GPU:1' not found
C. SyntaxError: invalid syntax in device string
D. ValueError: Tensor shape mismatch
💡 Hint
Check if the specified GPU device exists on your machine.
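A minimal sketch for checking what the runtime actually sees before pinning a tensor to a specific GPU. One caveat worth noting: in eager mode TensorFlow enables soft device placement by default and may silently fall back to an available device instead of raising; disabling it with tf.config.set_soft_device_placement(False) surfaces the placement error:

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
print(f'{len(gpus)} GPU(s) visible')

# '/GPU:1' refers to a *second* GPU, so it only exists when at least
# two GPUs are visible to the runtime.
if len(gpus) < 2:
    print("'/GPU:1' is not available on this machine")
```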
Model Choice
expert
Choosing device placement for large model training
You have a large deep learning model that does not fit into GPU memory. Which strategy is best to train it efficiently?
A. Use model parallelism to split the model across multiple GPUs
B. Place the entire model on CPU to avoid GPU memory limits
C. Reduce the batch size and place the model on a single GPU
D. Convert the model to float16 and place it on CPU
💡 Hint
Splitting the model across GPUs handles models that exceed a single GPU's memory.
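A minimal sketch of the model-parallel idea, assuming a hypothetical two-GPU machine: each half of the network is pinned to its own GPU, so each GPU holds only part of the model. Under eager soft placement (the default) it still runs, on CPU, on a single-device machine:

```python
import tensorflow as tf

class TwoDeviceModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.block1 = tf.keras.layers.Dense(1024, activation='relu')
        self.block2 = tf.keras.layers.Dense(10)

    def call(self, x):
        # Each half runs (and its variables are created) on its own GPU;
        # activations are copied between devices as needed.
        with tf.device('/GPU:0'):
            h = self.block1(x)
        with tf.device('/GPU:1'):
            return self.block2(h)

model = TwoDeviceModel()
out = model(tf.zeros([2, 8]))
print(out.shape)  # (2, 10)
```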