MLOps · DevOps · ~10 mins

Data parallelism vs model parallelism in MLOps - Interactive Practice

Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)

Complete the code to split data across multiple devices for parallel processing.

MLOps
distributed_data = dataset.[1](num_devices)
A. batch
B. split
C. shuffle
D. repeat
Common Mistakes
Using batch instead of split causes incorrect data division.
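A minimal sketch of the splitting step, using plain Python lists instead of a real dataset object; `split_dataset` is a hypothetical helper standing in for the quiz's `dataset.split(num_devices)`.

```python
def split_dataset(dataset, num_devices):
    """Divide `dataset` into `num_devices` nearly equal shards,
    spreading any remainder across the first few shards."""
    shard_size, remainder = divmod(len(dataset), num_devices)
    shards, start = [], 0
    for i in range(num_devices):
        end = start + shard_size + (1 if i < remainder else 0)
        shards.append(dataset[start:end])
        start = end
    return shards

dataset = list(range(10))
shards = split_dataset(dataset, 3)  # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

Each device then processes only its own shard, which is what makes this *data* parallelism: the model is replicated, the data is divided.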
2. Fill in the blank (medium)

Complete the code to assign different parts of the model to different devices.

MLOps
model = Model().to_device([1])
A. cpu
B. all
C. gpu1
D. gpu0
Common Mistakes
Using 'all' tries to put the whole model everywhere, which is incorrect.
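A sketch of the idea behind model parallelism: different *parts* of the model live on different devices. Devices are represented here as plain strings rather than real GPUs, and the round-robin placement scheme is an illustrative assumption, not a library API.

```python
# Hypothetical layer names and device identifiers for illustration.
layers = ["embedding", "encoder", "decoder", "head"]
devices = ["gpu0", "gpu1"]

# Round-robin: layer i goes to device i mod num_devices.
placement = {layer: devices[i % len(devices)] for i, layer in enumerate(layers)}
# {'embedding': 'gpu0', 'encoder': 'gpu1', 'decoder': 'gpu0', 'head': 'gpu1'}
```

Contrast with data parallelism, where every device holds the *entire* model and only the data is split.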
3. Fill in the blank (hard)

Fix the error in the code to correctly synchronize gradients across devices in data parallelism.

MLOps
optimizer.zero_grad()
loss.backward()
[1].all_reduce(gradients)
A. torch.distributed
B. optimizer
C. model
D. device
Common Mistakes
Using model or optimizer instead of the distributed module causes runtime errors.
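Conceptually, an all-reduce makes every replica end up with the same combined gradient. The sketch below simulates that with plain lists instead of GPU tensors and a process group; `all_reduce_mean` is a hypothetical stand-in for what `torch.distributed.all_reduce` (plus a divide by world size) achieves.

```python
def all_reduce_mean(per_replica_grads):
    """Average gradients element-wise across all replicas, so every
    replica applies the identical update."""
    n = len(per_replica_grads)
    return [sum(vals) / n for vals in zip(*per_replica_grads)]

# Two replicas, each with gradients for two parameters.
grads = [[1.0, 2.0], [3.0, 4.0]]
synced = all_reduce_mean(grads)  # [2.0, 3.0]
```

Without this synchronization step, each replica would drift toward a different set of weights.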
4. Fill in the blank (hard)

Fill both blanks to create a dictionary comprehension that maps device names to model parts for model parallelism.

MLOps
model_parts = {device: model.[1](device) for device in [2]}
A. to_device
B. devices
C. device_list
D. cpu
Common Mistakes
Using 'cpu' in the second blank causes errors: the comprehension needs an iterable of devices to loop over, not a single device name.
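A runnable version of the completed comprehension, with toy stand-in classes for the quiz's model object (`to_device` here simply records which device a part was placed on):

```python
class ToyModelPart:
    """Placeholder for a model shard placed on a device."""
    def __init__(self, device):
        self.device = device

class ToyModel:
    """Placeholder model whose to_device returns a part bound to a device."""
    def to_device(self, device):
        return ToyModelPart(device)

model = ToyModel()
devices = ["gpu0", "gpu1"]

# The completed comprehension from the task.
model_parts = {device: model.to_device(device) for device in devices}
```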
5. Fill in the blank (hard)

Fill all three blanks to create a dictionary comprehension that maps device names to batch sizes for data parallelism.

MLOps
batch_sizes = {device: total_batch_size [1] len([2]) for device in [3]}
A. //
B. devices
C. device_list
D. batch_sizes
Common Mistakes
Using '/' causes float division, which is not suitable for batch sizes.
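The completed comprehension, with example values chosen for illustration. Floor division (`//`) keeps each per-device batch size a whole number; `/` would produce floats, which batching APIs generally reject.

```python
total_batch_size = 256
devices = ["gpu0", "gpu1", "gpu2", "gpu3"]

# Each device gets an equal integer share of the global batch.
batch_sizes = {device: total_batch_size // len(devices) for device in devices}
# {'gpu0': 64, 'gpu1': 64, 'gpu2': 64, 'gpu3': 64}
```

Note that when `total_batch_size` is not evenly divisible, `//` silently drops the remainder, so some samples go unused unless the leftover is redistributed.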