
Bounding box handling in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ Bounding Box Mastery: get all challenges correct to earn this badge!
Test your skills under time pressure!
Predict Output
intermediate · 2:00
Output of bounding box area calculation
What is the output of the following code that calculates the area of bounding boxes stored as (x_min, y_min, x_max, y_max)?
PyTorch
import torch
boxes = torch.tensor([[1, 2, 4, 6], [0, 0, 3, 3], [2, 2, 5, 5]])
widths = boxes[:, 2] - boxes[:, 0]
heights = boxes[:, 3] - boxes[:, 1]
areas = widths * heights
print(areas.tolist())
A. [15, 9, 12]
B. [12, 9, 9]
C. [12, 6, 9]
D. [9, 9, 9]
💡 Hint
Remember area = width * height, where width = x_max - x_min and height = y_max - y_min.
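The formula in the hint can be sketched with a small, self-contained example (the boxes below are hypothetical, deliberately different from the ones in the question):

```python
import torch

# Hypothetical boxes in (x_min, y_min, x_max, y_max) format
boxes = torch.tensor([[0, 0, 2, 5], [1, 1, 4, 3]])
widths = boxes[:, 2] - boxes[:, 0]   # x_max - x_min -> [2, 3]
heights = boxes[:, 3] - boxes[:, 1]  # y_max - y_min -> [5, 2]
areas = widths * heights             # element-wise product -> [10, 6]
print(areas.tolist())  # [10, 6]
```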
Model Choice
intermediate · 2:00
Choosing the right bounding box format for IoU calculation
Which bounding box format is best suited for calculating Intersection over Union (IoU) directly without conversion?
A. Top-left and bottom-right corners (x_min, y_min, x_max, y_max)
B. Top-left corner with width and height (x_min, y_min, w, h)
C. Center coordinates with corner points (cx, cy, x_min, y_min)
D. Center coordinates with width and height (cx, cy, w, h)
💡 Hint
IoU requires computing the overlap area, which is easiest with corner coordinates.
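To see why corner coordinates make the overlap computation direct, here is a minimal IoU sketch for two boxes in (x_min, y_min, x_max, y_max) format (the boxes are made-up examples):

```python
import torch

def iou(box_a, box_b):
    # Intersection rectangle: max of the mins, min of the maxes
    x1 = torch.max(box_a[0], box_b[0])
    y1 = torch.max(box_a[1], box_b[1])
    x2 = torch.min(box_a[2], box_b[2])
    y2 = torch.min(box_a[3], box_b[3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

a = torch.tensor([0.0, 0.0, 2.0, 2.0])
b = torch.tensor([1.0, 1.0, 3.0, 3.0])
print(iou(a, b).item())  # intersection 1, union 7 -> ~0.143
```

With (cx, cy, w, h) boxes you would first have to convert back to corners before taking these maxes and mins, which is why corner format is the natural fit.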
Hyperparameter
advanced · 2:00
Effect of IoU threshold on Non-Maximum Suppression (NMS)
What is the effect of increasing the IoU threshold parameter in Non-Maximum Suppression when filtering bounding boxes?
A. Boxes are sorted by confidence score instead of IoU
B. Fewer boxes are kept, resulting in stricter suppression
C. No change in number of boxes kept
D. More boxes are kept, allowing more overlapping detections
💡 Hint
Higher IoU threshold means boxes must overlap more to be suppressed.
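You can observe this effect with a toy NMS loop written in plain PyTorch (a simplified sketch, not torchvision's implementation; the boxes and scores are hypothetical):

```python
import torch

def pairwise_iou(boxes):
    # boxes: (N, 4) in (x_min, y_min, x_max, y_max) format
    x1 = torch.max(boxes[:, None, 0], boxes[None, :, 0])
    y1 = torch.max(boxes[:, None, 1], boxes[None, :, 1])
    x2 = torch.min(boxes[:, None, 2], boxes[None, :, 2])
    y2 = torch.min(boxes[:, None, 3], boxes[None, :, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (areas[:, None] + areas[None, :] - inter)

def simple_nms(boxes, scores, iou_threshold):
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        i = order[0].item()
        keep.append(i)
        if order.numel() == 1:
            break
        ious = pairwise_iou(boxes)[i, order[1:]]
        # Survivors are boxes that do NOT exceed the overlap threshold
        order = order[1:][ious <= iou_threshold]
    return keep

boxes = torch.tensor([[0., 0., 10., 10.],
                      [1., 1., 11., 11.],    # IoU with box 0 is ~0.68
                      [20., 20., 30., 30.]])
scores = torch.tensor([0.9, 0.8, 0.7])
print(simple_nms(boxes, scores, 0.3))  # [0, 2]: box 1 suppressed
print(simple_nms(boxes, scores, 0.9))  # [0, 1, 2]: all boxes survive
```

Raising the threshold from 0.3 to 0.9 means boxes must overlap almost completely before they are discarded, so more overlapping detections survive.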
🔧 Debug
advanced · 2:00
Debugging incorrect bounding box clipping
Given bounding boxes and image size, which option correctly clips boxes to image boundaries without errors?
PyTorch
import torch
boxes = torch.tensor([[50, 30, 200, 180], [-10, 20, 100, 150], [30, 40, 500, 400]])
image_size = (300, 300)  # height, width
# Clip boxes so coordinates stay within image boundaries
A
boxes[:, 0] = boxes[:, 0].clamp(0, image_size[1])
boxes[:, 1] = boxes[:, 1].clamp(0, image_size[0])
boxes[:, 2] = boxes[:, 2].clamp(0, image_size[1])
boxes[:, 3] = boxes[:, 3].clamp(0, image_size[0])
B
boxes[:, 0] = boxes[:, 0].clamp(0, image_size[0])
boxes[:, 1] = boxes[:, 1].clamp(0, image_size[1])
boxes[:, 2] = boxes[:, 2].clamp(0, image_size[0])
boxes[:, 3] = boxes[:, 3].clamp(0, image_size[1])
C
boxes[:, 0] = boxes[:, 0].clamp(0, image_size[1]-1)
boxes[:, 1] = boxes[:, 1].clamp(0, image_size[0]-1)
boxes[:, 2] = boxes[:, 2].clamp(0, image_size[1]-1)
boxes[:, 3] = boxes[:, 3].clamp(0, image_size[0]-1)
D
boxes[:, 0] = boxes[:, 0].clamp(0, image_size[0]-1)
boxes[:, 1] = boxes[:, 1].clamp(0, image_size[0]-1)
boxes[:, 2] = boxes[:, 2].clamp(0, image_size[1]-1)
boxes[:, 3] = boxes[:, 3].clamp(0, image_size[1]-1)
💡 Hint
Remember image_size is (height, width). x coordinates clamp to width, y coordinates clamp to height.
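A minimal sketch of the convention the hint describes, using fresh example boxes (note that whether you clamp to the size or to size - 1 depends on whether coordinates are treated as continuous values or as integer pixel indices):

```python
import torch

boxes = torch.tensor([[-10., 20., 100., 150.], [30., 40., 500., 400.]])
height, width = 300, 300  # image_size is (height, width)

# x coordinates (columns 0 and 2) clamp to width; y (columns 1 and 3) to height
boxes[:, [0, 2]] = boxes[:, [0, 2]].clamp(0, width)
boxes[:, [1, 3]] = boxes[:, [1, 3]].clamp(0, height)
print(boxes.tolist())  # [[0.0, 20.0, 100.0, 150.0], [30.0, 40.0, 300.0, 300.0]]
```

Mixing the axes up (clamping x to height and y to width) goes unnoticed on square images like this one, which is exactly the kind of bug the question is probing for.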
🧠 Conceptual
expert · 3:00
Understanding bounding box regression targets in object detection
In object detection, bounding box regression predicts offsets relative to anchor boxes. Which statement correctly describes why offsets are predicted instead of absolute coordinates?
A. Offsets normalize the prediction space, making training more stable and easier to learn
B. Absolute coordinates are always better because they are direct and simpler
C. Offsets reduce the number of bounding boxes needed during inference
D. Absolute coordinates allow the model to ignore anchor boxes completely
💡 Hint
Think about how predicting relative changes helps the model focus on small adjustments.
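One common way this relative encoding is done is the Faster R-CNN-style parameterization, sketched below with a hypothetical anchor and ground-truth box (both in (cx, cy, w, h) format): shifts are normalized by the anchor size and scale changes live in log space, so the targets stay small regardless of where the anchor sits in the image.

```python
import torch

# Hypothetical anchor and ground-truth box, both as (cx, cy, w, h)
anchor = torch.tensor([100., 100., 50., 50.])
gt     = torch.tensor([110., 95., 60., 40.])

tx = (gt[0] - anchor[0]) / anchor[2]   # center shift, normalized by anchor width
ty = (gt[1] - anchor[1]) / anchor[3]   # center shift, normalized by anchor height
tw = torch.log(gt[2] / anchor[2])      # log-scale width ratio
th = torch.log(gt[3] / anchor[3])      # log-scale height ratio
print([round(v.item(), 3) for v in (tx, ty, tw, th)])  # [0.2, -0.1, 0.182, -0.223]
```

The regression head only has to learn these small, roughly zero-centered numbers instead of raw pixel coordinates that can range over the whole image.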