
Broadcasting performance implications in NumPy - Practice Problems & Coding Challenges

Challenge - 5 Problems
Problem 1: Predict Output (intermediate)
Broadcasting with large arrays performance
What is the output of the following code snippet regarding the shape of the result array?
NumPy
import numpy as np

large_array = np.ones((1000, 1000))
small_array = np.array([1, 2, 3, 4, 5])
result = large_array + small_array
print(result.shape)
A. (1000, 5)
B. ValueError: operands could not be broadcast together
C. (1000, 1000)
D. (5, 1000)
💡 Hint
Remember how broadcasting works when adding arrays of different shapes.
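As a refresher on the rule the hint points to (this snippet is illustrative and separate from the challenge code): NumPy compares shapes right-to-left, and each pair of trailing dimensions must be equal or one of them must be 1.

```python
import numpy as np

a = np.ones((3, 4))

# Trailing dimension 4 matches b's only dimension, so b stretches
# across all 3 rows.
b = np.array([10, 20, 30, 40])     # shape (4,)
print((a + b).shape)               # (3, 4)

# Trailing dimension 4 does not match 3 (and neither is 1), so this raises.
c = np.array([1, 2, 3])            # shape (3,)
try:
    a + c
except ValueError as err:
    print("ValueError:", err)
```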
Problem 2: Data Output (intermediate)
Memory usage difference in broadcasting
Given these two operations, which one uses less memory during execution?
NumPy
import numpy as np

x = np.ones((1000, 1000))
y = np.ones((1, 1000))

result1 = x + y
result2 = x + y.T
A. result2 uses less memory because y.T broadcasts along columns
B. Both use more memory than the original arrays because broadcasting creates copies
C. result1 uses less memory because y broadcasts along rows
D. Both use the same memory because broadcasting does not copy data
💡 Hint
Broadcasting tries to avoid copying data by creating views.
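To see what "creating views" means here (an illustrative snippet, not part of the challenge), np.broadcast_to makes the stretched operand explicit: it returns a read-only view with a zero stride along the repeated axis, so no data is duplicated.

```python
import numpy as np

x = np.ones((1000, 1000))
y = np.ones((1, 1000))

# The stretched view covers x's full shape but still points at y's buffer.
stretched = np.broadcast_to(y, x.shape)
print(stretched.shape)                 # (1000, 1000)
print(stretched.strides)               # (0, 8) -- zero stride: the repeated
                                       # axis costs no extra memory
print(np.shares_memory(stretched, y))  # True
```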
Problem 3: Visualization (advanced)
Visualizing broadcasting performance impact
Which option correctly plots the time taken to add arrays with and without broadcasting?
NumPy
import numpy as np
import matplotlib.pyplot as plt
import time

sizes = [10, 100, 1000, 5000]
time_broadcast = []
time_no_broadcast = []

for size in sizes:
    a = np.ones((size, size))
    b = np.ones((size, 1))
    start = time.time()
    c = a + b
    time_broadcast.append(time.time() - start)

    b2 = np.ones((size, size))
    start = time.time()
    c2 = a + b2
    time_no_broadcast.append(time.time() - start)

plt.plot(sizes, time_broadcast, label='Broadcasting')
plt.plot(sizes, time_no_broadcast, label='No Broadcasting')
plt.xlabel('Array size (N x N)')
plt.ylabel('Time (seconds)')
plt.legend()
plt.title('Performance: Broadcasting vs No Broadcasting')
plt.show()
A. A scatter plot showing no difference in time
B. A bar chart showing broadcasting is slower for all sizes
C. A line plot showing broadcasting is faster for large arrays
D. A pie chart showing the time distribution between methods
💡 Hint
Broadcasting avoids copying large arrays, improving speed.
Problem 4: 🔧 Debug (advanced)
Identify the cause of slow broadcasting operation
Why does this broadcasting operation run slower than expected?
NumPy
import numpy as np

x = np.ones((1000, 1000))
y = np.ones((1000, 1))

result = x + y.T  # Broadcasting with transpose
A. Because y.T creates a copy, preventing efficient broadcasting
B. Because the shapes are incompatible and cause a runtime error
C. Because x and y.T have the same shape, so no broadcasting occurs
D. Because y.T is a view, and broadcasting is always slow with views
💡 Hint
Check if transpose creates a copy or a view.
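The check the hint suggests can be done directly (illustrative snippet, separate from the challenge code): .T returns a view that shares its buffer with the original array.

```python
import numpy as np

y = np.ones((1000, 1))
t = y.T

# The transpose is a view, not a copy: it records y as its base and
# shares y's memory.
print(t.base is y)             # True
print(np.shares_memory(y, t))  # True
print(t.shape)                 # (1, 1000)
```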
Problem 5: 🚀 Application (expert)
Optimizing broadcasting in a real-world scenario
You have a dataset with shape (10000, 50) and a vector of shape (50,). You want to add the vector to each row efficiently. Which approach is best for performance?
A. Use direct addition: dataset + vector
B. Use a for loop to add the vector to each row individually
C. Convert the vector to a list and add using a list comprehension
D. Reshape the vector to (50, 1) and add: dataset + vector.reshape(50, 1)
💡 Hint
Consider how broadcasting works with shapes and vectorization.
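For intuition on the vectorization angle of the hint (an illustrative sketch using the shapes from the problem, with random data), a broadcast addition and an explicit Python loop compute the same result, but the broadcast runs in a single vectorized C loop:

```python
import numpy as np

rng = np.random.default_rng(0)
dataset = rng.standard_normal((10000, 50))
vector = rng.standard_normal(50)

# Broadcast form: the (50,) vector is applied to every row at once.
fast = dataset + vector

# Equivalent row-by-row loop in Python (far slower per element).
slow = np.empty_like(dataset)
for i in range(dataset.shape[0]):
    slow[i] = dataset[i] + vector

print(np.allclose(fast, slow))  # True
```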