Array Arithmetic (Element-wise) in Python Data Analysis - Time & Space Complexity
When we perform arithmetic on arrays, such as adding or multiplying the numbers at matching positions, every element must be processed. We want to know how this processing time changes as the array gets bigger.
How does the work grow as the array size grows?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

arr1 = np.array([1, 2, 3, 4, 5])
arr2 = np.array([10, 20, 30, 40, 50])
result = arr1 + arr2  # element-wise addition
```
This code adds two arrays by adding each pair of elements at the same position.
Identify the repeated work: loops, recursion, or array traversals.
- Primary operation: Adding each element from the first array to the corresponding element in the second array.
- How many times: Once for each element in the arrays, so as many times as the array length.
When the array size doubles, the number of additions also doubles because each element needs to be added.
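To make the hidden repetition visible, here is a sketch of what `arr1 + arr2` does under the hood, written as an explicit Python loop (NumPy actually runs this loop in compiled C, but the number of additions is the same):

```python
import numpy as np

arr1 = np.array([1, 2, 3, 4, 5])
arr2 = np.array([10, 20, 30, 40, 50])

# One addition per index: n elements -> n additions in total.
result = np.empty_like(arr1)
for i in range(len(arr1)):
    result[i] = arr1[i] + arr2[i]

print(result.tolist())  # [11, 22, 33, 44, 55]
```

The loop runs exactly `len(arr1)` times, which is why the operation count in the table below tracks the array length one-for-one.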
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 additions |
| 100 | 100 additions |
| 1000 | 1000 additions |
Pattern observation: The work grows linearly with the array size.
Time Complexity: O(n)
This means the running time grows in direct proportion to the number of elements in the arrays. The result array also stores one value per element, so the extra space used is O(n) as well.
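A rough timing sketch can illustrate the linear growth (illustrative only, not a rigorous benchmark; exact times depend on your machine):

```python
import time
import numpy as np

# Doubling n should roughly double the time spent on
# element-wise addition, since each element is added once.
for n in (1_000_000, 2_000_000, 4_000_000):
    a = np.ones(n)
    b = np.ones(n)
    start = time.perf_counter()
    c = a + b  # n element-wise additions -> O(n) time
    elapsed = time.perf_counter() - start
    print(f"n={n:>9}: {elapsed:.6f} s")
```

Run it a few times and compare rows: the times will wobble, but the overall trend should be close to proportional to n.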
[X] Wrong: "Adding two arrays is instant and does not depend on size."
[OK] Correct: Even though the code looks like a single operation, NumPy still performs one addition per pair of elements (in a fast compiled loop), so bigger arrays take proportionally more time.
Understanding how element-wise operations scale helps you explain efficiency clearly and shows you know how data size affects performance.
"What if we used two-dimensional arrays instead of one-dimensional? How would the time complexity change?"
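One way to explore that question: for a 2-D array with `rows x cols` elements, element-wise addition still touches every cell exactly once, so the work is O(rows * cols), which is still linear in the total number of elements. A minimal sketch:

```python
import numpy as np

# 2-D case: one addition per cell, so 2 * 3 = 6 additions here.
m1 = np.arange(6).reshape(2, 3)      # [[0, 1, 2], [3, 4, 5]]
m2 = np.ones((2, 3), dtype=int)      # [[1, 1, 1], [1, 1, 1]]
total = m1 + m2

print(total.tolist())  # [[1, 2, 3], [4, 5, 6]]
```

The complexity class does not change with the number of dimensions; what matters is the total element count.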