Pearson correlation in SciPy - Time & Space Complexity
We want to understand how the time to calculate Pearson correlation changes as the size of the data grows.
How does the number of calculations increase when we have more data points?
Analyze the time complexity of the following code snippet.
```python
import numpy as np
from scipy.stats import pearsonr

n = 1000  # example size
x = np.random.rand(n)
y = np.random.rand(n)

corr, p_value = pearsonr(x, y)
```
This code calculates the Pearson correlation coefficient between two arrays of length n.
Identify the loops, recursion, or array traversals that repeat as the input grows.
- Primary operation: Summation over all elements in the arrays to compute means, variances, and covariance.
- How many times: Each summation visits every element once, and there is only a fixed number of such passes (means, variances, covariance), so the total work is proportional to n.
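The summations above can be written out directly. The helper below (`pearson_manual` is a hypothetical name, not SciPy's actual implementation) is a minimal sketch that makes the structure visible: a fixed number of O(n) passes over the arrays, with no nested loops.

```python
import numpy as np

def pearson_manual(x, y):
    # Each line below is a single pass over n elements: O(n).
    n = len(x)
    mean_x = x.sum() / n                 # n additions
    mean_y = y.sum() / n                 # n additions
    dx = x - mean_x                      # n subtractions
    dy = y - mean_y                      # n subtractions
    cov = (dx * dy).sum()                # covariance numerator: n multiplications
    var_x = (dx * dx).sum()              # n multiplications
    var_y = (dy * dy).sum()              # n multiplications
    return cov / np.sqrt(var_x * var_y)  # a constant number of O(n) passes total

rng = np.random.default_rng(0)
x = rng.random(1000)
y = rng.random(1000)
r = pearson_manual(x, y)
```

Counting the passes: roughly seven traversals of length n, which is still O(n) overall, since constant factors do not change the growth rate.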
As the number of data points n increases, the number of calculations grows roughly in direct proportion.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 operations |
| 100 | About 100 operations |
| 1000 | About 1000 operations |
Pattern observation: Doubling the data size roughly doubles the work needed.
Time Complexity: O(n)
This means the time to compute Pearson correlation grows linearly with the number of data points.
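One way to check the linear pattern empirically is to time `pearsonr` while doubling `n`. This is a rough sketch: absolute times depend on your machine, and small inputs are dominated by fixed overhead, so fairly large arrays are used.

```python
import time
import numpy as np
from scipy.stats import pearsonr

# Doubling n should roughly double the measured time for large arrays.
times = {}
for n in (100_000, 200_000, 400_000):
    x = np.random.rand(n)
    y = np.random.rand(n)
    t0 = time.perf_counter()
    pearsonr(x, y)
    times[n] = time.perf_counter() - t0
    print(f"n={n:>7}: {times[n]:.6f} s")
```

Expect the ratios between successive timings to hover near 2, not near 4, which is what distinguishes linear from quadratic growth in practice.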
[X] Wrong: "Calculating Pearson correlation takes quadratic time because it compares every pair of points."
[OK] Correct: The calculation uses sums over the data arrays, not pairwise comparisons, so it only needs to look at each data point once.
Understanding how Pearson correlation scales helps you explain performance when working with large datasets, a useful skill in data science roles.
"What if we calculated Pearson correlation for multiple pairs of arrays, each of length n? How would the time complexity change?"