Why reshaping data matters in pandas - Performance Analysis
When we reshape data in pandas, we change how it is organized without changing the values themselves. Understanding how long reshaping takes helps us work efficiently with large datasets: what we want to know is how the time needed grows as the data grows.
Analyze the time complexity of the following code snippet.
```python
import pandas as pd

df = pd.DataFrame({
    'A': ['foo', 'bar', 'foo', 'bar'],
    'B': ['one', 'one', 'two', 'two'],
    'C': [1, 2, 3, 4],
    'D': [5, 6, 7, 8]
})

reshaped = df.pivot(index='A', columns='B', values='C')
```
This code reshapes the data by pivoting: the unique values in column A become the index, the unique values in column B become new columns, and each cell holds the matching value from column C.
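For reference, a quick way to inspect the reshaped result (same frame as above):

```python
import pandas as pd

# Same frame as in the snippet above
df = pd.DataFrame({
    'A': ['foo', 'bar', 'foo', 'bar'],
    'B': ['one', 'one', 'two', 'two'],
    'C': [1, 2, 3, 4],
    'D': [5, 6, 7, 8]
})

# Unique A values -> index, unique B values -> columns,
# each cell -> the matching C value
reshaped = df.pivot(index='A', columns='B', values='C')
print(reshaped)
```

Each of the four original rows contributes exactly one cell to the 2x2 result.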
Identify the loops, recursion, or array traversals that repeat:
- Primary operation: pandas scans each row to place values in the new shape.
- How many times: Once for each row in the original data.
As the number of rows grows, pandas must process each row to reshape the data.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 operations |
| 100 | About 100 operations |
| 1000 | About 1000 operations |
Pattern observation: The work grows directly with the number of rows.
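One informal way to check this pattern is to time `pivot` on frames of growing size. This is only a sketch: absolute timings vary by machine, and the synthetic `row`/`col`/`val` columns are an assumption chosen so that every (index, columns) pair is unique.

```python
import time
import numpy as np
import pandas as pd

timings = {}
for n in [1_000, 10_000, 100_000]:
    # n rows with unique (row, col) pairs so pivot is valid:
    # 10 columns per row group
    frame = pd.DataFrame({
        'row': np.arange(n) // 10,
        'col': np.arange(n) % 10,
        'val': np.arange(n),
    })
    start = time.perf_counter()
    result = frame.pivot(index='row', columns='col', values='val')
    timings[n] = time.perf_counter() - start
    # Every one of the n values lands in exactly one cell
    assert result.size == n

for n, t in timings.items():
    print(f"n={n:>7}: {t:.4f}s")
```

On a typical machine the timings grow roughly in proportion to n, consistent with the table above, though small inputs are dominated by fixed overhead.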
Time Complexity: O(n)
This means the time to reshape grows linearly: doubling the number of rows roughly doubles the work.
[X] Wrong: "Reshaping data is instant no matter how big the data is."
[OK] Correct: pandas must look at each row to rearrange it, so larger data takes proportionally more time.
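One observable consequence of this row-by-row scan: `pivot` refuses duplicate (index, columns) pairs, which it can only detect by examining every row. A small demonstration:

```python
import pandas as pd

# Two rows map to the same ('foo', 'one') cell -> ambiguous pivot
dup = pd.DataFrame({
    'A': ['foo', 'foo'],
    'B': ['one', 'one'],
    'C': [1, 2],
})

raised = False
try:
    dup.pivot(index='A', columns='B', values='C')
except ValueError:
    # pandas scanned the rows and found the duplicate pair
    raised = True

print("duplicate pair detected:", raised)
```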
Knowing how reshaping scales helps you explain your choices when working with data in real projects.
"What if we used pivot_table with aggregation instead of pivot? How would the time complexity change?"