merge() for SQL-like joins in Pandas - Time & Space Complexity
When we join two tables with pandas `merge()`, the time taken depends on the sizes of both tables.
We want to understand how that time grows as the tables get bigger.
Analyze the time complexity of the following code snippet.
```python
import pandas as pd

n, m = 1000, 1000  # number of rows in each dataframe

# Two dataframes with n and m rows
df1 = pd.DataFrame({'key': range(n), 'value1': range(n)})
df2 = pd.DataFrame({'key': range(m), 'value2': range(m)})

# Merge on the key column
df_merged = pd.merge(df1, df2, on='key', how='inner')
```
This code merges two dataframes on a common column called 'key' using an inner join.
Identify the operations that repeat: loops, recursion, or array traversals.
- Primary operation: Hash-based matching of keys between the two dataframes.
- How many times: each row in both dataframes is hashed and/or looked up exactly once.
As the number of rows in both dataframes grows, the work to find matching keys grows too.
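The hash-based matching described above can be sketched in plain Python. This is an illustrative model only, not pandas' actual implementation (which runs in optimized C code), but it shows why each row is touched once: a build phase over one table and a probe phase over the other.

```python
# Minimal sketch of a hash inner join: build a hash table from one
# side's keys, then probe it once per row of the other side.

def hash_inner_join(left, right):
    """Inner-join two lists of (key, value) pairs in O(n + m) time."""
    # Build phase: hash every key of the right table once -> O(m)
    lookup = {}
    for key, value in right:
        lookup.setdefault(key, []).append(value)
    # Probe phase: look up every key of the left table once -> O(n)
    result = []
    for key, value in left:
        for match in lookup.get(key, []):
            result.append((key, value, match))
    return result

df1_rows = [(0, 'a'), (1, 'b'), (2, 'c')]
df2_rows = [(1, 'x'), (2, 'y'), (3, 'z')]
print(hash_inner_join(df1_rows, df2_rows))  # [(1, 'b', 'x'), (2, 'c', 'y')]
```

Only keys 1 and 2 appear in both lists, so only those rows survive the inner join, mirroring `how='inner'` in `merge()`.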
| Input Size (n, m) | Approx. Operations |
|---|---|
| 10, 10 | About 10 + 10 = 20 operations |
| 100, 100 | About 100 + 100 = 200 operations |
| 1000, 1000 | About 1000 + 1000 = 2000 operations |
Pattern observation: The number of operations grows roughly as the sum of the sizes of the two dataframes.
Time Complexity: O(n + m)
This means the merge time grows linearly with the total number of rows across both tables.
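The counts in the table above can be reproduced by instrumenting a toy hash join. The sketch below (a simplified model, not pandas internals) tallies one operation per build insertion and one per probe lookup, giving exactly n + m:

```python
# Count the operations a hash join performs: m hash insertions
# to build the table, plus n lookups to probe it -> n + m total.

def count_join_operations(n, m):
    ops = 0
    lookup = {}
    for key in range(m):       # build phase: m insertions
        lookup[key] = key
        ops += 1
    for key in range(n):       # probe phase: n lookups
        _ = lookup.get(key)
        ops += 1
    return ops

for size in (10, 100, 1000):
    print(size, count_join_operations(size, size))  # 20, 200, 2000
```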
[X] Wrong: "Merging two dataframes always takes time proportional to the size of just one dataframe."
[OK] Correct: The merge compares rows from both dataframes, so the time depends on both sizes, not just one.
Understanding how merge time grows helps you explain performance in data pipelines and shows that you reason clearly about efficiency.
"What if we merge on a column that is already sorted or indexed? How would the time complexity change?"
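One way to reason about this question: if both key columns are already sorted, matches can be found with a single two-pointer scan instead of a hash table. The sketch below assumes sorted, unique keys (an assumption for illustration); the scan is still O(n + m), but it avoids building a hash table and can be cache-friendlier.

```python
# Sketch of a sort-merge inner join on already-sorted, unique keys:
# a single two-pointer pass over both lists, O(n + m), no hash table.

def sorted_inner_join(left, right):
    """Join two key-sorted lists of (key, value) pairs with unique keys."""
    i = j = 0
    result = []
    while i < len(left) and j < len(right):
        if left[i][0] == right[j][0]:
            result.append((left[i][0], left[i][1], right[j][1]))
            i += 1
            j += 1
        elif left[i][0] < right[j][0]:
            i += 1   # advance the side with the smaller key
        else:
            j += 1
    return result

print(sorted_inner_join([(1, 'b'), (2, 'c')], [(1, 'x'), (3, 'z')]))
# [(1, 'b', 'x')]
```

If the data is *not* already sorted, sorting first costs O(n log n + m log m), which is why hash joins are the default when no sorted index is available.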