apply() with lambda functions in Pandas - Time & Space Complexity
We want to understand how the time needed to run apply() with lambda functions changes as the data grows.
How does the work increase when we have more rows in our data?
Analyze the time complexity of the following code snippet.
```python
import pandas as pd

df = pd.DataFrame({'A': range(1000)})
df['B'] = df['A'].apply(lambda x: x * 2)
```
This code creates a column 'B' by doubling each value in column 'A' using apply() with a lambda function.
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: The lambda function is called once for each row in column 'A'.
- How many times: Exactly as many times as there are rows in the DataFrame.
As the number of rows increases, the total work grows in direct proportion.
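One way to confirm the one-call-per-row claim is to count the invocations directly. This is a quick sketch (the counter and the `double` helper are illustrative additions, not part of the original snippet):

```python
import pandas as pd

calls = 0

def double(x):
    # Same doubling logic as the lambda, wrapped so we can count invocations.
    global calls
    calls += 1
    return x * 2

df = pd.DataFrame({'A': range(1000)})
df['B'] = df['A'].apply(double)

print(calls)  # 1000 -- exactly one call per row
```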
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 calls to lambda |
| 100 | 100 calls to lambda |
| 1000 | 1000 calls to lambda |
Pattern observation: Doubling the input size doubles the number of lambda calls and total work.
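The doubling pattern can be checked empirically with a rough timing sketch (absolute numbers will vary by machine; the sizes and repeat count here are arbitrary choices for illustration):

```python
import pandas as pd
import timeit

for n in (10_000, 20_000, 40_000):
    df = pd.DataFrame({'A': range(n)})
    # Time 5 runs of the apply() call; expect roughly 2x time per 2x rows.
    t = timeit.timeit(lambda: df['A'].apply(lambda x: x * 2), number=5)
    print(f"n={n:>6}: {t:.4f}s")
```

On most machines the printed times roughly double from one row count to the next, matching the O(n) prediction.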
Time Complexity: O(n)
This means the time needed grows linearly with the number of rows in the DataFrame.
[X] Wrong: "Using apply() with a lambda is always slow because it loops over the data multiple times."
[OK] Correct: apply() makes a single pass over the data, calling the lambda once per row, so the time grows linearly, not faster. What makes it slower than vectorized operations is the per-call Python overhead, not extra passes.
Understanding how apply() scales helps you explain your data processing choices clearly and confidently.
"What if we replaced the lambda with a vectorized operation instead? How would the time complexity change?"
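As a hedged sketch of the comparison that question invites: both versions below are O(n), but the vectorized one pushes the per-element loop down into optimized C inside NumPy, so its constant factor is much smaller.

```python
import pandas as pd

df = pd.DataFrame({'A': range(1000)})

# apply() + lambda: one Python-level function call per row.
# O(n), but with high per-call overhead.
slow = df['A'].apply(lambda x: x * 2)

# Vectorized: still O(n), but the loop runs in compiled code.
fast = df['A'] * 2

print(slow.equals(fast))  # True -- same result, different constant factors
```

So replacing the lambda with a vectorized operation does not change the big-O class; it shrinks the constant hidden inside it.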