NaN and None in Pandas - Time & Space Complexity
We want to understand how the time needed to handle missing data in pandas grows with the data; specifically, how operations involving NaN and None values scale with the size of the DataFrame.
Analyze the time complexity of the following code snippet.
```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'A': [1, 2, None, 4, np.nan],
    'B': [np.nan, 2, 3, None, 5]
})

# Check for missing values
missing = df.isna()

# Fill missing values
filled = df.fillna(0)
```
This code creates a DataFrame with missing values, checks which values are missing, and fills them with zero.
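To make this concrete, here is a sketch of what the two results contain when the snippet above is run. Note that `None` placed in a numeric column is stored as `np.nan`, so `isna()` flags both kinds of missing value the same way:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'A': [1, 2, None, 4, np.nan],
    'B': [np.nan, 2, 3, None, 5]
})

missing = df.isna()    # boolean mask: True where a cell is missing
filled = df.fillna(0)  # same shape, missing cells replaced by 0

print(missing['A'].tolist())  # [False, False, True, False, True]
print(filled['A'].tolist())   # [1.0, 2.0, 0.0, 4.0, 0.0]
```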
Identify the loops, recursion, or array traversals — the operations that repeat:
- Primary operation: pandas checks each cell to see if it is NaN or None.
- How many times: Once for every cell in the DataFrame (rows x columns).
As the number of rows and columns grows, the number of cells to check grows too.
| Input Size (n cells) | Approx. Operations |
|---|---|
| 10 | About 10 checks |
| 100 | About 100 checks |
| 1000 | About 1000 checks |
Pattern observation: The work grows directly with the number of cells; doubling cells doubles the work.
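The pattern can be observed directly with a rough timing sketch. The absolute times depend on your machine, but the elapsed time for `isna()` should roughly double each time the number of cells doubles:

```python
import time

import numpy as np
import pandas as pd

for n_rows in (100_000, 200_000, 400_000):
    # Build a 2-column DataFrame with ~10% missing values
    data = np.random.default_rng(0).random((n_rows, 2))
    df = pd.DataFrame(data, columns=['A', 'B'])
    df[df < 0.1] = np.nan

    start = time.perf_counter()
    df.isna()
    elapsed = time.perf_counter() - start
    print(f"{n_rows:>7} rows x 2 cols: {elapsed:.5f} s")
```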
Time Complexity: O(n)
This means the time to check or fill missing values grows in a straight line with the number of cells.
[X] Wrong: "Checking for missing values is instant no matter how big the data is."
[OK] Correct: Each cell must be checked, so bigger data means more work and more time.
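A quick check, using the same DataFrame as above, that every cell really is inspected: `isna()` produces one boolean per cell, so its output has the same shape as the input, and both `None` and `np.nan` count as missing.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'A': [1, 2, None, 4, np.nan],
    'B': [np.nan, 2, 3, None, 5]
})

# One boolean per cell: rows x columns checks were performed
print(df.isna().shape)        # (5, 2)
print(df.isna().sum().sum())  # 4 missing cells in total
```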
Understanding how missing data operations scale helps you write efficient data cleaning code, a key skill in data science roles.
"What if we only check for missing values in one column instead of the whole DataFrame? How would the time complexity change?"
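As a starting point for that question, note that a single-column check only has to visit that column's cells. A sketch (the DataFrame here is an arbitrary example):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.default_rng(1).random((5, 3)),
                  columns=['A', 'B', 'C'])

# Whole frame: every cell is checked -> O(rows * columns)
print(df.isna().shape)       # (5, 3)

# One column: only that column's cells are checked -> O(rows)
print(df['A'].isna().shape)  # (5,)
```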