Detecting missing values with isna() in Pandas - Time & Space Complexity
We want to understand how the time to find missing values changes as data grows.
How does checking for missing data scale with more rows?
Analyze the time complexity of the following code snippet.
import pandas as pd
data = pd.DataFrame({
    'A': [1, 2, None, 4],
    'B': [None, 2, 3, 4],
    'C': [1, None, None, 4]
})
missing_mask = data.isna()
This code builds a small table, then isna() checks each cell and returns a boolean table of the same shape, with True wherever a value is missing (NaN).
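For intuition, here is what that check produces and one common way to use it. This is a minimal sketch that repeats the setup from the snippet above so it runs on its own:

```python
import pandas as pd

# Same example table as above: 4 rows x 3 columns = 12 cells to check.
data = pd.DataFrame({
    'A': [1, 2, None, 4],
    'B': [None, 2, 3, 4],
    'C': [1, None, None, 4]
})

missing_mask = data.isna()   # boolean table with the same shape as data
print(data.size)             # 12 -- one check per cell
print(missing_mask)          # True where a value is missing
print(missing_mask.sum())    # missing values per column: A=1, B=1, C=2
```

Every cell contributes exactly one True/False entry to the mask, which is why the work grows with rows times columns.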
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Checking each cell in the DataFrame for a missing value.
- How many times: Once per cell, so the number of rows times the number of columns.
As the table gets bigger, the number of checks grows with the total number of cells.
| Input Size (rows x columns = cells) | Approx. Operations |
|---|---|
| 10 x 5 = 50 | 50 checks |
| 100 x 5 = 500 | 500 checks |
| 1000 x 5 = 5000 | 5000 checks |
Pattern observation: The work grows directly with the number of cells.
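A rough way to observe this pattern is to time isna() on progressively larger tables. The sketch below assumes 5 float columns with about 10% missing values, and uses larger row counts than the table above so the timings rise above measurement noise; exact numbers will vary by machine:

```python
import time

import numpy as np
import pandas as pd

for rows in (10_000, 100_000, 1_000_000):
    # Build a rows x 5 table with roughly 10% NaN values.
    values = np.random.rand(rows, 5)
    values[np.random.rand(rows, 5) < 0.1] = np.nan
    df = pd.DataFrame(values, columns=list("ABCDE"))

    start = time.perf_counter()
    df.isna()
    elapsed = time.perf_counter() - start
    print(f"{rows:>9} rows x 5 cols: {elapsed:.6f} s")
```

With the column count fixed, each tenfold increase in rows should make isna() take roughly ten times longer.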
Time Complexity: O(n * m)
This means the time grows in proportion to the number of rows (n) times the number of columns (m). Space complexity is also O(n * m), because isna() returns a boolean mask with one entry per cell of the original DataFrame.
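A small sketch of the space side (an illustration, not part of the original snippet): the boolean mask holds one entry per cell, and memory_usage reports roughly one byte per boolean value per row:

```python
import numpy as np
import pandas as pd

# 1000 rows x 5 columns of floats; the values themselves do not matter here.
df = pd.DataFrame(np.random.rand(1_000, 5), columns=list("ABCDE"))
mask = df.isna()

print(mask.shape)                      # (1000, 5): one boolean per cell
print(mask.memory_usage(index=False))  # ~1000 bytes per boolean column
```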
[X] Wrong: "Checking for missing values is constant time regardless of data size."
[OK] Correct: Each cell must be checked, so more data means more work.
Understanding how data size affects missing value detection helps you reason about data cleaning speed in real projects.
"What if we only check one column for missing values instead of the whole DataFrame? How would the time complexity change?"