Dropping Missing Values (dropna) in Python Data Analysis - Time & Space Complexity
When we remove missing values from a dataset, we want to know how the running time changes as the data grows.
How does the time needed change as the number of rows increases?
Analyze the time complexity of the following code snippet.
```python
import pandas as pd

# A small DataFrame where None marks a missing value (stored as NaN).
data = pd.DataFrame({
    'A': [1, 2, None, 4],
    'B': [None, 2, 3, 4],
    'C': [1, None, None, 4]
})

# Drop every row that contains at least one missing value.
clean_data = data.dropna()  # only the last row (index 3) survives
```
This code removes all rows that have any missing values from the DataFrame.
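To see where the row-by-column work comes from, we can reproduce the same result by hand. This is a sketch using `notna()` and boolean indexing, not a claim about `dropna`'s internal implementation:

```python
import pandas as pd

data = pd.DataFrame({
    'A': [1, 2, None, 4],
    'B': [None, 2, 3, 4],
    'C': [1, None, None, 4]
})

# notna() visits every cell and produces a boolean DataFrame of the same
# shape; all(axis=1) keeps only rows where every cell is non-missing.
# Every cell is inspected once, which is the rows x columns work.
mask = data.notna().all(axis=1)
manual_clean = data[mask]

assert manual_clean.equals(data.dropna())
```

The equivalence check at the end confirms that the manual version keeps exactly the same rows as `dropna()`.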
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Checking each cell in the DataFrame for missing values.
- How many times: Once for every cell (row x column) in the data.
As the number of rows grows, the time to check all cells grows proportionally.
| Input Size (rows) | Approx. Operations (checks) |
|---|---|
| 10 | 10 x columns |
| 100 | 100 x columns |
| 1000 | 1000 x columns |
Pattern observation: For a fixed number of columns, the time grows linearly with the number of rows.
Time Complexity: O(n * m), where n is the number of rows and m is the number of columns.
This means the time to drop missing values grows directly with the number of rows and columns in the data.
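The linear pattern in the table can be checked empirically. A minimal timing sketch (the helper name `time_dropna`, the missing-value fraction, and the row counts are illustrative choices, not from the original text):

```python
import time

import numpy as np
import pandas as pd

def time_dropna(n_rows, n_cols=3, frac_missing=0.1, seed=0):
    """Build a DataFrame with n_rows rows and time a single dropna() call."""
    rng = np.random.default_rng(seed)
    values = rng.random((n_rows, n_cols))
    # Blank out roughly frac_missing of the cells.
    mask = rng.random((n_rows, n_cols)) < frac_missing
    values[mask] = np.nan
    df = pd.DataFrame(values)
    start = time.perf_counter()
    df.dropna()
    return time.perf_counter() - start

# Each tenfold increase in rows should roughly multiply the time by ten.
for n in (10_000, 100_000, 1_000_000):
    print(n, time_dropna(n))
```

Timings are noisy for small inputs, so the proportionality is clearest at the larger sizes.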
[X] Wrong: "Dropping missing values takes the same time no matter how big the data is."
[OK] Correct: The method must check every row and column to find missing values, so more data means more work.
Understanding how data cleaning steps like dropping missing values scale helps you explain your approach clearly and confidently.
"What if we only dropped rows missing values in a single column? How would the time complexity change?"