Data rotation and cleanup on a Raspberry Pi - Time & Space Complexity
When we rotate and clean up data, we want to know how long it takes as the data grows.
We ask: how does the work increase when there is more data to handle?
Analyze the time complexity of the following code snippet.
```python
# Assume logs is a list of log entries
max_logs = 100
if len(logs) > max_logs:
    # Remove oldest logs to keep the list at a fixed size
    logs = logs[-max_logs:]
# Add new log entry
logs.append(new_log)
```
This code keeps only the newest 100 logs by removing older ones, then adds a new log.
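To make the snippet runnable end to end, it can be wrapped in a small helper; `rotate_and_append` is a hypothetical name used here for illustration, not part of the original code:

```python
MAX_LOGS = 100

def rotate_and_append(logs, new_log, max_logs=MAX_LOGS):
    """Keep only the newest max_logs entries, then append new_log."""
    if len(logs) > max_logs:
        # Slicing copies at most max_logs items: a fixed amount of work
        logs = logs[-max_logs:]
    logs.append(new_log)
    return logs

logs = [f"entry {i}" for i in range(250)]
logs = rotate_and_append(logs, "entry 250")
print(len(logs))  # 101
```

Note the list briefly reaches 101 entries (100 kept plus the new one) before the next call trims it again, which mirrors the behavior of the original snippet.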
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Slicing the list to keep the last 100 logs.
- How many times: This happens once each time new data is added and the list is too long.
As the number of logs grows, the slicing operation always handles at most 100 items.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 1 (under the limit; just the append) |
| 100 | About 100 (at the limit; the slice copies ~100 items) |
| 1000 | Still about 100 (only the last 100 are kept) |
Pattern observation: The work stays roughly the same once the limit is reached, not growing with total data size.
Time Complexity: O(1)
This means the time to rotate and clean up the data stays constant no matter how large the total data stream grows: the slice touches at most max_logs (here 100) items, so the cost is bounded by a fixed constant rather than by n.
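A quick sketch can confirm the pattern in the table. The `rotation_cost` function below is a hypothetical operation counter written for this lesson, not part of the original snippet; it counts the items the slice copies (only when the list exceeds the limit) plus one for the append:

```python
MAX_LOGS = 100

def rotation_cost(n, max_logs=MAX_LOGS):
    # The slice copies max_logs items only when the list exceeds the limit;
    # the append itself is a single constant-time operation.
    slice_cost = max_logs if n > max_logs else 0
    return slice_cost + 1

costs = [rotation_cost(n) for n in (10, 150, 1000, 100_000)]
print(costs)  # [1, 101, 101, 101]
```

The cost plateaus at max_logs + 1 once the limit is exceeded, which is exactly the O(1) behavior described above.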
[X] Wrong: "Removing old logs takes longer as the total logs grow."
[OK] Correct: Because we only keep a fixed number of logs, the removal always handles the same small amount, so time does not grow with total logs.
Understanding how fixed-size data rotation keeps operations fast is a useful skill for managing data efficiently in real projects.
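As a practical aside, Python's standard library offers a common idiom that achieves the same fixed-size behavior without any slice copies: `collections.deque` with a `maxlen` silently discards the oldest entry on each append. This is an alternative sketch, not part of the original snippet:

```python
from collections import deque

# A deque with maxlen=100 discards the oldest entry automatically
logs = deque(maxlen=100)
for i in range(1000):
    logs.append(f"entry {i}")  # O(1) append; no slice copy needed

print(len(logs), logs[0], logs[-1])  # 100 entry 900 entry 999
```

Because the rotation happens on every append rather than in a periodic slice, the list never exceeds the limit and each operation stays constant time.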
"What if we changed the max_logs limit to grow with input size? How would the time complexity change?"