Logging framework in Bash Scripting - Time & Space Complexity
When using a logging framework in bash scripts, it's important to understand how the time spent writing logs grows as the number of log messages increases — that is, how the script's running time changes when it writes many log entries.
Analyze the time complexity of the following bash logging function used repeatedly.
```bash
log_message() {
  local level="$1"
  local message="$2"
  echo "$(date '+%Y-%m-%d %H:%M:%S') [$level] $message" >> script.log
}

for i in $(seq 1 "$n"); do
  log_message "INFO" "Processing item $i"
done
```
This code writes a timestamped log message to a file for each item processed in a loop.
- Primary operation: Writing a log message to the file inside the loop.
- How many times: The loop runs n times, so the log write happens n times.
Each new log message adds one write operation, so the total work grows directly with the number of messages.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 log writes |
| 100 | 100 log writes |
| 1000 | 1000 log writes |
Pattern observation: Doubling the number of log messages doubles the total time spent writing logs.
Time Complexity: O(n)
This means the time to write logs grows linearly with the number of log messages.
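The one-write-per-iteration pattern can be checked directly: after running the loop for n items, the log file should contain exactly n lines. The sketch below (a standalone demo, not part of the original script; it uses a temporary file instead of `script.log`) confirms this.

```shell
#!/usr/bin/env bash
# Demo: each loop iteration performs exactly one log write,
# so n iterations produce n lines in the log file.
logfile=$(mktemp)

log_message() {
  local level="$1"
  local message="$2"
  echo "$(date '+%Y-%m-%d %H:%M:%S') [$level] $message" >> "$logfile"
}

n=100
for i in $(seq 1 "$n"); do
  log_message "INFO" "Processing item $i"
done

line_count=$(wc -l < "$logfile")  # n messages -> n lines written
echo "$line_count"

rm -f "$logfile"
```

Doubling `n` here doubles the line count (and the number of `>>` append operations), which is exactly the linear growth the table above describes.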
[X] Wrong: "Writing logs is instant and does not affect script speed regardless of how many messages are logged."
[OK] Correct: Each log write takes time, so more messages mean more time spent writing, which slows down the script.
Understanding how logging affects script performance helps you write efficient scripts and manage resources well — a useful skill in real projects.
What if we buffered all log messages and wrote them to the file once at the end? How would the time complexity change?
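One possible answer, sketched below (an assumed variant, not code from the original): accumulate messages in a shell variable and flush them with a single redirected write at the end. The total time is still O(n), since every message must still be formatted and written, but the constant factor shrinks — the file is opened for appending once instead of n times.

```shell
#!/usr/bin/env bash
# Sketch: buffered logging. Messages accumulate in memory and are
# flushed to the file once, instead of one file append per message.
logfile=$(mktemp)
buffer=""

log_buffered() {
  local level="$1"
  local message="$2"
  # Append to the in-memory buffer; no file I/O here.
  buffer+="$(date '+%Y-%m-%d %H:%M:%S') [$level] $message"$'\n'
}

n=100
for i in $(seq 1 "$n"); do
  log_buffered "INFO" "Processing item $i"
done

# Single write: one open/append/close cycle for all n messages.
printf '%s' "$buffer" >> "$logfile"

line_count=$(wc -l < "$logfile")
echo "$line_count"

rm -f "$logfile"
```

A trade-off to note: buffered messages live only in memory until the flush, so if the script crashes mid-run, the unflushed log entries are lost — which is why per-message writes are often preferred despite the extra overhead.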