Error logging patterns in Bash Scripting - Time & Space Complexity
When writing error logs in Bash scripts, it is important to understand how the time taken grows as the number of errors increases, that is, how the logging process scales when many errors occur.
Analyze the time complexity of the following code snippet.
```bash
errors=("file not found" "permission denied" "timeout" "disk full")

for err in "${errors[@]}"; do
  echo "Error: $err" >> error.log   # append the error to the log file
  sleep 0.1                         # simulate slow I/O per message
  echo "Logged: $err"               # confirm on stdout
done
```
This script loops over an array of error messages and writes each one to a log file.
Identify the loops, recursion, or array traversals that repeat work.
- Primary operation: Looping over the error messages array and appending each error to the log file.
- How many times: Once for each error message in the array.
Each additional error message causes one more write to the log file.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 writes to the log file |
| 100 | 100 writes to the log file |
| 1000 | 1000 writes to the log file |
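The table above can be checked empirically. The sketch below (using a hypothetical `log_n_errors` helper, not from the original script) logs n synthetic messages and counts the lines in the resulting log file; the counts match n exactly:

```shell
#!/usr/bin/env bash
# Sketch: logging n errors produces exactly n writes (lines) in the log.
log_n_errors() {
  local n=$1 logfile=$2
  for ((i = 1; i <= n; i++)); do
    echo "Error: message $i" >> "$logfile"   # one append per message
  done
}

for n in 10 100 1000; do
  logfile=$(mktemp)
  log_n_errors "$n" "$logfile"
  lines=$(( $(wc -l < "$logfile") ))   # arithmetic strips wc's padding
  echo "n=$n -> $lines lines in the log"
  rm -f "$logfile"
done
```

Each row of the table corresponds to one iteration of the outer loop here: 10 messages yield 10 lines, 100 yield 100, and so on.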
Pattern observation: The number of operations grows directly with the number of errors.
Time Complexity: O(n)
This means the time to log errors grows linearly as the number of errors increases.
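The `sleep 0.1` in the loop makes this linear growth visible on the clock: n messages take roughly 0.1 * n seconds. A rough timing sketch, assuming GNU `date` (nanosecond `%N`) and a `sleep` that accepts fractional seconds:

```shell
#!/usr/bin/env bash
# Sketch: with a 0.1s sleep per message, total time grows linearly with n.
log_with_sleep() {
  local n=$1
  for ((i = 1; i <= n; i++)); do
    echo "Error: message $i" >> /dev/null   # discard output; we only care about timing
    sleep 0.1
  done
}

start=$(date +%s%N)               # assumes GNU date with nanosecond support
log_with_sleep 5
elapsed_ms=$(( ($(date +%s%N) - start) / 1000000 ))
echo "5 messages took ~${elapsed_ms}ms"   # roughly 500ms: 5 * 0.1s
```

Doubling n roughly doubles the elapsed time, which is exactly what O(n) predicts.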
[X] Wrong: "Logging multiple errors at once takes the same time as logging one error."
[OK] Correct: Each error requires a separate write operation, so more errors mean more time spent logging.
Understanding how error logging scales helps you write scripts that handle many errors efficiently and avoid slowdowns in real systems.
What if we buffered all error messages and wrote them to the log file in one go? How would the time complexity change?
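One way to think about it: the total work is still O(n), since every message must still be formatted and written, but the per-iteration overhead (reopening the log file n times, plus the sleep) collapses into a single append. A minimal sketch using `printf`, which repeats its format string once per argument:

```shell
#!/usr/bin/env bash
# Sketch: emit all messages in one command, opening the log file only once.
errors=("file not found" "permission denied" "timeout" "disk full")
logfile=$(mktemp)

# printf applies 'Error: %s\n' to each array element, so all n lines
# go out through a single append redirection.
printf 'Error: %s\n' "${errors[@]}" >> "$logfile"

lines=$(( $(wc -l < "$logfile") ))
echo "wrote ${#errors[@]} messages with one append: $lines lines"
rm -f "$logfile"
```

So the asymptotic complexity stays O(n) in the number of errors, but the constant factor drops sharply, which is often what matters in real logging workloads.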