Report generation script in Bash Scripting - Time & Space Complexity
When running a report generation script, it is important to understand how its running time grows with the size of the input, that is, how much more work the script does when there are more items to process.
Analyze the time complexity of the following code snippet.
```bash
#!/bin/bash
# Iterate over every log file in /var/log.
# Using the glob directly in the loop (instead of an unquoted variable)
# handles filenames that contain spaces correctly.
for file in /var/log/*.log; do
  echo "Processing $file"
  lines=$(wc -l < "$file")
  echo "Lines: $lines"
  # Simulate report generation
  sleep 0.1
  echo "Report generated for $file"
done
```
This script lists all log files in a folder, then for each file it counts lines and simulates generating a report.
Identify the operations that repeat: loops, recursion, or array traversals.
- Primary operation: Looping over each log file to process it.
- How many times: Once for each file found in the directory.
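The n in this analysis is simply the number of files the glob matches. A quick way to verify that count, sketched here on a scratch directory with a known number of files (the directory and file names are made up for illustration):

```shell
#!/bin/bash
# The n in O(n) is the number of files the loop iterates over.
# Create three .log files in a scratch directory, then count them.
demo_dir=$(mktemp -d)
: > "$demo_dir/a.log"
: > "$demo_dir/b.log"
: > "$demo_dir/c.log"
# find avoids the pitfalls of parsing ls output
n=$(find "$demo_dir" -maxdepth 1 -name '*.log' | wc -l)
echo "n = $n"
rm -rf "$demo_dir"
```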
As the number of log files increases, the script runs the loop more times, doing similar work each time.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 file processes |
| 100 | About 100 file processes |
| 1000 | About 1000 file processes |
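To confirm the pattern in the table, you can count loop iterations with a counter variable: the final count equals the number of files, which is exactly the O(n) operation count. The scratch directory below is illustrative, not part of the original script:

```shell
#!/bin/bash
# Counting iterations instead of timing: the counter grows by one
# per file, so the total equals n, matching the table above.
demo_dir=$(mktemp -d)
for ((i = 1; i <= 100; i++)); do
  : > "$demo_dir/file$i.log"   # create an empty log file
done
count=0
for file in "$demo_dir"/*.log; do
  count=$((count + 1))         # one unit of work per file
done
echo "Files processed: $count"  # prints "Files processed: 100"
rm -rf "$demo_dir"
```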
Pattern observation: The work grows directly with the number of files; doubling files doubles work.
Time Complexity: O(n)
This means the time to finish grows in a straight line with the number of files to process.
[X] Wrong: "The script runs in constant time because it just loops over files once."
[OK] Correct: Even though the loop appears only once in the code, its body runs once per file, so more files mean more iterations and more time.
Understanding how loops affect time helps you explain script performance clearly and shows you can reason about scaling work.
"What if the script also read every line inside each file? How would the time complexity change?"
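One way to reason about that question: reading every line adds an inner loop, so the work becomes proportional to the total number of lines across all files, roughly O(n × m) for n files of about m lines each. A minimal sketch using made-up scratch files with known line counts:

```shell
#!/bin/bash
# Nested loops: the outer loop runs once per file, and the inner loop
# runs once per line of that file, so total work is O(n * m).
demo_dir=$(mktemp -d)
printf 'a\nb\n'    > "$demo_dir/one.log"   # 2 lines
printf 'a\nb\nc\n' > "$demo_dir/two.log"   # 3 lines
total=0
for file in "$demo_dir"/*.log; do
  while IFS= read -r line; do
    total=$((total + 1))   # one unit of work per line
  done < "$file"
done
echo "Total lines read: $total"   # 2 + 3 = 5
rm -rf "$demo_dir"
```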