Reading files line by line (while read) in Bash Scripting - Time & Space Complexity
When a Bash script reads a file line by line, it is important to understand how the running time grows as the file gains more lines.
Analyze the time complexity of the following code snippet.
```bash
while IFS= read -r line; do
  echo "$line"
done < input.txt
```
This script reads each line from a file named input.txt and prints it to the screen.
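To see the one-iteration-per-line behavior concretely, here is a minimal, self-contained sketch. It creates its own sample file (the name `tmp_input.txt` is an assumption for this demo, not part of the original script) and counts the iterations:

```shell
# Create a small sample file (tmp_input.txt is a hypothetical name).
printf 'alpha\nbeta\ngamma\n' > tmp_input.txt

count=0
while IFS= read -r line; do
  echo "$line"              # print the current line
  count=$((count + 1))      # one increment per line read
done < tmp_input.txt

echo "lines processed: $count"
rm -f tmp_input.txt         # clean up the demo file
```

Running this prints the three lines followed by `lines processed: 3`, confirming the loop body runs exactly once per line.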
Identify the repeated operations (loops, recursion, traversals):
- Primary operation: reading and printing each line inside the `while read` loop.
- How many times: once for every line in the file.
As the number of lines in the file grows, the script does more reading and printing steps, one per line.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 reads and prints |
| 100 | About 100 reads and prints |
| 1000 | About 1000 reads and prints |
Pattern observation: The work grows directly with the number of lines. Double the lines, double the work.
Time Complexity: O(n)
This means the script takes time proportional to the number of lines in the file.
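The linear pattern from the table above can be checked empirically. This sketch (the file name `scaling_test.txt` is hypothetical) generates files of 10, 100, and 1000 lines with `seq` and counts loop iterations for each:

```shell
# Verify that iterations grow linearly with line count.
for n in 10 100 1000; do
  seq "$n" > scaling_test.txt        # a file with exactly n lines
  iterations=0
  while IFS= read -r _; do           # discard the line; count only
    iterations=$((iterations + 1))
  done < scaling_test.txt
  echo "$n lines -> $iterations iterations"
done
rm -f scaling_test.txt               # clean up the demo file
```

Each report shows iterations equal to the line count: doubling the input doubles the work, which is exactly O(n).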
[X] Wrong: "Reading a file line by line is always constant time because it just reads one line at a time."
[OK] Correct: Even though it reads one line at a time, the total time depends on how many lines there are. More lines mean more reads, so time grows with file size.
Understanding how loops over file lines scale helps you explain script efficiency clearly and confidently in real-world tasks and interviews.
"What if we changed the script to read the whole file at once into a variable before processing? How would the time complexity change?"
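One way to reason about this question: slurping the whole file into a variable still has to read every byte, so the overall time complexity remains O(n); only the loop structure changes. A hedged sketch (the file name `slurp_demo.txt` is an assumption) using Bash's `$(<file)` and a here-string:

```shell
# Create a small demo file (slurp_demo.txt is a hypothetical name).
printf 'one\ntwo\nthree\n' > slurp_demo.txt

# Read the entire file into a variable at once (Bash-specific).
# This is still O(n): every byte must be read from disk.
content=$(<slurp_demo.txt)

# Then iterate over the in-memory copy line by line.
while IFS= read -r line; do
  echo "$line"
done <<< "$content"

rm -f slurp_demo.txt                 # clean up the demo file
```

Note the trade-off: the total time is still proportional to file size, but the slurping version also uses O(n) memory, whereas the original `while read` loop holds only one line at a time.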