read command in Bash Scripting - Time & Space Complexity
We want to understand how the time taken by the read command changes as the input size grows.
Specifically, how does reading input line by line affect the total time?
Analyze the time complexity of the following code snippet.
```bash
#!/bin/bash
# Read the file one line at a time; IFS= and -r preserve whitespace
# and backslashes exactly as they appear in the input.
while IFS= read -r line; do
  echo "$line"
done < input.txt
```
This script reads each line from a file and prints it out one by one.
- Primary operation: Reading and printing each line inside the `while read` loop.
- How many times: Once for every line in the input file.
Each line causes one read and one print operation, so the total work grows directly with the number of lines.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 reads and 10 prints |
| 100 | About 100 reads and 100 prints |
| 1000 | About 1000 reads and 1000 prints |
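You can check this growth empirically by generating files of increasing size and timing the loop on each. The sketch below uses illustrative file names (`test_10.txt`, and so on) that are not part of the original example; `seq` simply produces one number per line.

```shell
#!/bin/bash
# Time the line-by-line loop on files of 10, 100, and 1000 lines.
# As n grows 10x, the elapsed time should grow roughly 10x as well.
for n in 10 100 1000; do
  seq "$n" > "test_${n}.txt"
  echo "n=${n}:"
  time while IFS= read -r line; do
    : "$line"   # ':' is a no-op, standing in for per-line work
  done < "test_${n}.txt"
  rm -f "test_${n}.txt"
done
```

Because `read` is a shell builtin invoked once per line, the per-line constant factor is large compared to compiled tools, but the growth rate is still linear.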
Pattern observation: The total time grows linearly, in a straight line, as the number of lines increases.
Time Complexity: O(n)
This means the time grows directly in proportion to the number of lines read.
[X] Wrong: "The read command always takes constant time no matter how many lines there are."
[OK] Correct: Each line requires a separate read operation, so more lines mean more time.
Understanding how input reading scales helps you write scripts that handle large files efficiently and shows you can think about performance in real tasks.
"What if we read the entire file at once into a variable instead of line by line? How would the time complexity change?"
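As a hedged sketch of that alternative: bash can slurp a whole file into a variable with `$(<file)` or into an array with `mapfile` (bash 4+). Reading the bytes is still O(n) in the file size, so the asymptotic complexity does not change, but the overhead of invoking the `read` builtin once per line disappears. The snippet assumes an `input.txt` like the one in the original example.

```shell
#!/bin/bash
# Read the entire file in one operation. This is still O(n) in the
# number of bytes, but it replaces n read calls with a single one.
content=$(<input.txt)
printf '%s\n' "$content"

# Alternatively, load every line into an array (bash 4+):
mapfile -t lines < input.txt
echo "Read ${#lines[@]} lines"
```

In practice the whole-file approach is faster by a constant factor and uses O(n) memory, while the line-by-line loop uses O(1) memory, a trade-off worth weighing for very large files.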