Reading from files in C - Time & Space Complexity
When reading from files, we want to know how the running time grows as the file size grows. In other words: how does reading more data affect how long the program runs?
Analyze the time complexity of the following code snippet.
```c
FILE *file = fopen("data.txt", "r");
if (file == NULL) {
    // handle the error (e.g., perror and return)
}
char buffer[256];
while (fgets(buffer, sizeof(buffer), file)) {
    // process the line
}
fclose(file);
```
This code reads a file line by line until the end, processing each line.
Identify the repeated work: loops, recursion, or traversals that run once per element of the input.
- Primary operation: Reading each line from the file using fgets inside a while loop.
- How many times: Once for every line in the file, repeating until the file ends.
As the file gets bigger, the number of lines grows, so the loop runs more times.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 lines | About 10 reads |
| 100 lines | About 100 reads |
| 1000 lines | About 1000 reads |
Pattern observation: The time grows directly with the number of lines; double the lines, double the reads.
Time Complexity: O(n)
This means the time to read the file grows linearly with the number of lines: double the file, double the time.
[X] Wrong: "Reading a file always takes the same time no matter the size."
[OK] Correct: The program reads each line one by one, so more lines mean more work and more time.
Understanding how file reading time grows helps you write efficient programs and explain your code clearly in interviews.
"What if we read the file character by character instead of line by line? How would the time complexity change?"