Why file handling is required in C++ - Performance Analysis
We want to understand how a program's time cost grows when it reads or writes files: how does the running time change as the file size or the number of file operations increases?
Analyze the time complexity of the following code snippet.
```cpp
#include <fstream>
#include <iostream>
#include <string>
using namespace std;

int main() {
    ifstream file("data.txt");
    if (!file) {                    // guard against a missing or unreadable file
        cerr << "Could not open data.txt" << endl;
        return 1;
    }
    string line;
    while (getline(file, line)) {   // one read per line in the file
        cout << line << endl;
    }
    return 0;                       // the ifstream closes itself when it goes out of scope
}
```
This code reads a file line by line and prints each line to the screen.
Identify the repeated work: loops, recursion, or traversals that run once per element of the input.
- Primary operation: Reading each line from the file inside the while loop.
- How many times: Once for every line in the file.
As the file gets bigger, the program reads more lines, so it takes more time.
| Input Size (lines) | Approx. Operations |
|---|---|
| 10 | 10 reads and prints |
| 100 | 100 reads and prints |
| 1000 | 1000 reads and prints |
Pattern observation: The time grows roughly in direct proportion to the number of lines.
Time Complexity: O(n)
This means the time to read the file grows linearly with the number of lines in the file.
[X] Wrong: "Reading a file always takes the same time no matter how big it is."
[OK] Correct: The program reads each line one by one, so bigger files take more time to process.
Understanding how file reading time grows helps you write programs that handle data efficiently and avoid surprises with large files.
"What if we read the file word by word instead of line by line? How would the time complexity change?"