When you read a file using commands like cat or head in Linux, why can the operation appear near-instant, effectively constant time, for small files?
Think about how operating systems use memory to speed up repeated file reads.
Linux keeps recently accessed file data in RAM in the page cache. If a file's pages are already cached, a read avoids disk I/O entirely, so for small files the operation appears to complete in near-constant time.
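A minimal sketch of the effect, assuming a scratch file under /tmp (the file name and size are arbitrary): the first read may have to touch disk, while the second is typically served from the page cache and finishes noticeably faster.

```shell
# Create a 64 MiB scratch file (hypothetical path /tmp/cache_demo.bin).
dd if=/dev/zero of=/tmp/cache_demo.bin bs=1M count=64 2>/dev/null

time cat /tmp/cache_demo.bin > /dev/null   # first read: may hit disk
time cat /tmp/cache_demo.bin > /dev/null   # second read: usually served from RAM

rm /tmp/cache_demo.bin
```

The timing difference depends on available memory and the storage device; on a fast SSD with a warm cache both runs may look similar.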
Assume you have a text file example.txt with 100 lines. You run head -n 5 example.txt twice in a row. What is the expected output of the second command?
Think about what the head command does and if caching affects output content.
The head command prints the first 5 lines of the file each time it runs, so the second command's output is identical to the first. Caching speeds up the read but never changes the content returned.
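A quick way to verify this, assuming a generated 100-line file in /tmp stands in for example.txt:

```shell
# Generate a 100-line file, run head twice, and compare the outputs.
seq 1 100 > /tmp/example.txt
head -n 5 /tmp/example.txt > /tmp/run1.out
head -n 5 /tmp/example.txt > /tmp/run2.out
cmp -s /tmp/run1.out /tmp/run2.out && echo "identical output"
rm /tmp/example.txt /tmp/run1.out /tmp/run2.out
```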
Which of the following commands reads a file data.log efficiently by using buffering and avoids loading the entire file into memory at once?
Consider which command allows control over block size and partial reading.
The dd command with an explicit block size (bs=) and count (count=) reads only a fixed amount of data in buffered blocks, so the whole file is never loaded. The alternatives either read the entire file (e.g. cat) or stream continuously (e.g. tail -f).
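A sketch of the idea, using a generated file in place of data.log: dd reads exactly one 4 KiB block and stops, regardless of how large the file is.

```shell
# Create an 8 MiB sample file standing in for data.log (path is assumed).
dd if=/dev/zero of=/tmp/data.log bs=1M count=8 2>/dev/null

# Read only the first 4 KiB block; wc -c confirms how many bytes were read.
dd if=/tmp/data.log bs=4096 count=1 2>/dev/null | wc -c   # prints 4096

rm /tmp/data.log
```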
You run cat /dev/random to read random bytes from the system, but the command seems to hang and never finishes. Why?
Think about how special device files behave differently from normal files.
/dev/random is a character device with no end-of-file, so cat keeps reading forever even when data is flowing. In addition, on older kernels (before Linux 5.6) /dev/random blocks when the entropy pool runs low, so the read can stall entirely until enough randomness is available.
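The usual fix is to read a bounded number of bytes, typically from /dev/urandom, which does not block:

```shell
# Read exactly 16 random bytes instead of an unbounded stream,
# and show them as hex with od.
head -c 16 /dev/urandom | od -An -tx1

# Confirm the byte count.
head -c 16 /dev/urandom | wc -c   # prints 16
```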
You want to measure how fast your system reads a large file bigfile.bin. Which command pipeline correctly measures the read speed without writing output to disk?
Consider how to discard output and measure total time taken to read the whole file.
Using time cat bigfile.bin > /dev/null reads the entire file and discards the output, so the reported time reflects pure read speed with no disk writes. Note that if the file is already in the page cache, this measures RAM speed rather than disk speed; drop the cache first (as root: echo 3 > /proc/sys/vm/drop_caches) for cold-cache numbers.
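A self-contained sketch, generating a test file so the pipeline can be run anywhere (the 128 MiB size is arbitrary):

```shell
# Create a test file standing in for bigfile.bin.
dd if=/dev/zero of=/tmp/bigfile.bin bs=1M count=128 2>/dev/null

# Read the whole file, discard the output, and time the read.
time cat /tmp/bigfile.bin > /dev/null

rm /tmp/bigfile.bin
```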