Node.js · Comparison · Beginner · 4 min read

Stream vs Buffer in Node.js: Key Differences and Usage

In Node.js, a Buffer is a fixed-size chunk of memory used to store raw binary data, while a Stream is a sequence of data chunks that can be read or written over time. Streams handle large or continuous data efficiently by processing it piece-by-piece, whereas Buffers hold data all at once in memory.

Quick Comparison

Here is a quick side-by-side look at Stream and Buffer in Node.js to understand their main differences.

| Factor | Buffer | Stream |
| --- | --- | --- |
| Data handling | Stores data all at once in memory | Processes data piece-by-piece over time |
| Size | Fixed size, limited by available memory | Can handle large or effectively unbounded data |
| Use case | Small or known-size data chunks | Large files, network data, or continuous input |
| Memory efficiency | Less efficient for big data | More efficient, uses less memory |
| API type | Mostly synchronous | Mostly asynchronous, event-driven |
| Example | Reading a small file into memory | Reading a large file in chunks |

Key Differences

A Buffer in Node.js is a fixed-size container for raw binary data in memory. It is useful when you know the exact size of the data you are working with or when you need to manipulate binary data directly. Because the data already sits in memory, Buffer operations are synchronous; the trade-off is that holding everything at once can consume a lot of memory when the data is very large.
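As a minimal sketch of that direct access, a Buffer's bytes can be read and modified in place (the string and byte values here are purely illustrative):

```javascript
// Buffers give direct, synchronous access to raw bytes already in memory.
const greeting = Buffer.from('Hello');

console.log(greeting.length);          // 5 (bytes, not characters)
console.log(greeting.toString('hex')); // 48656c6c6f
console.log(greeting[0]);              // 72, the byte value of 'H'

// Buffers are mutable: overwrite the first byte in place.
greeting[0] = 0x4a; // byte value of 'J'
console.log(greeting.toString());      // Jello
```

Note that `length` counts bytes rather than characters, which matters for multi-byte encodings such as UTF-8.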

On the other hand, a Stream is like a flowing river of data that you can read or write bit by bit. Streams are asynchronous and emit events as data arrives, allowing your program to start processing data immediately without waiting for everything to load. This makes streams ideal for handling large files, network requests, or any continuous data source efficiently.

In summary, use Buffer when you need quick access to a fixed chunk of data, and use Stream when working with large or ongoing data to save memory and improve performance.


Code Comparison

```javascript
import { readFileSync } from 'node:fs';

// Using Buffer to read a whole file at once
const dataBuffer = readFileSync('example.txt');
console.log('Buffer data length:', dataBuffer.length);
console.log('Buffer content:', dataBuffer.toString());
```

Output

```
Buffer data length: 123
Buffer content: Hello, this is the content of example.txt file.
```

Stream Equivalent

```javascript
import { createReadStream } from 'node:fs';

// Using Stream to read the same file in chunks
const stream = createReadStream('example.txt', { encoding: 'utf8' });

stream.on('data', (chunk) => {
  console.log('Received chunk:', chunk);
});

stream.on('end', () => {
  console.log('Finished reading file.');
});
```

Output

```
Received chunk: Hello, this is
Received chunk: the content of
Received chunk: example.txt file.
Finished reading file.
```

When to Use Which

Choose Buffer when: you have small or fixed-size data that fits comfortably in memory and you need to access or manipulate it all at once.

Choose Stream when: you are dealing with large files, continuous data, or network communication where loading everything into memory is inefficient or impossible. Streams let you process data as it arrives, saving memory and improving responsiveness.
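To make this concrete, here is a small sketch of processing data as it arrives: a `Transform` stream upper-cases each chunk while `pipeline` moves data from a readable source to a writable destination without ever loading the whole file into memory. The file names are placeholders, and the example writes its own sample input so it runs standalone:

```javascript
import { writeFileSync, readFileSync, createReadStream, createWriteStream } from 'node:fs';
import { Transform } from 'node:stream';
import { pipeline } from 'node:stream/promises';

// Create a small sample file so the example is self-contained.
writeFileSync('input.txt', 'hello stream world');

// A Transform stream that upper-cases each chunk as it passes through.
const upperCase = new Transform({
  transform(chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase());
  },
});

// pipeline() wires the streams together and handles errors,
// backpressure, and cleanup automatically.
await pipeline(
  createReadStream('input.txt'),
  upperCase,
  createWriteStream('output.txt'),
);

console.log(readFileSync('output.txt', 'utf8')); // HELLO STREAM WORLD
```

The same pattern scales to multi-gigabyte files: memory usage stays bounded by the chunk size rather than the file size.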

Key Takeaways

Buffers hold fixed-size data in memory, suitable for small or known-size data.
Streams process data piece-by-piece asynchronously, ideal for large or continuous data.
Use streams to save memory and handle big files or network data efficiently.
Buffers provide quick, direct access to all data but can consume more memory.
Choose based on data size and processing needs: Buffer for small, Stream for large.