Why S3 matters for object storage in AWS - Performance Analysis
We want to understand how the time to store and retrieve objects in S3 changes as we add more objects.
How does the number of objects affect the speed of operations?
Analyze the time complexity of uploading multiple objects to an S3 bucket.
```javascript
// Upload multiple files to an S3 bucket, one putObject call per file
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

for (let i = 0; i < n; i++) {
  s3.putObject({
    Bucket: 'my-bucket',
    Key: `file_${i}.txt`,
    Body: 'file content'
  }, function (err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else console.log(data);               // successful response
  });
}
```
This sequence uploads n separate objects to the same S3 bucket, one by one.
- Primary operation: the `putObject` API call that uploads each file.
- How many times: exactly once per file, so n times for n files.
Each new file means one more upload call, so the total work grows directly with the number of files.
| Input Size (n) | Approx. API Calls / Operations |
|---|---|
| 10 | 10 uploads |
| 100 | 100 uploads |
| 1000 | 1000 uploads |
Pattern observation: The number of upload operations grows linearly as we add more files.
Time Complexity: O(n)
This means the time to upload all files grows in direct proportion to how many files you have.
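To make the linear relationship concrete, here is a minimal sketch that counts how many `putObject` calls the loop issues against a stub client. The stub, `makeCountingStub`, and `uploadAll` are illustrative names for this sketch only; no real AWS connection is made.

```javascript
// The same loop as above, parameterized so we can count its API calls
function uploadAll(s3, n) {
  for (let i = 0; i < n; i++) {
    s3.putObject({ Bucket: 'my-bucket', Key: `file_${i}.txt`, Body: 'file content' });
  }
}

// Stub client that just counts calls instead of talking to AWS
function makeCountingStub() {
  return { calls: 0, putObject() { this.calls++; } };
}

const stub = makeCountingStub();
uploadAll(stub, 1000);
console.log(stub.calls); // 1000 — exactly one call per object, so O(n)
```

Doubling n doubles `stub.calls`; there is no fixed overhead that dominates, which is what O(n) expresses.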
[X] Wrong: "Uploading many files at once takes the same time as uploading one file."
[OK] Correct: Each file requires its own upload call, so more files mean more time overall.
Understanding how operations scale with input size helps you design efficient storage solutions and explain your reasoning clearly in interviews.
What if we uploaded multiple files in parallel instead of one by one? How would the time complexity change?
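As a sketch of the parallel case (simulated with timers rather than real S3 calls; `fakeUpload` is an illustrative stand-in for `putObject`): issuing all n uploads concurrently with `Promise.all` still performs O(n) total work, but the wall-clock time becomes roughly one upload's latency instead of n of them. In practice this is bounded by connection limits and S3 request-rate quotas, so real speedups are smaller than the idealized model.

```javascript
// Simulated upload: resolves with the key after `ms` milliseconds
function fakeUpload(key, ms) {
  return new Promise(resolve => setTimeout(() => resolve(key), ms));
}

async function uploadParallel(n) {
  const uploads = [];
  for (let i = 0; i < n; i++) {
    uploads.push(fakeUpload(`file_${i}.txt`, 50)); // each "upload" takes ~50 ms
  }
  return Promise.all(uploads); // all n run concurrently
}

uploadParallel(100).then(keys => {
  console.log(keys.length); // 100 — done in roughly 50 ms, not 100 * 50 ms
});
```

So the operation count stays O(n), while elapsed time approaches O(1) under the idealized assumption of unlimited concurrency.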