Data transfer cost awareness in AWS - Time & Space Complexity
When moving data in cloud systems, both transfer time and cost grow as more data moves between services or regions.
We want to understand how the number of data transfer operations increases as the amount of data grows.
Analyze the time complexity of transferring multiple files from one AWS region to another.
```javascript
// Example: Copy multiple S3 objects from one region to another, one at a time.
// Note: await ensures each copy finishes before the next starts, matching the
// sequential behavior described below.
for (let i = 0; i < files.length; i++) {
  await s3.copyObject({
    Bucket: destinationBucket,
    CopySource: `${sourceBucket}/${files[i]}`,
    Key: files[i]
  }).promise();
}
```
This sequence copies each file one by one from a source bucket in one region to a destination bucket in another region.
Look at what repeats as the number of files grows.
- Primary operation: the `copyObject` API call for each file.
- How many times: once per file in the list.
Each additional file means one more copy operation across regions.
| Input Size (n) | Approx. API Calls / Operations |
|---|---|
| 10 | 10 `copyObject` calls |
| 100 | 100 `copyObject` calls |
| 1000 | 1000 `copyObject` calls |
Pattern observation: The number of operations grows directly with the number of files.
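To make the pattern concrete, here is a small runnable sketch. It uses a hypothetical counting mock in place of a real AWS SDK client, so `makeMockS3` and `copyAllSketch` are illustrative names, not AWS APIs:

```javascript
// Mock S3 client that counts copyObject calls instead of calling AWS.
function makeMockS3() {
  let calls = 0;
  return {
    copyObject() {
      calls++; // one "API call" recorded per invocation
      return { promise: () => Promise.resolve() };
    },
    get callCount() { return calls; }
  };
}

// Same shape as the loop above: one copyObject call per file.
function copyAllSketch(s3, files, sourceBucket, destinationBucket) {
  for (const key of files) {
    s3.copyObject({
      Bucket: destinationBucket,
      CopySource: `${sourceBucket}/${key}`,
      Key: key
    }).promise();
  }
}

const s3 = makeMockS3();
const files = Array.from({ length: 100 }, (_, i) => `file-${i}.txt`);
copyAllSketch(s3, files, 'src-bucket', 'dst-bucket');
console.log(s3.callCount); // → 100
```

Doubling the file list doubles the call count, which is exactly the linear relationship the table shows.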
Time Complexity: O(n)
This means the time and cost increase linearly as you transfer more files.
[X] Wrong: "Transferring many files at once costs the same as transferring one file."
[OK] Correct: Each file transfer is a separate operation that adds to total time and cost, so more files mean more work.
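One subtlety: O(n) describes the number of operations, not the wall-clock ordering. A hypothetical parallel variant (a sketch, not the article's original loop) still makes n API calls, so the complexity stays O(n), but the request latencies overlap instead of adding up:

```javascript
// Hypothetical parallel variant: issue all copyObject requests up front
// and wait for them together. Call count is still n, so complexity
// remains O(n); only wall-clock time improves.
async function copyAllParallel(s3, files, sourceBucket, destinationBucket) {
  await Promise.all(files.map(key =>
    s3.copyObject({
      Bucket: destinationBucket,
      CopySource: `${sourceBucket}/${key}`,
      Key: key
    }).promise()
  ));
}

// Demo with a counting mock instead of a real AWS client.
async function demo() {
  let calls = 0;
  const mockS3 = {
    copyObject() { calls++; return { promise: () => Promise.resolve() }; }
  };
  await copyAllParallel(mockS3, ['a.txt', 'b.txt', 'c.txt'], 'src', 'dst');
  return calls;
}

demo().then(calls => console.log(calls)); // → 3
```

Per-request charges and total bytes moved are unchanged, which is why parallelism helps performance but not cost.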
Understanding how data transfer scales helps you design cloud solutions that balance cost and performance.
"What if we batch multiple files into a single archive before transferring? How would the time complexity change?"