Import from S3 in DynamoDB - Time & Space Complexity
When importing data from S3 into DynamoDB, it's important to understand how the running time grows as the amount of data increases: in other words, how the number of operations scales with the number of records imported.
Analyze the time complexity of the following import operation.
```shell
aws dynamodb import-table \
    --input-format CSV \
    --s3-bucket-source S3Bucket=mybucket,S3KeyPrefix=data.csv \
    --table-creation-parameters '{
        "TableName": "MyTable",
        "AttributeDefinitions": [{"AttributeName": "id", "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": "id", "KeyType": "HASH"}],
        "BillingMode": "PAY_PER_REQUEST"
    }'
```
This command imports data from a CSV file stored in S3 into a DynamoDB table. Note that `import-table` always creates a new table as part of the import; it cannot load into an existing one, which is why the command specifies table-creation parameters (the `id` key schema above is illustrative).
Look at what repeats during the import process.
- Primary operation: Reading each record from the S3 file and writing it into DynamoDB.
- How many times: Once for every record in the file.
As the number of records grows, the total work grows too.
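The per-record pattern can be sketched as a minimal Python model. This is not the actual import pipeline, just an illustration: one "read" (parsing a CSV row) and one "write" (appending to an in-memory table) per record.

```python
import csv
import io

def import_records(csv_text, table):
    """Illustrative model of a per-record import: O(n) operations for n records."""
    operations = 0
    for row in csv.DictReader(io.StringIO(csv_text)):
        operations += 1   # read one record from the file
        table.append(row) # write one item to the table
        operations += 1
    return operations

data = "id,name\n1,alice\n2,bob\n3,carol\n"
table = []
ops = import_records(data, table)
# 3 records -> 6 operations (one read + one write each)
```

The loop body does a constant amount of work, so the total operation count is directly proportional to the number of records.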
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 read and write operations |
| 100 | About 100 read and write operations |
| 1000 | About 1000 read and write operations |
Pattern observation: The operations increase directly with the number of records.
Time Complexity: O(n)
This means the time to import grows in direct proportion to the number of records.
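A tiny cost model makes the proportionality concrete. The "2 operations per record" constant is an assumption for illustration; what matters is that doubling the input doubles the work.

```python
def import_cost(n):
    """Modeled cost of importing n records: one read + one write each."""
    return 2 * n

for n in (10, 100, 1000):
    print(f"{n} records -> {import_cost(n)} operations")

# Linear growth: doubling n doubles the cost.
assert import_cost(2000) == 2 * import_cost(1000)
```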
[X] Wrong: "Importing from S3 is always instant no matter the file size."
[OK] Correct: The import reads and writes each record, so larger files take more time.
Understanding how import time grows helps you explain performance expectations and plan data workflows confidently.
"What if the import process batches multiple records in one write? How would the time complexity change?"
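One way to reason about this question, sketched in Python: DynamoDB's `BatchWriteItem` accepts up to 25 items per call, so batching divides the number of write calls by a constant factor. The helper below is hypothetical; it only counts calls.

```python
import math

BATCH_SIZE = 25  # BatchWriteItem's per-call item limit

def batched_write_calls(n, batch_size=BATCH_SIZE):
    """Number of write calls needed when records are grouped into batches."""
    return math.ceil(n / batch_size)

for n in (10, 100, 1000):
    print(f"{n} records -> {batched_write_calls(n)} batched calls")
```

Batching shrinks the constant factor (n/25 calls instead of n), but the call count still grows linearly with the number of records, so the time complexity remains O(n).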