Log groups and log streams in AWS - Time & Space Complexity
When working with AWS CloudWatch Logs, it is important to understand how the cost of managing logs grows as you add more groups and streams. Specifically, we want to know how the number of API calls scales as we create or list log groups and log streams.
Analyze the time complexity of the following operation sequence.
```bash
# Create a log group
aws logs create-log-group --log-group-name MyLogGroup

# For each application instance, create a log stream
for i in $(seq 1 "$n"); do
  aws logs create-log-stream --log-group-name MyLogGroup --log-stream-name "Stream_$i"
done

# List all log streams in the log group
aws logs describe-log-streams --log-group-name MyLogGroup
```
This sequence creates one log group, then creates multiple log streams inside it, and finally lists all streams.
- Primary operation: Creating log streams inside the log group.
- How many times: Once for the log group, then n times for the log streams.
- Listing log streams is a single operation but returns data proportional to n.
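One nuance on that last point: the DescribeLogStreams API returns at most 50 streams per request, and the AWS CLI pages through the results automatically. So even the "single" list command issues roughly ceil(n / 50) underlying requests. A quick sketch of that page count (the 50-per-page figure is the documented API maximum):

```shell
# Estimate how many DescribeLogStreams requests the CLI makes for n streams.
# The API returns at most 50 log streams per request.
pages_needed() {
  local n=$1
  echo $(( (n + 49) / 50 ))   # integer ceiling division: ceil(n / 50)
}

pages_needed 10     # -> 1
pages_needed 100    # -> 2
pages_needed 1000   # -> 20
```

This does not change the overall analysis: the listing work is still proportional to n.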
As the number of log streams n increases, the number of create calls grows directly with n.
| Input Size (n) | Approx. API Calls |
|---|---|
| 10 | 1 (log group) + 10 (streams) + 1 (list) = 12 |
| 100 | 1 + 100 + 1 = 102 |
| 1000 | 1 + 1000 + 1 = 1002 |
Pattern observation: The total API calls grow linearly as you add more log streams.
Time Complexity: O(n)
This means the time to create and list log streams grows directly in proportion to the number of streams.
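You can verify the n + 2 pattern without touching AWS at all by shadowing the `aws` command with a stub function that counts invocations instead of making real calls (a testing trick; the stub is not part of the actual sequence):

```shell
# Count API calls by replacing `aws` with a stub that only increments a counter.
calls=0
aws() { calls=$((calls + 1)); }   # stub: no real AWS call is made

n=100
aws logs create-log-group --log-group-name MyLogGroup          # 1 call
for i in $(seq 1 "$n"); do
  aws logs create-log-stream --log-group-name MyLogGroup \
      --log-stream-name "Stream_$i"                            # n calls
done
aws logs describe-log-streams --log-group-name MyLogGroup      # 1 call

echo "$calls"   # -> 102, i.e. n + 2
```

Change `n` and the count tracks it linearly, matching the table above.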
[X] Wrong: "Creating many log streams happens in constant time because it is just one command."
[OK] Correct: Each log stream requires a separate API call, so the total time grows with the number of streams.
Understanding how AWS logging operations scale helps you design systems that handle logs efficiently as they grow.
"What if we batch create log streams instead of one by one? How would the time complexity change?"
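As of this writing, CloudWatch Logs offers no batch API for creating log streams, so the total call count stays O(n). What you can change is wall-clock time, by issuing the create calls concurrently. A sketch using `xargs -P` with 4 workers (here `echo` stands in for the real `aws` command so the snippet runs without credentials; drop `echo` to execute it for real):

```shell
# Fan out stream creation across 4 parallel workers with xargs.
# `echo aws ...` is a dry run; remove `echo` to issue real API calls.
n=8
seq 1 "$n" | xargs -P 4 -I{} echo aws logs create-log-stream \
    --log-group-name MyLogGroup --log-stream-name "Stream_{}"
```

Concurrency divides the elapsed time by roughly the worker count, but the API-call complexity is still O(n), and CloudWatch Logs API rate limits cap how far you can push `-P`.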