Token Bucket Algorithm in REST APIs - Time & Space Complexity
We want to understand how the cost of checking and updating tokens grows as requests come in.
Does the algorithm slow down as it handles more requests over time?
Analyze the time complexity of the following code snippet.
import time

def allow_request(bucket, tokens_needed=1):
    now = time.time()
    # Refill tokens in proportion to the time elapsed since the last check.
    elapsed = now - bucket['last_checked']
    bucket['tokens'] = min(bucket['capacity'],
                           bucket['tokens'] + elapsed * bucket['rate'])
    bucket['last_checked'] = now
    # Spend tokens only if enough are available.
    if bucket['tokens'] >= tokens_needed:
        bucket['tokens'] -= tokens_needed
        return True
    return False
This code checks if enough tokens are available to allow a request and updates the token count based on elapsed time.
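As a quick sanity check, here is a minimal usage sketch. It restates the snippet above with `time.time()` as the clock, and uses hypothetical capacity and rate values chosen just for illustration:

```python
import time

def allow_request(bucket, tokens_needed=1):
    now = time.time()
    elapsed = now - bucket['last_checked']
    bucket['tokens'] = min(bucket['capacity'],
                           bucket['tokens'] + elapsed * bucket['rate'])
    bucket['last_checked'] = now
    if bucket['tokens'] >= tokens_needed:
        bucket['tokens'] -= tokens_needed
        return True
    return False

# A bucket holding at most 5 tokens, refilling 1 token per second.
bucket = {'capacity': 5, 'tokens': 5, 'rate': 1, 'last_checked': time.time()}

# Fire 7 requests back to back: the first 5 drain the bucket,
# and the rest are denied until tokens refill over time.
results = [allow_request(bucket) for _ in range(7)]
print(results)  # → [True, True, True, True, True, False, False]
```

Because the calls happen within microseconds of each other, essentially no refill occurs mid-burst, so exactly `capacity` requests succeed.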
Identify any loops, recursion, or array traversals that repeat work.
- Primary operation: Simple arithmetic and comparisons done once per request.
- How many times: Once each time a request is checked.
The work done for each request stays about the same no matter how many requests come in.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 simple checks and updates |
| 100 | 100 simple checks and updates |
| 1000 | 1000 simple checks and updates |
Pattern observation: The total time grows linearly with the number of requests, but the work per request stays constant - so each individual check is O(1).
Time Complexity: O(1) per request
Space Complexity: O(1) per bucket
Each request is handled in constant time using a fixed number of arithmetic operations, and the bucket itself stores only four fields (capacity, tokens, rate, last_checked) no matter how many requests have been processed.
[X] Wrong: "The token bucket algorithm must loop over all tokens or requests, so it takes longer as requests increase."
[OK] Correct: The algorithm only updates counts using simple math each time a request comes in, without looping over past requests or tokens.
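One way to see that no per-request state accumulates is to run many requests and inspect the bucket afterward. This sketch reuses the snippet above (with `time.time()` as the clock and arbitrary example values):

```python
import time

def allow_request(bucket, tokens_needed=1):
    now = time.time()
    elapsed = now - bucket['last_checked']
    bucket['tokens'] = min(bucket['capacity'],
                           bucket['tokens'] + elapsed * bucket['rate'])
    bucket['last_checked'] = now
    if bucket['tokens'] >= tokens_needed:
        bucket['tokens'] -= tokens_needed
        return True
    return False

bucket = {'capacity': 100, 'tokens': 100, 'rate': 50, 'last_checked': time.time()}

# Process 10,000 requests; some pass, some are rate-limited.
for _ in range(10_000):
    allow_request(bucket)

# The bucket still holds the same four scalar fields: no per-request
# history accumulates, which is why each check stays O(1).
print(len(bucket))  # → 4
```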
Understanding this helps you explain how rate limiting works efficiently in real systems, showing you can reason about performance in practical code.
"What if we changed the algorithm to store each token as an individual object and check them one by one? How would the time complexity change?"
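For contrast, here is a hypothetical variant along those lines: each token is stored as an individual object, and refilling and counting scan the tokens one by one. The names (`TokenObject`, `allow_request_per_token`) are invented for illustration; the point is that each check now costs O(capacity) work instead of O(1):

```python
import time

class TokenObject:
    """A hypothetical per-token record, just to make the cost visible."""
    def __init__(self):
        self.created_at = time.time()

def allow_request_per_token(bucket, tokens_needed=1):
    now = time.time()
    # Refill: append token objects one by one -- O(refilled tokens).
    elapsed = now - bucket['last_checked']
    refill = int(elapsed * bucket['rate'])  # sketch: drops fractional refill
    for _ in range(refill):
        if len(bucket['tokens']) < bucket['capacity']:
            bucket['tokens'].append(TokenObject())
    bucket['last_checked'] = now
    # Check: count tokens by scanning them -- O(number of tokens) per request.
    available = sum(1 for _ in bucket['tokens'])
    if available >= tokens_needed:
        for _ in range(tokens_needed):
            bucket['tokens'].pop()
        return True
    return False

bucket = {'capacity': 3, 'tokens': [TokenObject() for _ in range(3)],
          'rate': 1, 'last_checked': time.time()}

print([allow_request_per_token(bucket) for _ in range(4)])
# → [True, True, True, False]
```

The behavior is the same, but every request now does work proportional to the bucket's capacity, so the per-request complexity degrades from O(1) to O(n) where n is the capacity.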