Bigtable for time-series data in GCP - Time & Space Complexity
When using Bigtable to store time-series data, it is important to understand how the time to read or write data changes as the amount of data grows.
We want to know how the number of operations grows when we add more time points.
Analyze the time complexity of writing multiple time-series data points to Bigtable.
// Pseudocode for writing time-series data points
for (int i = 0; i < n; i++) {
bigtable.write(rowKeyForTime(i), dataPoint(i));
}
This loop writes n data points into Bigtable, each keyed by a unique timestamp-based row key.
Identify the API calls, resource provisioning, and data transfers that repeat.
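The loop above can be sketched in Python with a stub client that counts write calls instead of contacting Bigtable. `StubBigtable`, `row_key_for_time`, and `data_point` are illustrative names for this sketch, not part of the real Bigtable client API.

```python
class StubBigtable:
    """Counts write calls instead of talking to a real Bigtable instance."""
    def __init__(self):
        self.write_calls = 0

    def write(self, row_key, value):
        self.write_calls += 1  # one API call per data point


def row_key_for_time(i):
    # In practice this would encode a timestamp; a zero-padded index
    # stands in for one here.
    return f"metric#{i:010d}"


def data_point(i):
    return {"value": i * 1.5}


def write_series(table, n):
    # One write call per point: n points -> n API calls, i.e. O(n).
    for i in range(n):
        table.write(row_key_for_time(i), data_point(i))


table = StubBigtable()
write_series(table, 1000)
print(table.write_calls)  # 1000
```

Running this with n = 10, 100, and 1000 reproduces the operation counts in the table below: the call count always equals the number of points.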
- Primary operation: Single write API call to Bigtable for each data point.
- How many times: Once per data point, so n times.
Each new data point requires one write operation, so the total operations grow directly with the number of points.
| Input Size (n) | Approx. API Calls/Operations |
|---|---|
| 10 | 10 write calls |
| 100 | 100 write calls |
| 1000 | 1000 write calls |
Pattern observation: The number of operations grows linearly with the input size.
Time Complexity: O(n)
This means the time to write all data points grows proportionally with the number of points.
[X] Wrong: "Writing multiple points at once will take the same time as writing one point."
[OK] Correct: Each point requires a separate write operation, so more points mean more total operations and more time.
Understanding how data volume affects operation count helps you design scalable systems and explain your choices clearly in interviews.
"What if we batch multiple time-series points into a single write call? How would the time complexity change?"
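As a starting point for that question, here is a hedged sketch of batching with a counting stub. The real Bigtable clients do offer batched mutation APIs (e.g. mutating multiple rows per request), but `StubBigtable`, `mutate_rows` as written here, and `write_batched` are illustrative stand-ins, not the actual client interface.

```python
class StubBigtable:
    """Counts API calls and rows written instead of contacting Bigtable."""
    def __init__(self):
        self.api_calls = 0
        self.rows_written = 0

    def mutate_rows(self, rows):
        self.api_calls += 1          # one API call per batch
        self.rows_written += len(rows)


def write_batched(table, points, batch_size):
    # Group points into batches and issue one call per batch.
    for start in range(0, len(points), batch_size):
        table.mutate_rows(points[start:start + batch_size])


points = [(f"metric#{i:010d}", i * 1.5) for i in range(1000)]
table = StubBigtable()
write_batched(table, points, batch_size=100)
print(table.api_calls)     # 10  (ceil(1000 / 100) batches)
print(table.rows_written)  # 1000 (every point is still written)
```

Batching divides the API-call count by the batch size, a constant factor, so latency and per-request overhead drop sharply. But every point still has to be transferred and written, so the overall time complexity remains O(n); batching improves the constant, not the growth rate.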