Date, time, and timestamp types in PostgreSQL - Time & Space Complexity
We want to understand how the time it takes to work with date and time types changes as we handle more data.
Specifically, how does the processing time grow when using date, time, or timestamp fields in queries?
Analyze the time complexity of the following query, which filters rows by a timestamp range.
```sql
SELECT *
FROM events
WHERE event_time >= '2024-01-01 00:00:00'
  AND event_time < '2024-02-01 00:00:00';
```
This query selects all events that happened in January 2024 by checking the timestamp column.
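To see the filter in action, here is a minimal runnable sketch using Python's built-in `sqlite3` with an in-memory database as a stand-in for PostgreSQL. The table name, column name, and range predicate mirror the query above; the sample rows are invented for illustration.

```python
import sqlite3

# In-memory SQLite database standing in for PostgreSQL.
# ISO-8601 timestamp strings compare correctly as text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, event_time TEXT)")
conn.executemany(
    "INSERT INTO events (event_time) VALUES (?)",
    [("2023-12-15 09:30:00",), ("2024-01-10 14:00:00",),
     ("2024-01-25 08:45:00",), ("2024-02-03 11:20:00",)],
)

# Same range filter as the query above: events in January 2024.
rows = conn.execute(
    "SELECT event_time FROM events "
    "WHERE event_time >= '2024-01-01 00:00:00' "
    "AND event_time < '2024-02-01 00:00:00'"
).fetchall()
print(len(rows))  # 2 of the 4 sample rows fall inside January 2024
```

Without an index, the database must evaluate the `WHERE` condition against every row, which is exactly the scan counted below.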
Look for repeated checks or scans in the query.
- Primary operation: scanning each row's `event_time` to check whether it falls in the date range.
- How many times: once per row in the `events` table.
As the number of rows grows, the number of timestamp checks grows too.
| Input Size (n rows) | Approx. Operations (timestamp checks) |
|---|---|
| 10 | 10 |
| 100 | 100 |
| 1000 | 1000 |
Pattern observation: The number of operations grows directly with the number of rows.
Time Complexity: O(n)
This means the time to run the query grows in a straight line as the number of rows increases.
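The linear pattern in the table can be simulated directly. This sketch models a sequential scan as a loop that compares every row's timestamp, so the check count always equals the row count; the data sizes are the same hypothetical ones used in the table above.

```python
from datetime import datetime

def count_checks(event_times, start, end):
    """Simulate a sequential scan: every row's timestamp is compared,
    so the number of checks equals the number of rows."""
    checks = 0
    matches = []
    for t in event_times:
        checks += 1  # one comparison per row, no shortcuts
        if start <= t < end:
            matches.append(t)
    return checks, matches

# Row counts mirroring the table above: checks grow one-for-one with rows.
for n in (10, 100, 1000):
    rows = [datetime(2024, 1, 15) for _ in range(n)]
    checks, _ = count_checks(rows, datetime(2024, 1, 1), datetime(2024, 2, 1))
    print(n, checks)  # checks == n, i.e. O(n)
```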
[X] Wrong: "Checking timestamps is instant and does not depend on data size."
[OK] Correct: Each row's timestamp must be checked, so more rows mean more checks and more time.
Understanding how filtering by date or time scales helps you write efficient queries and explain performance clearly.
"What if we add an index on the event_time column? How would the time complexity change?"
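As a hint toward that question: a B-tree index keeps timestamps in sorted order, so the database can jump to the start of the range instead of checking every row. A rough sketch of that idea, using a sorted Python list and binary search in place of a real index (the data here is invented):

```python
import bisect
from datetime import datetime

def range_lookup(sorted_times, start, end):
    """Find all timestamps in [start, end) with two binary searches,
    O(log n), rather than comparing every row."""
    lo = bisect.bisect_left(sorted_times, start)
    hi = bisect.bisect_left(sorted_times, end)
    return sorted_times[lo:hi]

# Six sample events across January and February 2024, kept sorted
# the way a B-tree index would keep them.
times = sorted(datetime(2024, m, d) for m in (1, 2) for d in (5, 15, 25))
january = range_lookup(times, datetime(2024, 1, 1), datetime(2024, 2, 1))
print(len(january))  # the 3 January entries, found without a full scan
```

With an index, locating the range boundary takes O(log n) comparisons, though the query still has to return each matching row, so the overall cost is often described as O(log n + k) for k matches.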