Collision handling (open addressing) in Data Structures Theory - Time & Space Complexity
When storing data in a hash table, two different keys can hash to the same slot; this is called a collision. Open addressing handles collisions by probing for another free slot within the table itself.
We want to know how the time to find or insert an item grows as the table fills up.
Analyze the time complexity of inserting an item using linear probing in open addressing.
```javascript
function insert(key, table) {
  let index = hash(key) % table.length;
  let probes = 1; // slots examined so far, including the home slot
  while (table[index] !== undefined) {
    if (probes >= table.length) {
      throw new Error("table is full"); // guard against an infinite loop
    }
    index = (index + 1) % table.length; // move to the next slot
    probes++;
  }
  table[index] = key;
}
```
This code tries to place a key in the hash table. If the spot is taken, it checks the next spot until it finds an empty one.
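To see the probing in action, here is a self-contained version that also returns the number of slots examined. The character-code-sum hash is a toy chosen for illustration (it makes "ab" and "ba" collide on purpose), not a production hash function:

```javascript
// Toy hash: sum of character codes. Illustrative only; anagrams collide.
function hash(key) {
  let h = 0;
  for (const ch of key) h += ch.charCodeAt(0);
  return h;
}

function insert(key, table) {
  let index = hash(key) % table.length;
  let probes = 1; // slots examined, including the home slot
  while (table[index] !== undefined) {
    if (probes >= table.length) throw new Error("table is full");
    index = (index + 1) % table.length; // linear probe: try the next slot
    probes++;
  }
  table[index] = key;
  return probes;
}

const table = new Array(8);
insert("ab", table); // home slot is empty: 1 probe
insert("ba", table); // same hash as "ab": collides, settles one slot over
```

The second insert demonstrates the cost of a collision: it examines the occupied home slot, then the neighbor.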
Identify the loops, recursion, or array traversals that repeat:
- Primary operation: Checking table slots one by one (linear probing).
- How many times: Up to the number of slots checked until an empty one is found, which depends on how full the table is.
As the table fills, more slots may be checked before finding an empty one.
| Table state (load factor α) | Approx. probes per insert |
|---|---|
| Mostly empty (α ≈ 0.1) | About 1 to 2 |
| Half full (α ≈ 0.5) | About 2 to 3 on average |
| Almost full (α ≈ 0.9) | Dozens, sometimes more |
Pattern observation: the fuller the table, the longer the probe sequence. Insertion time is driven by the load factor (items stored divided by total slots), not by the raw number of items alone.
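This pattern can be checked with a small experiment that fills a table to a given load factor and then measures the cost of one more insert. The key-modulo-length "hash" and random integer keys are assumptions made for illustration:

```javascript
// Linear-probing insert that reports how many slots it examined.
function probesForInsert(key, table) {
  let index = key % table.length;
  let probes = 1;
  while (table[index] !== undefined) {
    index = (index + 1) % table.length;
    probes++;
  }
  table[index] = key;
  return probes;
}

// Fill a fresh table to the given load factor, then measure one insert.
// Repeat over several trials and average the result.
function averageProbes(size, loadFactor, trials) {
  let total = 0;
  for (let t = 0; t < trials; t++) {
    const table = new Array(size);
    const fill = Math.floor(size * loadFactor);
    for (let i = 0; i < fill; i++) {
      probesForInsert(Math.floor(Math.random() * 1e9), table);
    }
    total += probesForInsert(Math.floor(Math.random() * 1e9), table);
  }
  return total / trials;
}

console.log(averageProbes(1000, 0.5, 100)); // typically around 2 to 3
console.log(averageProbes(1000, 0.9, 100)); // often dozens of probes
```

The jump between the two printed numbers is the whole story: the table size stayed at 1000, but the probe count exploded as the load factor rose.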
Time Complexity: O(n)
This means in the worst case, when colliding keys pile up into one long run, an insert may check nearly every slot one by one, so time grows linearly with the number of items. With a good hash function and a table kept well below full (commonly under about 70-80% load), the expected number of probes stays small.
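The worst case is easy to trigger deliberately. If every key has the same home slot (simulated here with a constant "hash", an assumption for illustration), the keys form one long cluster and the final insert must scan every slot:

```javascript
// Degenerate hash that maps every key to slot 0, forcing one long cluster.
function worstCaseProbes(n) {
  const table = new Array(n);
  let lastProbes = 0;
  for (let k = 0; k < n; k++) {
    let index = 0; // constant "hash": every key's home slot is 0
    let probes = 1;
    while (table[index] !== undefined) {
      index = (index + 1) % table.length;
      probes++;
    }
    table[index] = k;
    lastProbes = probes;
  }
  return lastProbes; // probes used by the final insert
}

console.log(worstCaseProbes(1000)); // 1000: the last insert scans every slot
```

This is exactly the O(n) behavior the complexity statement above describes: the n-th insert into the cluster costs n probes.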
[X] Wrong: "Insertion always takes constant time because hashing is fast."
[OK] Correct: When many slots are filled, you may need to check many places before finding an empty one, making insertion slower.
Understanding how open addressing affects time helps you explain trade-offs in hash tables clearly and shows you grasp real-world data structure behavior.
What if we changed linear probing to quadratic probing? How would the time complexity change?
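As a starting point for that question, here is one common quadratic probing variant: after a collision, try offsets 1, 4, 9, ... from the home slot instead of 1, 2, 3, .... The identity hash and the table size are assumptions for illustration; note that quadratic probing is only guaranteed to find a free slot under extra conditions, such as a prime table size kept at most half full:

```javascript
// Quadratic probing: offsets grow as i*i, spreading colliding keys apart
// and reducing the primary clustering that slows linear probing down.
function insertQuadratic(key, table, hash) {
  const home = hash(key) % table.length;
  for (let i = 0; i < table.length; i++) {
    const index = (home + i * i) % table.length;
    if (table[index] === undefined) {
      table[index] = key;
      return i + 1; // number of slots examined
    }
  }
  throw new Error("no free slot found"); // possible even when not full
}

const t = new Array(11);   // a prime size helps the probe sequence cover slots
const id = (k) => k;       // identity hash, assumed for illustration
insertQuadratic(0, t, id);  // home slot 0 is free
insertQuadratic(11, t, id); // collides at 0, lands at offset 1 (slot 1)
insertQuadratic(22, t, id); // collides at 0 and 1, lands at offset 4 (slot 4)
```

The worst case is still O(n), but colliding keys no longer pile into one contiguous run, so average behavior at moderate load factors tends to be better than linear probing's.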