Collision Handling Using Chaining in DSA Python - Time & Space Complexity
When we use chaining to handle collisions in a hash table, we want to know how fast operations like search, insert, and delete run.
We ask: How does the time to do these operations grow as the number of items increases?
Analyze the time complexity of the following code snippet.
```python
class HashTable:
    def __init__(self, size):
        self.size = size
        # One list ("chain") per bucket to hold colliding entries.
        self.table = [[] for _ in range(size)]

    def insert(self, key, value):
        index = hash(key) % self.size
        chain = self.table[index]
        # Walk the chain: update the value if the key already exists.
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)
                return
        chain.append((key, value))

    def search(self, key):
        index = hash(key) % self.size
        # Walk the chain at this bucket looking for the key.
        for k, v in self.table[index]:
            if k == key:
                return v
        return None
```
This code shows a hash table using chaining where each bucket holds a list of key-value pairs.
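A minimal usage sketch (the class is restated so this block runs on its own; the key names are illustrative):

```python
class HashTable:
    def __init__(self, size):
        self.size = size
        self.table = [[] for _ in range(size)]  # one chain per bucket

    def insert(self, key, value):
        chain = self.table[hash(key) % self.size]
        for i, (k, _) in enumerate(chain):
            if k == key:            # key already present: update in place
                chain[i] = (key, value)
                return
        chain.append((key, value))  # new key: append to the chain

    def search(self, key):
        for k, v in self.table[hash(key) % self.size]:
            if k == key:
                return v
        return None

ht = HashTable(8)
ht.insert("apple", 3)
ht.insert("banana", 7)
print(ht.search("apple"))   # → 3
print(ht.search("cherry"))  # → None (key was never inserted)
```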
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Looping through the linked list (chain) at a bucket during search or insert.
- How many times: Depends on the number of items in that bucket, which grows with collisions.
As more items are added, chains get longer if the table size stays the same.
| Input Size (n) | Approx. Operations (search in chain) |
|---|---|
| 10 | About 1 (if table size ~10, chains short) |
| 100 | About 10 (chains longer if table size unchanged) |
| 1000 | About 100 (chains much longer without resizing) |
Pattern observation: If the table size stays fixed, operations grow roughly proportional to the number of items per bucket.
Time Complexity: O(1 + n/k), where n is the total number of items and k is the number of buckets (the table size).
This means the time depends on the average chain length n/k, also called the load factor: total items divided by table size. With a good hash function that spreads keys evenly, search and insert take expected O(1 + n/k) time.
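A small experiment confirms that with a fixed bucket count k, the average chain length is exactly n/k (the bucket count and item count below are arbitrary choices for illustration):

```python
# Build a fixed-size table (k = 10 buckets) and insert n = 1000 items.
# The average chain length must be n / k = 100, no matter how the
# hash function distributes the keys among the buckets.
k = 10
table = [[] for _ in range(k)]
for item in range(1000):
    index = hash(item) % k
    table[index].append((item, item))

avg_chain = sum(len(chain) for chain in table) / k
print(avg_chain)  # → 100.0 (1000 items / 10 buckets)
```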
[X] Wrong: "Hash table operations are always O(1) no matter what."
[OK] Correct: If many items collide into the same bucket, the chain grows and operations slow down, making time depend on chain length.
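To see the degradation directly, here is a sketch where the keys are chosen deliberately so they all land in the same bucket; one chain then holds every item, and searching that chain is O(n):

```python
k = 10
table = [[] for _ in range(k)]
# Keys that are all multiples of k map to bucket 0 under key % k,
# so every insert collides and a single chain holds everything.
for key in range(0, 1000, k):
    table[key % k].append((key, key))

chain_lengths = [len(chain) for chain in table]
print(chain_lengths[0])        # → 100: one bucket holds all 100 items
print(sum(chain_lengths[1:]))  # → 0: every other bucket is empty
```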
Understanding how chaining affects time helps you explain hash table performance clearly and shows you know how to handle collisions well.
"What if we resize the hash table when chains get too long? How would that change the time complexity?"
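One way to explore that question is a resizing sketch: grow the table (doubling here is a common choice, not the only one) whenever the load factor n/k crosses a threshold, and rehash every entry into the new buckets. A single resize costs O(n), but it happens rarely enough that inserts stay O(1) amortized and chains stay short. The threshold and growth factor below are assumptions for illustration:

```python
class ResizingHashTable:
    """Chaining hash table that doubles its bucket count when the
    load factor (items / buckets) exceeds max_load. Sketch only;
    the 0.75 threshold and 2x growth are illustrative assumptions."""

    def __init__(self, size=8, max_load=0.75):
        self.size = size
        self.max_load = max_load
        self.count = 0
        self.table = [[] for _ in range(size)]

    def insert(self, key, value):
        chain = self.table[hash(key) % self.size]
        for i, (k, _) in enumerate(chain):
            if k == key:                 # existing key: update in place
                chain[i] = (key, value)
                return
        chain.append((key, value))
        self.count += 1
        if self.count / self.size > self.max_load:
            self._resize(2 * self.size)  # keep load factor bounded

    def _resize(self, new_size):
        # Collect every entry, then rehash into the larger table.
        old_entries = [pair for chain in self.table for pair in chain]
        self.size = new_size
        self.table = [[] for _ in range(new_size)]
        for key, value in old_entries:
            self.table[hash(key) % new_size].append((key, value))

    def search(self, key):
        for k, v in self.table[hash(key) % self.size]:
            if k == key:
                return v
        return None

ht = ResizingHashTable()
for i in range(100):
    ht.insert(i, i * i)
print(ht.size)        # grew well past the initial 8 buckets
print(ht.search(42))  # → 1764
```

Because the load factor never exceeds the threshold for long, the expected chain length stays O(1), which is how real hash maps keep their average-case O(1) guarantee.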