Collision Handling Using Chaining in DSA C - Time & Space Complexity
When we use chaining to handle collisions in a hash table, we want to know how fast operations like search, insert, and delete run.
We ask: How does the time to do these operations grow as the number of items increases?
Analyze the time complexity of inserting a key into a hash table using chaining.
```c
// Insert key into hash table with chaining
void insert(HashTable* ht, int key) {
    int index = hashFunction(key, ht->size);
    Node* newNode = createNode(key);
    // Prepend the new node to the bucket's list: O(1)
    newNode->next = ht->buckets[index];
    ht->buckets[index] = newNode;
}
```
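The snippet relies on helpers it does not define. A minimal sketch of what they might look like, assuming a simple modulo hash and singly linked nodes (the names `Node`, `HashTable`, `hashFunction`, and `createNode` come from the snippet; `createTable` is a hypothetical extra for setup):

```c
#include <stdlib.h>

typedef struct Node {
    int key;
    struct Node* next;
} Node;

typedef struct {
    int size;       // number of buckets (k)
    Node** buckets; // array of linked-list heads
} HashTable;

// Simple modulo hash; the adjustment keeps the index
// non-negative for negative keys.
int hashFunction(int key, int size) {
    int index = key % size;
    return index < 0 ? index + size : index;
}

Node* createNode(int key) {
    Node* n = malloc(sizeof(Node));
    n->key = key;
    n->next = NULL;
    return n;
}

HashTable* createTable(int size) {
    HashTable* ht = malloc(sizeof(HashTable));
    ht->size = size;
    ht->buckets = calloc(size, sizeof(Node*)); // all buckets start empty
    return ht;
}
```

Because inserting only computes one hash and relinks two pointers, it is O(1) no matter how long the bucket's list already is.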
This code adds a new key at the start of the linked list in the bucket found by the hash function.
In chaining, the main repeating operation happens when searching or deleting keys in a bucket's linked list.
- Primary operation: Traversing the linked list in a bucket.
- How many times: Depends on the number of keys in that bucket, which varies with total keys and hash distribution.
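The traversal described above can be sketched as search and delete routines (a standalone sketch; the type layout mirrors the insert snippet, and `delete`'s pointer-to-pointer relinking is one common idiom, not the only way):

```c
#include <stdlib.h>

typedef struct Node { int key; struct Node* next; } Node;
typedef struct { int size; Node** buckets; } HashTable;

static int hashFunction(int key, int size) { return ((key % size) + size) % size; }

// Search: hash to a bucket (O(1)), then walk its chain.
// Cost is proportional to the chain length, not the table size.
int search(HashTable* ht, int key) {
    for (Node* cur = ht->buckets[hashFunction(key, ht->size)]; cur; cur = cur->next)
        if (cur->key == key) return 1;   // found
    return 0;                            // walked the whole chain: not found
}

// Delete follows the same traversal, tracking the link to relink the list.
void delete(HashTable* ht, int key) {
    Node** link = &ht->buckets[hashFunction(key, ht->size)];
    while (*link) {
        if ((*link)->key == key) {       // unlink and free the match
            Node* victim = *link;
            *link = victim->next;
            free(victim);
            return;
        }
        link = &(*link)->next;
    }
}
```

Both routines do the same dominant work: following `next` pointers down one bucket's chain.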
As more keys are added, linked lists in buckets get longer if the table size stays the same.
| Total Keys (n) | Approx. Chain Length (bucket count fixed) |
|---|---|
| 10 | 1 to 2 |
| 100 | 5 to 10 |
| 1000 | 50 to 100 |
Pattern observation: without resizing, the average linked-list length grows in proportion to the number of keys, so operations slow down.
Time Complexity: O(1 + n/k) on average, where n is the total number of keys and k is the number of buckets. The ratio n/k is called the load factor.
This means an operation takes constant time for the hash plus time proportional to the average chain length. If the hash distributes keys poorly, the worst case is O(n), with every key in one bucket.
[X] Wrong: "Chaining always gives constant-time operations regardless of input size."
[OK] Correct: "Chaining gives expected O(1) operations only while the load factor stays small; if many keys hash to the same bucket, the chain grows and operations degrade, up to O(n) in the worst case."
Understanding how chaining affects time helps you explain hash table performance clearly and shows you know how to handle collisions well.
What if we doubled the number of buckets as keys increase? How would the time complexity change?
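One way to explore that question: if the table doubles its bucket count whenever the load factor n/k crosses a threshold, the average chain length stays bounded by a constant, so operations become O(1) on average (amortized over inserts). A sketch of the resize step, assuming the same node layout as before (the rehash loop here is an illustrative implementation, not taken from the article):

```c
#include <stdlib.h>

typedef struct Node { int key; struct Node* next; } Node;
typedef struct { int size; Node** buckets; } HashTable;

static int hashFunction(int key, int size) { return ((key % size) + size) % size; }

// Double the bucket array and rehash every node into its new bucket.
// A single resize costs O(n), but doubling makes resizes rare enough
// that the cost averages out to O(1) per insert.
void resize(HashTable* ht) {
    int newSize = ht->size * 2;
    Node** newBuckets = calloc(newSize, sizeof(Node*));
    for (int i = 0; i < ht->size; i++) {
        Node* cur = ht->buckets[i];
        while (cur != NULL) {
            Node* next = cur->next;                      // save before relinking
            int index = hashFunction(cur->key, newSize); // position in the new array
            cur->next = newBuckets[index];
            newBuckets[index] = cur;
            cur = next;
        }
    }
    free(ht->buckets);
    ht->buckets = newBuckets;
    ht->size = newSize;
}
```

Each node must be rehashed because its bucket index depends on the table size: a key that lived in bucket `key % k` may belong in a different bucket once there are `2k` buckets.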
