Hash Table Concept and Hash Functions in DSA Python - Time & Space Complexity
We want to understand how fast a hash table can find or store data using a hash function.
The question is: how does the time for these operations change as we add more items?
Analyze the time complexity of the following code snippet.
```python
class SimpleHashTable:
    def __init__(self, size):
        self.size = size
        self.table = [None] * size  # fixed-size array of slots

    def hash_function(self, key):
        # Map any hashable key to an index in [0, size)
        return hash(key) % self.size

    def insert(self, key, value):
        index = self.hash_function(key)
        self.table[index] = value  # note: a colliding key silently overwrites this slot

    def get(self, key):
        index = self.hash_function(key)
        return self.table[index]
```
This code creates a simple hash table with a basic hash function and supports insert and get operations.
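A minimal usage sketch (the class is repeated so the snippet runs on its own; small integer keys are used because in CPython `hash(n) == n` for small ints, which makes the computed indices easy to follow):

```python
class SimpleHashTable:
    def __init__(self, size):
        self.size = size
        self.table = [None] * size

    def hash_function(self, key):
        return hash(key) % self.size

    def insert(self, key, value):
        self.table[self.hash_function(key)] = value

    def get(self, key):
        return self.table[self.hash_function(key)]

table = SimpleHashTable(10)
table.insert(42, "answer")  # hash(42) % 10 == 2, so slot 2 holds "answer"
table.insert(7, "lucky")    # hash(7) % 10 == 7, so slot 7 holds "lucky"
print(table.get(42))        # prints: answer
print(table.get(7))         # prints: lucky
```

Each call does one hash computation and one array access, regardless of how many items are stored.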
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Calculating the hash and accessing the array index.
- How many times: once per insert or get; there are no loops inside these methods.
Each operation does a quick calculation and direct access, so time stays about the same even if the table grows.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 1 operation per insert/get |
| 100 | 1 operation per insert/get |
| 1000 | 1 operation per insert/get |
Pattern observation: The time does not increase with more items; it stays constant.
Time Complexity: O(1)
This means each insert or get takes about the same time no matter how many items are stored.
[X] Wrong: "Hash table operations always take longer as more items are added because the array gets bigger."
[OK] Correct: The hash function computes the index directly, so the time stays constant unless many keys collide at the same index.
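The collision caveat is easy to demonstrate: with a table of size 1, every key maps to index 0, and this simple design overwrites rather than handling the collision (a sketch, repeating the class so it runs standalone):

```python
class SimpleHashTable:  # same class as above, repeated so this snippet is self-contained
    def __init__(self, size):
        self.size = size
        self.table = [None] * size

    def insert(self, key, value):
        self.table[hash(key) % self.size] = value

    def get(self, key):
        return self.table[hash(key) % self.size]

t = SimpleHashTable(1)  # size 1: every key hashes to index 0
t.insert("a", 1)
t.insert("b", 2)        # collides with "a" and silently overwrites it
print(t.get("a"))       # prints: 2 -- "a"'s value was lost
```

Real hash tables avoid this data loss with a collision-resolution strategy such as chaining or open addressing.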
Understanding hash tables and their time complexity shows you know how to store and find data quickly, a key skill in many coding problems.
"What if many keys hash to the same index and we handle collisions by searching a list? How would the time complexity change?"