Why hashes are used everywhere in Ruby - Performance Analysis
We want to understand how fast operations on Ruby hashes run as the data grows.
Specifically, how does the time to find or add items change when the hash gets bigger?
Analyze the time complexity of the following Ruby code using a hash.
```ruby
my_hash = {}

# Insert 1000 key-value pairs.
1000.times do |i|
  my_hash[i] = i * 2
end

# Look up a single value by its key.
value = my_hash[500]
```
This code adds 1000 key-value pairs to a hash and then looks up one value by its key.
Identify what repeats and which operations dominate the cost.
- Primary operation: Adding items to the hash inside a loop.
- How many times: 1000 insertions, then a single lookup.
As we add more items, each insertion and lookup stays quick.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 insertions, each very fast |
| 100 | About 100 insertions, still very fast each |
| 1000 | About 1000 insertions, each still quick |
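The trend in the table can be checked empirically. The sketch below (my own illustration, not from the original text) uses Ruby's standard `Benchmark` module to measure the average time per insertion as n grows; absolute numbers depend on your machine, but the per-insertion time should stay roughly flat.

```ruby
require "benchmark"

# For each size n, build a fresh hash and report the average
# time spent on a single insertion. If insertion is O(1) on
# average, this number should not grow meaningfully with n.
[1_000, 100_000, 1_000_000].each do |n|
  h = {}
  total = Benchmark.realtime do
    n.times { |i| h[i] = i * 2 }
  end
  puts format("n = %9d  avg insert: %.1e s", n, total / n)
end
```

Run it a few times; timing noise is normal, but you should not see the per-insertion cost scale with n the way it would for, say, inserting at the front of an array.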
Pattern observation: The time per insertion or lookup does not grow much as the hash gets bigger.
Time Complexity: O(1) per insertion or lookup (average case)
This means finding or adding an item in a hash takes about the same time no matter how many items are inside. Note that the loop as a whole is O(n): it performs n insertions, each of which is O(1) on average.
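The "direct jump" relies on the fact that every Ruby object carries a hash code, exposed via `Object#hash`, which the `Hash` uses to pick a bucket instead of scanning all entries. A minimal sketch of that idea:

```ruby
# Every Ruby object has a #hash code; a Hash uses it to jump
# straight to the right bucket rather than searching linearly.
key = "user_42"
puts key.hash  # a large integer, stable within one process

# Keys that are eql? always share a hash code, which is exactly
# why a later lookup with an equal key finds the stored value.
puts "user_42".hash == "user_42".dup.hash  # => true
```

(The hash code is seeded per process in Ruby, so the integer itself changes between runs; only equality between codes matters here.)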
[X] Wrong: "Looking up a key in a hash takes longer as the hash gets bigger."
[OK] Correct: Hashes use a clever way to jump directly to the spot, so lookup time stays almost the same even if the hash grows.
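The contrast above can be made concrete. In this sketch (my own example), `Array#index` must scan elements one by one (O(n) in the worst case), while `Hash#[]` jumps to the value via the key's hash code (O(1) on average):

```ruby
# Same 1000 values stored two ways.
arr  = (0...1000).map { |i| i * 2 }   # value at position i is i * 2
hash = {}
1000.times { |i| hash[i] = i * 2 }

# Array: linear scan from the front until the value is found.
puts arr.index(998)   # => 499

# Hash: direct lookup by key, no scanning.
puts hash[499]        # => 998
```

With 1000 elements the difference is invisible; with millions, the array scan grows linearly while the hash lookup stays flat.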
Understanding why hash operations are fast helps you explain why hashes appear everywhere in Ruby, and how to choose the right data structure for a given job.
"What if we used an array instead of a hash for lookups? How would the time complexity change?"