Two Sum Problem: Classic Hash-Map Solution in Python (Time & Space Complexity)
We want to understand how fast the classic hash-based solution for the Two Sum problem runs as the input size grows.
Specifically, how does the number of steps change when we have more numbers to check?
Analyze the time complexity of the following code snippet.
```python
def two_sum(nums, target):
    seen = {}  # maps each value -> the index where we saw it
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:          # average O(1) hash-map lookup
            return [seen[complement], i]
        seen[num] = i
    return []
```
This code finds two numbers in the list that add up to the target using a hash map to check complements quickly.
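For instance, running the function on a small input (the function is repeated here so the snippet is self-contained):

```python
def two_sum(nums, target):
    seen = {}  # maps each value -> the index where we saw it
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i
    return []

print(two_sum([2, 7, 11, 15], 9))  # → [0, 1], since 2 + 7 == 9
print(two_sum([1, 2, 3], 100))     # → [], no pair sums to 100
```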
Identify the operations that repeat: loops, recursion, and array traversals.
- Primary operation: Looping through each number once.
- How many times: Exactly once for each number in the list.
- Secondary operation: Checking if complement exists in the hash map (average constant time).
As the list gets longer, the code checks each number once and does a quick lookup for its complement.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 checks and 10 lookups |
| 100 | About 100 checks and 100 lookups |
| 1000 | About 1000 checks and 1000 lookups |
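The table's counts can be checked empirically with a lightly instrumented copy of the function; the `ops` counter is added here purely for illustration. Passing a target no pair can reach forces the worst case, where every element is visited exactly once:

```python
def two_sum_counted(nums, target):
    ops = 0          # counts loop iterations (one check + one lookup each)
    seen = {}
    for i, num in enumerate(nums):
        ops += 1
        complement = target - num
        if complement in seen:
            return [seen[complement], i], ops
        seen[num] = i
    return [], ops

# Worst case: no valid pair, so the loop runs n times.
for n in (10, 100, 1000):
    _, ops = two_sum_counted(list(range(n)), -1)
    print(n, ops)  # prints 10 10, then 100 100, then 1000 1000
```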
Pattern observation: The number of steps grows linearly with the input size.
Time Complexity: O(n)
This means the time to find the two numbers grows in direct proportion to how many numbers we have.
Space Complexity: O(n), since in the worst case the hash map stores an entry for every number before a pair is found (or the list is exhausted).
[X] Wrong: "The `in` check happens inside the loop, so there is effectively a loop inside a loop and the time must be O(n²)."
[OK] Correct: The code has only one loop; a hash-map membership check (`complement in seen`) runs in average constant time, so it does not add another full pass over the data.
Understanding this solution's time complexity shows you can use data structures smartly to speed up problems, a key skill in interviews.
"What if we used a list instead of a hash map for lookups? How would the time complexity change?"
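One way to explore that question is a sketch that swaps the dict for a plain list (illustrative only, not the recommended approach). Membership testing on a Python list scans elements one by one, so each check costs O(n), and the function as a whole degrades to O(n²):

```python
def two_sum_list(nums, target):
    seen = []  # stores values in order; lookups must scan the whole list
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:              # O(n) linear scan, not O(1)
            # seen holds one value per earlier index, so the position in
            # seen equals the original index in nums
            return [seen.index(complement), i]
        seen.append(num)
    return []

print(two_sum_list([2, 7, 11, 15], 9))  # → [0, 1], same answer, slower growth rate
```

The results are identical; only the cost per lookup changes, which is exactly why the hash-map version wins as n grows.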