Top K Frequent Elements Using Heap in DSA JavaScript - Time & Space Complexity
We want to understand how the time needed grows when finding the top K frequent elements using a heap.
How does the work change as the list of numbers gets bigger?
Analyze the time complexity of the following code snippet.
```javascript
const topKFrequent = (nums, k) => {
  // Count how many times each number occurs: O(n).
  const freqMap = new Map();
  for (const num of nums) {
    freqMap.set(num, (freqMap.get(num) || 0) + 1);
  }
  // "heap" is a plain array kept in ascending frequency order by
  // re-sorting after every push -- it simulates a min-heap of size k.
  const heap = [];
  for (const [num, freq] of freqMap.entries()) {
    heap.push([freq, num]);
    heap.sort((a, b) => a[0] - b[0]);
    // Evict the least frequent candidate once we hold more than k.
    if (heap.length > k) heap.shift();
  }
  // Keep only the numbers, dropping the frequency counts.
  return heap.map(pair => pair[1]);
};
```
This code counts how often each number appears, then keeps the top K numbers with the highest counts using a small heap.
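As a quick sanity check, here is how the function might be exercised. The function is re-declared so the example runs on its own; the result order shown reflects this particular implementation, which keeps the array in ascending frequency order.

```javascript
// Re-declaring topKFrequent from the article so this example runs standalone.
const topKFrequent = (nums, k) => {
  const freqMap = new Map();
  for (const num of nums) freqMap.set(num, (freqMap.get(num) || 0) + 1);
  const heap = [];
  for (const [num, freq] of freqMap.entries()) {
    heap.push([freq, num]);
    heap.sort((a, b) => a[0] - b[0]);
    if (heap.length > k) heap.shift();
  }
  return heap.map(pair => pair[1]);
};

// 1 appears three times, 2 twice, 3 once; the top 2 are 1 and 2.
console.log(topKFrequent([1, 1, 1, 2, 2, 3], 2)); // → [2, 1] (ascending frequency)
console.log(topKFrequent([7], 1));                // → [7]
```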
Look at the parts that repeat work:
- Primary operation: Loop over all numbers to count frequency (once).
- Secondary operation: Loop over unique numbers to build and maintain the heap.
- Dominant operation: Re-sorting the heap array inside the loop, once per unique number. The array never holds more than k + 1 entries, so each sort costs O(k log k); a true heap would do the same job with a single O(log k) sift.
As the input list grows, the number of unique values can grow with it (up to n), and each unique value triggers one heap update.
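To make the O(log k) per-update cost concrete, here is one way a genuine binary min-heap could replace the sort-and-shift pattern. This is a sketch: the names `topKFrequentHeap`, `siftUp`, and `siftDown` are introduced here and are not part of the original snippet or any built-in JavaScript API.

```javascript
// Sketch: top K frequent using a real array-backed binary min-heap
// keyed on frequency, so each insert/evict costs O(log k).
const topKFrequentHeap = (nums, k) => {
  const freqMap = new Map();
  for (const num of nums) freqMap.set(num, (freqMap.get(num) || 0) + 1);

  const heap = []; // min-heap of [freq, num] pairs

  // Move the element at index i up until its parent is no larger: O(log k).
  const siftUp = (i) => {
    while (i > 0) {
      const p = (i - 1) >> 1;
      if (heap[p][0] <= heap[i][0]) break;
      [heap[p], heap[i]] = [heap[i], heap[p]];
      i = p;
    }
  };

  // Move the element at index i down until no child is smaller: O(log k).
  const siftDown = (i) => {
    for (;;) {
      let s = i;
      const l = 2 * i + 1, r = 2 * i + 2;
      if (l < heap.length && heap[l][0] < heap[s][0]) s = l;
      if (r < heap.length && heap[r][0] < heap[s][0]) s = r;
      if (s === i) break;
      [heap[s], heap[i]] = [heap[i], heap[s]];
      i = s;
    }
  };

  for (const [num, freq] of freqMap.entries()) {
    heap.push([freq, num]);
    siftUp(heap.length - 1);
    if (heap.length > k) {
      // Pop the minimum: replace the root with the last leaf, then sift down.
      heap[0] = heap[heap.length - 1];
      heap.pop();
      siftDown(0);
    }
  }
  return heap.map(([, num]) => num);
};
```

With this version, the loop over unique values genuinely costs O(log k) per iteration, matching the analysis below.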
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | Counting: 10 steps, Heap ops: ~10 * log(k) |
| 100 | Counting: 100 steps, Heap ops: ~100 * log(k) |
| 1000 | Counting: 1000 steps, Heap ops: ~1000 * log(k) |
Pattern observation: Counting grows linearly with input size, while heap operations grow with the number of unique values times log k.
Time Complexity: O(n log k)
With a true heap, counting takes O(n) and each of the at most n unique values costs O(log k) to insert or evict, so the total grows with n times log k, and log k is usually much smaller than n. (The snippet above simulates the heap by re-sorting a small array, which costs O(k log k) per unique value instead of O(log k); since the array never exceeds k + 1 entries, the practical conclusion is unchanged when k is small.)
Space Complexity: O(n) for the frequency map plus O(k) for the heap.
[X] Wrong: "Sorting the entire list of numbers is needed to find top K."
[OK] Correct: Sorting everything by frequency costs O(n log n). The heap keeps only the k best candidates at any moment, so each update touches at most k + 1 elements, saving time whenever k is much smaller than n.
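For contrast, the sort-everything approach the callout warns about might look like the sketch below (`topKBySorting` is a name introduced here for illustration). Sorting all unique values by frequency costs O(m log m) for m unique values, which approaches O(n log n) when most values are distinct, versus O(n log k) with a heap.

```javascript
// Sketch of the full-sort approach: correct, but does more work than needed,
// because it orders ALL unique values instead of tracking just the top k.
const topKBySorting = (nums, k) => {
  const freqMap = new Map();
  for (const num of nums) freqMap.set(num, (freqMap.get(num) || 0) + 1);
  return [...freqMap.entries()]
    .sort((a, b) => b[1] - a[1]) // sort every unique value by descending frequency
    .slice(0, k)                 // keep the k most frequent
    .map(([num]) => num);
};

console.log(topKBySorting([1, 1, 1, 2, 2, 3], 2)); // → [1, 2]
```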
Understanding how to use heaps to find top K elements efficiently is a valuable skill that shows you can handle large data smartly.
"What if we used a max heap instead of a min heap? How would the time complexity change?"