Internal vs. External Fragmentation in Operating Systems - Performance Comparison
When studying fragmentation in operating systems, it is important to see how wasted space grows as memory requests accumulate. The goal is to understand how the cost of wasted memory changes as allocations and deallocations pile up.
Analyze the time complexity of memory allocation and fragmentation checking.
```
for each memory_request in requests:
    allocate a memory block
    if block_size > requested_size:
        internal_fragmentation += block_size - requested_size
    check free memory blocks for gaps
    external_fragmentation = sum of small free gaps
```
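The pseudocode above can be turned into a small, self-contained Python simulation. The fixed block size, the minimum useful gap threshold, and the `simulate` helper are illustrative assumptions made up for this sketch, not part of any real allocator:

```python
# Hypothetical model: every request is served from a fixed-size block, so a
# request smaller than the block wastes the remainder (internal
# fragmentation). Free gaps too small to be useful count as external
# fragmentation.

BLOCK_SIZE = 64       # fixed allocation unit (assumption)
MIN_USEFUL_GAP = 16   # gaps smaller than this are treated as unusable

def simulate(requests, free_blocks):
    internal = 0
    external = 0
    for requested in requests:
        # Allocate one fixed-size block; track the wasted space inside it.
        if requested <= BLOCK_SIZE:
            internal += BLOCK_SIZE - requested
        # Re-scan the free blocks after each allocation, as in the pseudocode.
        external = sum(gap for gap in free_blocks if gap < MIN_USEFUL_GAP)
    return internal, external

internal, external = simulate([10, 50, 30], free_blocks=[8, 100, 4])
print(internal, external)  # 102 12
```

Here internal waste is (64-10) + (64-50) + (64-30) = 102 units, and only the two gaps below the threshold (8 and 4) count toward external fragmentation.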
This code simulates allocating memory blocks and measuring internal and external fragmentation.
Look at what repeats as memory requests grow.
- Primary operation: Looping over each memory request to allocate and check fragmentation.
- How many times: Once per memory request, plus scanning free blocks to find gaps.
As the number of memory requests increases, the time to allocate and check fragmentation grows.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 allocations and checks |
| 100 | About 100 allocations and checks |
| 1000 | About 1000 allocations and checks |
Pattern observation: The work grows roughly in direct proportion to the number of requests.
Time Complexity: O(n)
This means the time to handle fragmentation grows linearly with the number of memory requests, assuming the scan of free blocks takes roughly constant time per request. If the free list itself grows with every request, each scan gets longer and the total cost approaches O(n^2).
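A quick way to see the linear pattern, and what changes if the free-block scan is not constant-time, is to count operations directly. The two helpers below are hypothetical counters written for this illustration:

```python
def ops_constant_scan(n):
    # One unit of work per request when the free-list scan is O(1):
    # total work is exactly n.
    return sum(1 for _ in range(n))

def ops_growing_scan(n):
    # If the free list gains one entry per request, request i scans
    # about i blocks: total work is 1 + 2 + ... + n = n(n+1)/2.
    return sum(i + 1 for i in range(n))

for n in (10, 100, 1000):
    print(n, ops_constant_scan(n), ops_growing_scan(n))
```

For n = 1000 the constant-scan count is 1000 while the growing-scan count is 500500, which is why the O(n) claim depends on the free-list scan staying cheap.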
[X] Wrong: "Fragmentation only happens once and does not grow with more allocations."
[OK] Correct: Fragmentation can increase as more memory blocks are allocated and freed, causing more wasted space over time.
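The correct statement can be demonstrated with a toy allocate/free loop. The list-of-booleans heap model and the `gaps_after_cycles` helper are simplifications invented for this example:

```python
# Sketch: alternately allocating pairs of blocks and freeing one of each
# pair leaves a growing number of isolated gaps, showing that external
# fragmentation accumulates with activity rather than happening once.

def gaps_after_cycles(cycles):
    heap = []  # True = block in use, False = free gap
    for _ in range(cycles):
        heap.append(True)   # allocate a block
        heap.append(True)   # allocate a second block
        heap[-2] = False    # free the first of the pair -> isolated gap
    return sum(1 for in_use in heap if not in_use)

for c in (1, 5, 10):
    print(c, gaps_after_cycles(c))  # 1 gap, then 5, then 10
```

Each cycle strands one more unusable gap between live blocks, so the wasted space keeps growing as the system runs.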
Understanding how fragmentation grows helps you explain memory management challenges clearly and shows you can think about resource costs as systems scale.
"What if we used a different allocation strategy that always fits memory exactly? How would the time complexity of fragmentation checking change?"