B-tree Index Structure in DBMS Theory: Time & Space Complexity
When using a B-tree index, we want to know how fast it helps find data as the database grows.
We ask: How does the search time change when the number of records increases?
Let's analyze the time complexity of searching for a value in a B-tree index.
Python-style pseudocode for searching a B-tree index (`node` is assumed to expose `keys`, `children`, and `is_leaf`):

```python
def search_btree(node, key):
    if node.is_leaf:
        return key in node.keys                # search keys within the leaf
    # find the child pointer whose key range contains the search key
    i = 0
    while i < len(node.keys) and key >= node.keys[i]:
        i += 1
    return search_btree(node.children[i], key)
```
This code searches for a key by moving down the tree, one level at a time, from the root to a leaf.
Look at what repeats during the search:
- Primary operation: Moving down one level in the tree and searching keys in that node.
- How many times: Once per tree level until reaching a leaf.
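The descent can be traced with a tiny hardcoded tree. The nested-dict layout and the two-level shape below are illustrative assumptions, not a real storage format; the point is that exactly one node is checked per level:

```python
# Hypothetical two-level B-tree: a root with two separator keys
# and three leaf children. Each dict is one node.
root = {
    "keys": [20, 40],
    "children": [
        {"keys": [5, 10, 15], "children": []},
        {"keys": [25, 30, 35], "children": []},
        {"keys": [45, 50, 55], "children": []},
    ],
}

def search(node, key, checks=0):
    checks += 1                       # one node check per tree level
    if not node["children"]:          # leaf: look for the key here
        return key in node["keys"], checks
    i = 0                             # pick the child whose key range fits
    while i < len(node["keys"]) and key >= node["keys"][i]:
        i += 1
    return search(node["children"][i], key, checks)

found, checks = search(root, 30, 0)   # descends root -> middle leaf
print(found, checks)                  # found after 2 node checks
```

Whether the key exists or not, the work is bounded by the number of levels, not by the number of stored keys.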
As the number of records grows, the tree gets taller, but only slowly.
| Records (n) | Approx. Node Checks |
|---|---|
| 10 | 2-3 |
| 100 | 3-4 |
| 1,000 | 4-5 |
Pattern observation: The number of node checks grows slowly, adding roughly one step each time the data size multiplies by a constant factor (here, tenfold).
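The slow growth in the table can be reproduced with a logarithm. A minimal sketch, assuming a branching factor of 10 so that each level multiplies the tree's capacity tenfold:

```python
import math

# With an assumed fan-out of 10, tree height grows like log10(n):
# each tenfold jump in records adds roughly one more node check.
for n in (10, 100, 1000, 1_000_000):
    levels = round(math.log10(n))    # approximate root-to-leaf node checks
    print(f"{n:>9} records -> about {levels} node checks")
```

Real B-trees have much larger fan-outs (hundreds of keys per node), which makes the height even smaller, but the logarithmic pattern is the same.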
Time Complexity: O(log n)
Space Complexity: O(n), since the index stores roughly one entry per record.
This means the search time grows slowly as the data grows, so B-tree lookups stay efficient even for very large tables.
[X] Wrong: "Searching a B-tree is as slow as scanning all records one by one."
[OK] Correct: A B-tree's branching structure lets the search skip directly toward the right leaf, so it never has to check every record.
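The contrast above can be made concrete with a sorted list of keys, where Python's `bisect` stands in for index-style lookup (a simplification of a real B-tree, which also avoids keeping everything in one array):

```python
import bisect

records = list(range(0, 1_000_000, 2))   # sorted keys: 0, 2, 4, ...

def linear_scan(keys, target):
    # O(n): checks keys one by one, like a full table scan
    return any(k == target for k in keys)

def indexed_lookup(keys, target):
    # O(log n): halves the search range at every step
    i = bisect.bisect_left(keys, target)
    return i < len(keys) and keys[i] == target

print(linear_scan(records, 999_998))     # touches ~500,000 keys
print(indexed_lookup(records, 999_998))  # touches ~20 keys
```

Both return the same answer; the difference is how much work they do to get there.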
Understanding B-tree time complexity shows you know how databases keep searches fast, a key skill for working with real data.
"What if the B-tree nodes could hold twice as many keys? How would that affect the search time complexity?"