Longest Word in Dictionary Using Trie in DSA C++ - Time & Space Complexity
We want to understand how the time needed grows when finding the longest word in a dictionary using a Trie.
How does the number of steps change as the dictionary size and word lengths increase?
Analyze the time complexity of the following code snippet.
```cpp
#include <string>

struct TrieNode {
    TrieNode* children[26] = {nullptr};
    bool isEnd = false;
};

// Insert one lowercase word: O(L) for a word of length L.
void insert(TrieNode* root, const std::string& word) {
    TrieNode* node = root;
    for (char c : word) {
        int idx = c - 'a';
        if (!node->children[idx])
            node->children[idx] = new TrieNode();
        node = node->children[idx];
    }
    node->isEnd = true;
}

// DFS that only descends through nodes marking complete words,
// so every prefix of the answer is itself a word in the dictionary.
void dfs(TrieNode* node, TrieNode* root, std::string& current, std::string& longest) {
    if (!node) return;
    // Prune: a non-root node that does not end a word breaks the prefix chain.
    if (node != root && !node->isEnd) return;
    if (current.length() > longest.length())
        longest = current;
    for (int i = 0; i < 26; i++) {
        if (node->children[i]) {
            current.push_back('a' + i);
            dfs(node->children[i], root, current, longest);
            current.pop_back();
        }
    }
}
```
This code inserts words into a Trie and uses depth-first search to find the longest word in which every prefix is itself a word in the dictionary.
- Primary operation: insertion loops over each character of every word, and DFS recursively visits Trie nodes.
- How many times: insert runs once per word and touches each character once; DFS visits each Trie node at most once, and the number of nodes is bounded by the total number of characters across all words (plus the root).
As the number of words and their lengths grow, the total characters inserted and nodes visited increase roughly with the sum of all word lengths.
| Input Size (n words) | Approx. Operations (sum of word lengths) |
|---|---|
| 10 (avg length 5) | ~50 insert + ~50 DFS steps |
| 100 (avg length 5) | ~500 insert + ~500 DFS steps |
| 1000 (avg length 5) | ~5000 insert + ~5000 DFS steps |
Pattern observation: Operations grow roughly linearly with total characters in all words combined.
Time Complexity: O(N * L)
This means the time grows linearly with the number of words (N) times the average word length (L), that is, with the total number of characters inserted.
[X] Wrong: "The search for the longest word is just O(N) because we only check each word once."
[OK] Correct: We must consider the length of each word and the Trie traversal, so the total steps depend on both number and length of words.
Understanding this complexity helps you explain how tries efficiently handle prefix problems and why they are preferred over simple lists in many cases.
"What if we used a hash set instead of a Trie to check prefixes? How would the time complexity change?"