Longest Common Subsequence in C - Time & Space Complexity
We want to understand how the time needed to find the longest common subsequence changes as the input strings get longer.
How does the work grow when the two strings become bigger?
Analyze the time complexity of the following code snippet.
```c
int lcs(char *X, char *Y, int m, int n) {
    if (m == 0 || n == 0) {
        return 0;
    }
    if (X[m - 1] == Y[n - 1]) {
        return 1 + lcs(X, Y, m - 1, n - 1);
    } else {
        int a = lcs(X, Y, m, n - 1);
        int b = lcs(X, Y, m - 1, n);
        return (a > b) ? a : b;
    }
}
```
This code finds the length of the longest common subsequence between two strings using recursion.
Identify the repeated work: loops, recursion, and array traversals.
- Primary operation: Recursive calls that explore all combinations of substring pairs.
- How many times: Each call makes one recursive call when the last characters match, and two when they don't; the same substring pairs are recomputed many times.
As the lengths of the two strings increase, the number of recursive calls grows very fast because the function tries many substring pairs.
| Input Size (m, n) | Approx. Worst-Case Calls |
|---|---|
| 10, 10 | ~3.7 × 10^5 (hundreds of thousands) |
| 20, 20 | ~2.8 × 10^11 (hundreds of billions) |
| 30, 30 | ~2.4 × 10^17 (hundreds of quadrillions) |

(In the worst case, when no characters match, the total number of calls is about 2 · C(m + n, m).)
Pattern observation: Each character added to either string roughly doubles the worst-case number of calls, so the work grows exponentially.
Time Complexity: O(2^(m+n)) in the worst case
This means the time needed grows very fast, roughly doubling with each character added to either string.
Space Complexity: O(m + n) for the recursion stack, since each call removes at least one character before recursing and the deepest chain of calls has length m + n.
[X] Wrong: "The recursive calls only happen once per substring pair, so it should be much faster."
[OK] Correct: The recursion recomputes the same substring pairs over and over; these overlapping subproblems are what cause the exponential growth.
Understanding this complexity helps you explain why naive recursion is slow and why dynamic programming is needed to solve this problem efficiently.
"What if we use a table to store results of substring pairs (memoization)? How would the time complexity change?"