Catching runtime errors in JavaScript - Time & Space Complexity
When we catch runtime errors in JavaScript, we want to know how the cost of error handling grows as the program runs with bigger inputs. The question is: does wrapping code in try/catch change how the running time scales?
Analyze the time complexity of the following code snippet.
```javascript
try {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === null) {
      throw new Error('Null value found');
    }
    process(arr[i]);
  }
} catch (error) {
  console.error(error.message);
}
```
This code loops through an array, checking each item for null: it processes non-null items and throws an error as soon as a null is found, which the catch block then handles.
- Primary operation: The for-loop that goes through each item in the array.
- How many times: Once for each element until a null is found or the end is reached.
The time to run grows as the array gets bigger because the loop checks each item one by one.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | ~10 null checks and `process()` calls |
| 100 | ~100 null checks and `process()` calls |
| 1000 | ~1000 null checks and `process()` calls |
Pattern observation: The work grows steadily as the input size grows; more items mean more checks.
Time Complexity: O(n)
This means the running time grows in direct proportion to the number of items checked; the catch itself is a one-time, constant-cost step.
[X] Wrong: "Catching an error inside the loop makes the whole process take longer than checking all items normally."
[OK] Correct: The error is thrown at most once and caught once, at which point the loop stops early. When no error occurs, the try/catch wrapper adds no per-item cost, so the loop runs in the same O(n) time as it would without it.
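A quick comparison makes the point. In this hypothetical run on a clean array (no nulls), the try/catch wrapper is entered once, never triggers, and every item is still processed in a single pass:

```javascript
// Count the work done on a clean array (no nulls) — the try/catch
// wrapper never triggers, so all n items run at the usual O(n) cost.
const clean = [1, 2, 3, 4, 5];
let count = 0;

try {
  for (let i = 0; i < clean.length; i++) {
    if (clean[i] === null) throw new Error('Null value found');
    count++; // stand-in for process(clean[i])
  }
} catch (error) {
  console.error(error.message);
}

console.log(count); // 5 — every item processed, no error overhead
```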
Understanding how error handling affects time helps you write reliable code and explain your thinking clearly in interviews.
"What if we moved the try-catch block inside the loop to catch errors for each item separately? How would the time complexity change?"
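One possible way to explore that question is sketched below; the sample data and counters are assumptions for illustration. With a try/catch per item, each error is handled individually and the loop continues past nulls, so every element is still visited exactly once and the overall complexity stays O(n):

```javascript
// Per-item try/catch: each null triggers an error that is caught
// immediately, and the loop keeps going instead of stopping early.
const items = [1, null, 3, null, 5];
let handled = 0;
let errors = 0;

for (let i = 0; i < items.length; i++) {
  try {
    if (items[i] === null) throw new Error('Null value found');
    handled++; // stand-in for process(items[i])
  } catch (error) {
    errors++; // error caught for this item only; loop continues
  }
}

console.log(handled, errors); // 3 2
```

Note that although the big-O class is unchanged, throwing and catching many errors has a real constant-factor cost, which is why a plain `if` check is usually preferred for expected conditions like nulls.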