Throwing Functions with `throws` in Swift - Time & Space Complexity
When we use throwing functions in Swift, we want to know how the cost of running the code grows as the input grows. The question is: how does the running time of a throwing function scale with input size?
Analyze the time complexity of the following code snippet.
```swift
import Foundation  // needed for NSError

func findIndex(of value: Int, in array: [Int]) throws -> Int {
    for (index, element) in array.enumerated() {
        if element == value {
            return index
        }
    }
    throw NSError(domain: "ValueNotFound", code: 1, userInfo: nil)
}
```
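A call site must acknowledge the possible error, typically with `do`/`catch` or `try?`. A minimal usage sketch (the function is repeated here only so the snippet compiles on its own):

```swift
import Foundation

func findIndex(of value: Int, in array: [Int]) throws -> Int {
    for (index, element) in array.enumerated() {
        if element == value {
            return index
        }
    }
    throw NSError(domain: "ValueNotFound", code: 1, userInfo: nil)
}

// do/catch handles the miss explicitly.
do {
    let index = try findIndex(of: 3, in: [1, 2, 3, 4])
    print(index)  // prints 2
} catch {
    print("Value not found")
}

// try? converts a thrown error into nil instead.
let missing = try? findIndex(of: 9, in: [1, 2])
print(missing == nil)  // prints true
```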
This function searches for a value in an array and throws an error if it is not found.
- Primary operation: Looping through the array elements one by one.
- How many times: Up to the size of the array (n times) in the worst case.
As the array gets bigger, the function may check more elements before finding the value or throwing an error.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | Up to 10 checks |
| 100 | Up to 100 checks |
| 1000 | Up to 1000 checks |
Pattern observation: The number of checks grows directly with the size of the array.
Time Complexity: O(n)
This means the time to run the function grows linearly with the size of the input array.
[X] Wrong: "Throwing an error makes the function slower for all inputs."
[OK] Correct: Throwing the error is a constant-time operation and it only executes when the value is absent. The O(n) worst case comes from scanning the array, not from the throw itself; a successful search can even return early.
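To make this concrete, here is a hypothetical instrumented variant (the `checks` counter and the `ValueNotFound` error type are not part of the original example). It shows that an early hit returns after one comparison, while the throw only runs after every element has been checked:

```swift
struct ValueNotFound: Error {}

// Hypothetical instrumented variant: counts comparisons before returning or throwing.
func findIndexCounting(of value: Int, in array: [Int]) throws -> (index: Int, checks: Int) {
    var checks = 0
    for (index, element) in array.enumerated() {
        checks += 1
        if element == value {
            return (index, checks)  // early exit: best case is O(1)
        }
    }
    throw ValueNotFound()  // reached only after all n checks: worst case O(n)
}

let hit = try! findIndexCounting(of: 1, in: [1, 2, 3, 4])
print(hit.checks)  // prints 1 — the throwing line never ran

do {
    _ = try findIndexCounting(of: 9, in: [1, 2, 3, 4])
} catch {
    print("threw only after scanning all 4 elements")
}
```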
Understanding how throwing functions behave helps you explain error handling clearly and shows you can think about performance even when errors occur.
"What if the function searched a sorted array using binary search and still threw errors? How would the time complexity change?"
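As a sketch of an answer (this function is an illustration, not part of the original example): binary search halves the remaining range on every iteration, so on a sorted array the worst case drops to O(log n). The function can still throw on a miss, and the throw still does not change the asymptotic cost:

```swift
struct ValueNotFound: Error {}

// Assumes `array` is sorted in ascending order.
func binarySearchIndex(of value: Int, in array: [Int]) throws -> Int {
    var low = 0
    var high = array.count - 1
    while low <= high {
        let mid = (low + high) / 2  // halve the search range each pass
        if array[mid] == value {
            return mid
        } else if array[mid] < value {
            low = mid + 1
        } else {
            high = mid - 1
        }
    }
    throw ValueNotFound()  // reached after at most O(log n) comparisons
}
```

Whether the search succeeds or throws, the loop runs at most about log2(n) + 1 times, so the time complexity is O(log n) either way.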