Negative indexing for exclusion in R Programming - Time & Space Complexity
We want to understand how the time it takes to exclude elements using negative indexing changes as the size of the data grows.
How does the program's work increase when we remove items by negative indexing?
Analyze the time complexity of the following code snippet.
```r
# Create a vector of numbers from 1 to n
n <- 1000
vec <- 1:n

# Exclude elements at positions 2 and 5
result <- vec[-c(2, 5)]
```
This code creates a vector and then removes elements at specific positions using negative indexing.
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Copying all elements except those excluded into a new vector.
- How many times: Each element is checked once to decide if it should be included.
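Conceptually, the exclusion behaves like the loop below. This is a hypothetical sketch of what `vec[-c(2, 5)]` does, not R's actual implementation (which is written in C), but the per-element work is the same idea: one pass that copies every kept element into a new, preallocated vector.

```r
# Hypothetical sketch of negative-index exclusion as an explicit loop.
exclude_positions <- function(vec, positions) {
  keep <- rep(TRUE, length(vec))
  keep[positions] <- FALSE            # mark excluded positions
  out <- vector(typeof(vec), sum(keep))
  j <- 1
  for (i in seq_along(vec)) {         # single pass over all n elements
    if (keep[i]) {
      out[j] <- vec[i]                # copy each kept element into the result
      j <- j + 1
    }
  }
  out
}

identical(exclude_positions(1:10, c(2, 5)), (1:10)[-c(2, 5)])  # TRUE
```

The loop body runs once per element regardless of how few positions are excluded, which is where the linear cost comes from.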
As the vector size grows, the program must look at each element to decide if it should be kept or removed.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 checks |
| 100 | About 100 checks |
| 1000 | About 1000 checks |
Pattern observation: The work grows roughly in direct proportion to the size of the vector.
Time Complexity: O(n)
This means the time to exclude elements grows linearly with the number of elements in the vector.
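A rough way to see the linear pattern yourself is to time the same exclusion at a few sizes. The absolute numbers depend on your machine; only the roughly proportional growth matters.

```r
# Time vec[-c(2, 5)] at increasing vector sizes.
# Expect elapsed time to grow roughly in proportion to n.
for (n in c(1e5, 1e6, 1e7)) {
  vec <- 1:n
  t <- system.time(result <- vec[-c(2, 5)])["elapsed"]
  cat(sprintf("n = %8d  elapsed = %.4f s  length(result) = %d\n",
              as.integer(n), t, length(result)))
}
```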
[X] Wrong: "Removing a few elements with negative indexing is instant and does not depend on vector size."
[OK] Correct: Even if only a few elements are removed, R still visits every element to build a new result vector, so the time grows with the vector's size.
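A quick check that exclusion builds a new vector rather than shrinking the original in place:

```r
vec <- 1:10
result <- vec[-c(2, 5)]

length(result)   # 8: two elements fewer than the original
length(vec)      # 10: the original vector is unchanged

identical(result, c(1L, 3L, 4L, 6L, 7L, 8L, 9L, 10L))  # TRUE
```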
Understanding how data exclusion scales helps you write efficient code and explain your choices clearly in real projects or interviews.
What if we excluded elements using a logical vector instead of negative indexing? How would the time complexity change?
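As a starting point for that question, here is the same exclusion written with a logical vector. Building the mask touches a few positions, and filtering still requires one pass over all n elements, so this approach is also O(n):

```r
n <- 1000
vec <- 1:n

# Build a logical mask: TRUE for positions to keep, FALSE for 2 and 5
keep <- rep(TRUE, n)
keep[c(2, 5)] <- FALSE

result_logical <- vec[keep]
identical(result_logical, vec[-c(2, 5)])  # TRUE
```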