Logical (boolean) type in R Programming - Time & Space Complexity
We want to understand how the time to work with logical (boolean) values changes as we handle more data.
How does the program's speed change when it processes many TRUE or FALSE values?
Analyze the time complexity of the following code snippet.
# Create a logical vector of length n
n <- 1000
logical_vec <- rep(TRUE, n)
# Count how many TRUE values
count_true <- sum(logical_vec)
# Check if any value is FALSE
any_false <- any(!logical_vec)
This code creates a logical vector of TRUE values, counts how many are TRUE, and checks whether any are FALSE.
Identify the loops, recursion, or vector traversals that do repeated work.
- Primary operation: Going through each element in the logical vector to count or check values.
- How many times: Once for each element, so n times where n is the vector length.
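Conceptually, counting the TRUE values works like the explicit loop below. This is a simplified sketch of the idea, not what base R actually runs: `sum()` performs this traversal in compiled C code, and the helper name `count_true_manual` is made up for illustration.

```r
# Sketch of what counting TRUE values involves:
# every one of the n elements is visited exactly once.
count_true_manual <- function(logical_vec) {
  count <- 0
  for (value in logical_vec) {
    if (isTRUE(value)) {      # one check per element
      count <- count + 1
    }
  }
  count
}

count_true_manual(rep(TRUE, 5))   # 5 elements, 5 checks
```

Because the loop body runs once per element, the number of checks equals the vector length n, which is the pattern the table below makes concrete.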
As the number of logical values grows, the time to count or check them grows too.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 checks |
| 100 | 100 checks |
| 1000 | 1000 checks |
Pattern observation: The work grows directly with the number of elements. Double the elements, double the work.
Time Complexity: O(n)
This means the time to process logical values grows in direct proportion to the number of values: double the data, double the time.
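You can observe this linear growth yourself with `system.time()`. The exact numbers depend on your machine and on timer noise (for small vectors the call finishes too quickly to measure), so treat the output as illustrative only:

```r
# Time sum() on logical vectors of increasing length.
# Elapsed time should grow roughly in proportion to n.
for (n in c(1e7, 2e7, 4e7)) {
  logical_vec <- rep(TRUE, n)
  elapsed <- system.time(sum(logical_vec))["elapsed"]
  cat("n =", n, " elapsed =", elapsed, "seconds\n")
}
```

If doubling n roughly doubles the elapsed time, that is O(n) behavior showing up in practice.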
[X] Wrong: "Counting TRUE values takes the same time no matter how many values there are."
[OK] Correct: The program must look at each value to count it, so more values mean more time.
Understanding how simple checks on logical data scale helps you explain efficiency clearly and shows you know how programs handle data step-by-step.
"What if we used a function that stops checking as soon as it finds a FALSE? How would the time complexity change?"