Switch statement in Java - Time & Space Complexity
We want to see how the running time of a switch statement changes as the input grows. Specifically: how does the time grow when we add more cases?
Analyze the time complexity of the following code snippet.
```java
int day = 3;
switch (day) {
    case 1:  System.out.println("Monday");    break;
    case 2:  System.out.println("Tuesday");   break;
    case 3:  System.out.println("Wednesday"); break;
    case 4:  System.out.println("Thursday");  break;
    default: System.out.println("Other day");
}
```
This code prints the name of the day based on the number given.
Identify any repeated work: loops, recursion, or array traversals. (There are none here.)
- Primary operation: a table lookup or direct jump to the matching case.
- How many times: once — typically a single indexed jump or lookup, i.e., O(1).
For primitive types like int, javac compiles a switch into a `tableswitch` instruction (a jump table) when the cases are dense, or a `lookupswitch` instruction (a sorted key table, typically resolved by binary search) when they are sparse. For dense cases like the snippet above, execution is a single constant-time jump.
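The two compilation strategies can be seen side by side in a small sketch (class and method names here are illustrative, not from the original snippet). Compiling this file and running `javap -c SwitchShapes` shows `tableswitch` for the dense method and `lookupswitch` for the sparse one:

```java
public class SwitchShapes {
    // Dense, contiguous cases (1..4): javac emits a tableswitch,
    // a direct indexed jump — O(1) no matter how many cases exist.
    static String dense(int day) {
        switch (day) {
            case 1:  return "Monday";
            case 2:  return "Tuesday";
            case 3:  return "Wednesday";
            case 4:  return "Thursday";
            default: return "Other day";
        }
    }

    // Sparse cases: javac emits a lookupswitch, a sorted key table
    // the JVM typically resolves with binary search — O(log n).
    static String sparse(int code) {
        switch (code) {
            case 1:     return "one";
            case 1000:  return "thousand";
            case 10000: return "ten thousand";
            default:    return "unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(dense(3));     // Wednesday
        System.out.println(sparse(1000)); // thousand
    }
}
```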
| Number of cases | Approx. operations |
|---|---|
| 5 | 1 lookup |
| 50 | 1 lookup |
| 500 | 1 lookup |
Pattern observation: The number of operations stays constant regardless of cases.
Time Complexity: O(1)
This means constant time to execute, independent of case count, due to compiler optimizations like jump tables.
[X] Wrong: "A switch statement checks cases one by one like if-else, so O(n)."
[OK] Correct: Java compiles switches to efficient table-driven code (jump tables for dense ints), not linear scans.
Knowing how switches are compiled demonstrates JVM/compiler depth and the ability to distinguish high-level source code from its low-level execution.
"What if cases are very sparse (e.g., 1, 1000, 10000)? How does Java handle it? (Hint: if-else chain or hashed switch)"