configMAX_SYSCALL_INTERRUPT_PRIORITY in FreeRTOS - Time & Space Complexity
We want to understand how setting configMAX_SYSCALL_INTERRUPT_PRIORITY affects the time it takes for interrupt service routines to run in FreeRTOS.
Specifically, how does this priority setting influence the execution time when interrupts occur?
Analyze the time complexity of interrupt handling with this priority setting.
```c
// Example interrupt handler
void vExampleISR(void) {
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    // Perform quick interrupt work here.

    // Notify a task if needed; only the "FromISR" API variants are
    // safe to call from an interrupt context.
    xTaskNotifyFromISR(xTaskHandle, 0, eNoAction, &xHigherPriorityTaskWoken);

    // Request a context switch on exit if a higher-priority task was woken.
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}

// configMAX_SYSCALL_INTERRUPT_PRIORITY defines the maximum (most urgent)
// ISR priority that may call interrupt-safe FreeRTOS APIs.
// The shift places the value in the top bits of the 8-bit priority field,
// as Cortex-M ports require.
#define configMAX_SYSCALL_INTERRUPT_PRIORITY (5 << (8 - configPRIO_BITS))
```
This code shows an interrupt service routine (ISR) that uses interrupt-safe FreeRTOS `FromISR` API calls. These calls are permitted only if the ISR's logical priority is at or below configMAX_SYSCALL_INTERRUPT_PRIORITY. Note that on ARM Cortex-M ports, lower numeric values mean higher urgency, so "at or below" logically corresponds to a numeric priority value at or above this setting.
Look at what repeats when interrupts happen.
- Primary operation: The interrupt handler runs each time an interrupt occurs.
- How many times: Depends on how often the interrupt triggers, which can vary widely.
Total execution time grows with how often interrupts fire; the priority setting determines which interrupts may call FreeRTOS APIs and which can preempt kernel critical sections, not how fast each handler runs.
| Interrupt Frequency (n) | Approx. Operations |
|---|---|
| 10 per second | 10 ISR runs |
| 100 per second | 100 ISR runs |
| 1000 per second | 1000 ISR runs |
Pattern observation: More frequent interrupts mean more ISR executions, so total work grows linearly with interrupt count.
Time Complexity: O(n)
This means the total time spent handling interrupts grows directly with how many interrupts occur.
[X] Wrong: "Setting a higher configMAX_SYSCALL_INTERRUPT_PRIORITY makes ISRs run faster."
[OK] Correct: The priority setting controls which ISRs can call FreeRTOS APIs, not how fast the ISR code itself runs. Execution time depends on ISR code length and frequency, not just priority.
Understanding how interrupt priority affects system responsiveness and timing is a key skill in embedded programming. It shows you can reason about how system settings impact performance.
What if we changed configMAX_SYSCALL_INTERRUPT_PRIORITY to a lower value? How would that affect the time complexity of interrupt handling?