Enqueue Using Linked List in DSA C - Time & Space Complexity
We want to understand how the time to add an item to a linked-list queue grows as the queue gets bigger. In other words: how does the enqueue operation's running time change as the queue size increases? Analyze the time complexity of the following code snippet.
```c
#include <stdlib.h>

struct Node {
    int data;
    struct Node* next;
};

/* Append a value at the rear of the queue in O(1) time. */
void enqueue(struct Node** front, struct Node** rear, int value) {
    struct Node* newNode = malloc(sizeof(struct Node));
    if (newNode == NULL)
        return;                      /* allocation failed; queue unchanged */
    newNode->data = value;
    newNode->next = NULL;
    if (*rear == NULL) {             /* empty queue: new node is both ends */
        *front = *rear = newNode;
        return;
    }
    (*rear)->next = newNode;         /* link after the current last node */
    *rear = newNode;                 /* advance the rear pointer */
}
```
This code adds a new element to the end of a linked-list queue, updating the front and rear pointers as needed.
Identify any loops, recursion, or array traversals that would repeat work.
- Primary operation: creating a new node and linking it at the end.
- How many times: once per enqueue call; there are no loops or recursion inside.
Each enqueue adds one node by directly linking at the rear pointer without searching.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 1 main linking operation |
| 100 | About 1 main linking operation |
| 1000 | About 1 main linking operation |
Pattern observation: The time stays roughly the same no matter how big the queue is.
Time Complexity: O(1)
This means adding an item takes the same constant amount of time regardless of how many elements the queue holds.
[X] Wrong: "Enqueue takes longer as the queue grows because we must find the end each time."
[OK] Correct: We keep a rear pointer, so we add directly without searching, keeping time constant.
Keeping enqueue fast by maintaining a rear pointer demonstrates efficient queue design, a pattern that appears in many real coding tasks.
"What if we did not keep a rear pointer and had to find the end each time? How would the time complexity change?"
