Why always blocks are needed in Verilog - Performance Analysis
We want to understand why always blocks are used in Verilog and how their execution cost grows as designs get bigger.
What happens to the work done inside always blocks when the input size or design complexity increases?
Analyze the time complexity of this simple always block example.
```verilog
always @(posedge clk) begin
    for (int i = 0; i < N; i++) begin
        reg_array[i] <= data_in[i];
    end
end
```
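For reference, here is one way the fragment might sit inside a complete module. The module name, port widths, and the `WIDTH` parameter are assumptions filled in around the original snippet, not part of it:

```verilog
// Hypothetical wrapper for the snippet above (SystemVerilog syntax).
// Only clk, data_in, reg_array, and N come from the original text.
module reg_copy #(
    parameter int N     = 8,   // number of elements (assumed default)
    parameter int WIDTH = 8    // element width (assumption)
) (
    input  logic             clk,
    input  logic [WIDTH-1:0] data_in  [N],
    output logic [WIDTH-1:0] reg_array[N]
);
    always_ff @(posedge clk) begin
        for (int i = 0; i < N; i++) begin
            reg_array[i] <= data_in[i];
        end
    end
endmodule
```

Here `always_ff` is used in place of plain `always` to make the sequential intent explicit; the behavior is the same.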
This code copies each element of `data_in` into `reg_array` on every rising edge of `clk`.
Look at what repeats inside the always block.
- Primary operation: The for-loop that copies each element from input to register.
- How many times: The loop body executes N times on every positive clock edge.
As N grows, the number of copy operations grows too.
| Input Size (N) | Approx. Operations per clock |
|---|---|
| 10 | 10 copy operations |
| 100 | 100 copy operations |
| 1000 | 1000 copy operations |
Pattern observation: The work grows directly with N, so doubling N doubles the operations.
Time Complexity: O(N)
This means the work inside the always block grows linearly with the size of the input data.
[X] Wrong: "Always blocks run only once, so their size doesn't affect performance."
[OK] Correct: Always blocks run repeatedly on events like clock edges, so bigger loops inside them mean more work every time they run.
Understanding how always blocks scale helps you design efficient hardware and explain your design choices clearly in interviews.
"What if we replaced the for-loop with parallel assignments? How would that affect the time complexity inside the always block?"
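One hedged sketch of that alternative: if `data_in` and `reg_array` are declared as single packed vectors (an assumption; the original indexed them element by element), the whole copy collapses into one assignment, so the statement count inside the always block is O(1):

```verilog
// Sketch assuming packed-vector declarations instead of
// per-element indexing (an assumption, not from the original).
logic [N*WIDTH-1:0] data_in;
logic [N*WIDTH-1:0] reg_array;

always_ff @(posedge clk) begin
    reg_array <= data_in;  // one statement; N*WIDTH flip-flops update in parallel
end
```

It is worth noting that even the original for-loop is unrolled by synthesis into N parallel register updates, so the O(N) figure describes per-cycle simulation work and code size rather than circuit latency; the hardware updates all N registers in the same clock cycle either way.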