Register (multi-bit flip-flop) in Verilog - Time & Space Complexity
We want to understand how the time it takes to update a multi-bit register changes as the number of bits grows.
How does the work needed to store data scale with the size of the register?
Analyze the time complexity of the following code snippet.
```systemverilog
module register #(parameter WIDTH = 8) (
    input  logic             clk,
    input  logic [WIDTH-1:0] d,
    output logic [WIDTH-1:0] q
);
    always_ff @(posedge clk) begin
        q <= d;
    end
endmodule
```
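To see the register in action, here is a minimal testbench sketch; everything except the `register` module itself (signal values, clock period, module name `register_tb`) is an illustrative assumption:

```systemverilog
// Minimal testbench sketch for the register above.
module register_tb;
  logic       clk = 0;
  logic [7:0] d, q;

  // Instantiate the register with its default 8-bit width.
  register dut (.clk(clk), .d(d), .q(q));

  // 10 ns clock period.
  always #5 clk = ~clk;

  initial begin
    d = 8'hA5;
    @(posedge clk);            // value is captured on this edge
    #1 $display("q = %h", q);  // prints: q = a5
    $finish;
  end
endmodule
```

The `#1` delay lets the nonblocking assignment settle before the value is read back.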
This code defines a register that stores a multi-bit value on each clock pulse.
Identify the operations that repeat (loops, recursion, array traversals). Here there is no explicit loop: the single vector assignment implicitly updates every bit of the register.
- Primary operation: Updating each bit of the register on the clock edge.
- How many times: Once per clock cycle, for each of the WIDTH bits.
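The hidden per-bit work can be made explicit with an equivalent, purely illustrative generate-loop version (module name `register_explicit` is an assumption); synthesis produces the same WIDTH flip-flops either way:

```systemverilog
module register_explicit #(parameter WIDTH = 8) (
  input  logic             clk,
  input  logic [WIDTH-1:0] d,
  output logic [WIDTH-1:0] q
);
  // One flip-flop per bit: the vector assignment q <= d hides WIDTH
  // independent single-bit updates, written out here one per generate
  // iteration.
  genvar i;
  generate
    for (i = 0; i < WIDTH; i++) begin : bit_ff
      always_ff @(posedge clk)
        q[i] <= d[i];
    end
  endgenerate
endmodule
```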
As the number of bits (WIDTH) increases, the work to update the register grows proportionally.
| Input Size (WIDTH) | Approx. Operations |
|---|---|
| 10 | 10 bit updates |
| 100 | 100 bit updates |
| 1000 | 1000 bit updates |
Pattern observation: The number of operations grows linearly with the number of bits.
Time Complexity: O(n), where n = WIDTH
In simulation, the work to update the register each clock cycle grows in direct proportion to the number of bits. In synthesized hardware, the n flip-flops all update in parallel on the clock edge, so the latency per update is roughly constant, while the area (the space half of the analysis) grows linearly with n.
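The linear growth is visible directly at elaboration time: each instantiation below produces exactly WIDTH flip-flops (the wrapper module and its widths are chosen only for illustration):

```systemverilog
module scaling_example (
  input  logic         clk,
  input  logic [9:0]   d10,
  input  logic [99:0]  d100,
  output logic [9:0]   q10,
  output logic [99:0]  q100
);
  // 10 flip-flops vs. 100 flip-flops: storage (and simulation work per
  // clock edge) scales linearly with the WIDTH parameter.
  register #(.WIDTH(10))  r10  (.clk(clk), .d(d10),  .q(q10));
  register #(.WIDTH(100)) r100 (.clk(clk), .d(d100), .q(q100));
endmodule
```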
[X] Wrong: "A multi-bit register costs no more work no matter how many bits it has."
[OK] Correct: Each bit needs its own flip-flop, so more bits mean more hardware and more simulation work per update, even though the bits switch in parallel on the clock edge.
Understanding how register size affects update time helps you design efficient hardware and explain timing in digital circuits clearly.
"What if the register was updated only when the input changes? How would the time complexity change?"