Lecture 19 – Sequential logic (Digital Systems)
Sequential circuits
In contrast to combinational circuits, sequential circuits have outputs that depend on the inputs from previous times in the history of the circuit. To construct such circuits, we must introduce a new element with 'memory': a (positive edge-triggered) D-type flip-flop.
The flip-flop has two inputs, d and clk, and one or two outputs, one called q, and perhaps (for convenience) another equal to ¬q.
We will concentrate on fully synchronous designs where all the clock inputs of flip-flops are connected to a single, regular, global clock signal – perhaps derived from a quartz crystal. The little triangle in the circuit symbol means that the state of the flip-flop can change only when the clock signal makes a transition from 0 to 1 – a positive-going clock edge.
Because signals can now change over time, each wire carries a sequence of Boolean values ⟨a₀, a₁, a₂, ...⟩. Combinational gates compute the same function in each clock period: for an AND gate, zₜ = aₜ ∧ bₜ.
The D-type always produces as its output the value that its input had during the previous clock period:
- zₜ₊₁ = aₜ.
We could spell out this behaviour in a kind of truth table, though it is a bit boring.
aₜ | zₜ | zₜ₊₁
---|---|---
0 | 0 | 0
0 | 1 | 0
1 | 0 | 1
1 | 1 | 1
As you can see, the next state is always the same as the input, whatever the current state might be. (Other kinds of flip-flop exist where the next state depends on the current state as well as the input – they used to be popular with SSI logic, because they sometimes lead to simpler designs.)
We will assume the initial condition z₀ = 0. Most flip-flops have an 'asynchronous reset' input that can be used to ensure this.
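To make this behaviour concrete, here is a minimal C sketch of a simulated D-type (the type and function names are only illustrative): at each simulated clock edge, q takes on the value just sampled from d.

```c
#include <stdio.h>
#include <stdbool.h>

/* Minimal model of a positive edge-triggered D-type flip-flop:
   q always holds the value that d had at the previous clock edge. */
typedef struct { bool q; } dff;

/* Simulate one positive clock edge: sample d into q. */
static void dff_tick(dff *ff, bool d) {
    ff->q = d;
}

int main(void) {
    dff ff = { false };               /* asynchronous reset: z0 = 0 */
    bool a[] = { 1, 1, 0, 1, 0, 0 };  /* made-up input trace */
    for (int t = 0; t < 6; t++) {
        printf("t=%d a=%d z=%d\n", t, a[t], ff.q);
        dff_tick(&ff, a[t]);          /* z(t+1) = a(t) */
    }
    return 0;
}
```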
If you search for datasheets of SSI chips, you can find equivalent circuits for flip-flops in terms of gates: but you shouldn't take them too seriously, because the dynamic behaviour of a flip-flop over time cannot be deduced from the static behaviour of a collection of logic gates.
Example: parity detector
Before we look at the problem of specifying and designing sequential circuits, we should work out the behaviour of a given circuit.
In this circuit, the signals satisfy
- xₜ = aₜ ⊕ zₜ
- zₜ₊₁ = xₜ
- z₀ = 0
We can prove by induction on t that
- zₜ = a₀ ⊕ a₁ ⊕ ... ⊕ aₜ₋₁.
The base case is the initial condition z₀ = 0 (the empty XOR); for the inductive step, zₜ₊₁ = xₜ = aₜ ⊕ zₜ = a₀ ⊕ a₁ ⊕ ... ⊕ aₜ₋₁ ⊕ aₜ. So the output is the parity of the input signals received so far.
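If it helps, the circuit can be checked with a small C simulation (the input trace is made up): one state bit z is updated through the XOR gate at each clock edge, and a reference accumulator confirms that z tracks the parity of the inputs seen so far.

```c
#include <stdio.h>
#include <stdbool.h>

int main(void) {
    bool z = false;                      /* z0 = 0 */
    bool a[] = { 1, 0, 1, 1, 0, 1 };     /* made-up input trace */
    bool parity = false;                 /* reference: XOR of inputs so far */
    for (int t = 0; t < 6; t++) {
        printf("t=%d a=%d z=%d parity=%d\n", t, a[t], z, parity);
        bool x = a[t] ^ z;               /* combinational: x(t) = a(t) XOR z(t) */
        z = x;                           /* D-type: z(t+1) = x(t) */
        parity ^= a[t];
    }
    return 0;
}
```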
Example: pulse shaper
Let's design a circuit that shapes an input signal into neat pulses, each lasting one clock cycle.
Note that
- long input pulses become short output pulses, synchronised to the clock.
- short pulses are lengthened to the clock period.
- output pulses can occur at most on alternate clock cycles.
- very short input pulses are ignored if they do not straddle a clock edge.
This circuit can be used to clean up the signal from a mechanical switch so as to remove contact bounce.
Some thought reveals that we can produce the output signal if we know the value of the input at the previous two clock edges. So if we arrange that always xₜ = aₜ₋₁ and yₜ = aₜ₋₂, then the output is given by
- zₜ = xₜ ∧ ¬yₜ.
We can arrange for x and y to be the signals we want by arranging two D-types, one with D = a and Q = x, and another with D = x and Q = y.
We can think of the pulse shaper as a machine for running the following program, where the pause represents a delay until the next clock edge, and the two assignments y = x; x = a; happen simultaneously and instantaneously.
x = y = 0; forever { z = x && !y; pause; y = x; x = a; }
We could emphasise the simultaneous nature of the updates by extending C a bit (more) and writing a simultaneous assignment x, y = a, x; – a form that (e.g.) Python allows.
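In ordinary C, which has no simultaneous assignment, the same program can be simulated by evaluating both right-hand sides into temporaries before assigning. A minimal sketch, with a made-up input trace:

```c
#include <stdio.h>
#include <stdbool.h>

int main(void) {
    bool x = false, y = false;               /* x = y = 0 */
    bool a[] = { 0, 1, 1, 1, 0, 1, 0, 0 };   /* made-up input trace */
    for (int t = 0; t < 8; t++) {
        bool z = x && !y;                    /* combinational output */
        printf("t=%d a=%d z=%d\n", t, a[t], z);
        /* pause: at the clock edge, y = x and x = a happen together,
           so evaluate both right-hand sides before assigning. */
        bool new_y = x, new_x = a[t];
        y = new_y; x = new_x;
    }
    return 0;
}
```

Running this on the trace above shows the long input pulse at t = 1..3 producing a single one-cycle output pulse, as required.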
Clock speed
How fast can we make the clock? It all depends on the critical path – that is, the longest combinational delay in the circuit.
Delays accumulate along each combinational path. In the diagram, time t₁ is the propagation delay of the first flip-flop; times t₂ and t₃ are the propagation delays of the two gates, and time t₄ is the setup time of the second flip-flop. For the circuit to work correctly we must set the clock period T so that T ≥ t₁ + t₂ + t₃ + t₄, and the same for all combinational paths.
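For illustration, with made-up delays t₁ = 2 ns, t₂ = t₃ = 1 ns and t₄ = 0.5 ns, we need T ≥ 4.5 ns, so the clock can run no faster than about 1/(4.5 ns) ≈ 222 MHz.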
Another timing calculation, involving contamination delays, is needed to put a bound on the amount of clock skew the circuit can tolerate.
Example: Bathroom light switch
Let's design a digital implementation of the kind of pull-cord light switch that is often found in British bathrooms. The first time you pull the cord, the light comes on. It stays on when you release the cord, but goes off when the cord is pulled for a second time. The switch has one input and one output: the output changes state just once for each pulse on the input, however long. In our digital circuit, the light will change state at the next clock edge following the change in input.
It's tempting to think that this circuit has just two states – on and off – but that's not so. The arrows on the timing diagram mark two clock edges where the light is on and the input a is 1, but in one case the light stays on, and in the other it goes off. (If you've used a switch like this, you know there are two clunks – two changes of state – for each pull of the cord.)
We could try drawing a diagram of states, but we can also see that the following program does the right thing.
x = y = 0; forever { z = y; pause; if (a) y = !x; else x = y; }
Initially, x and y are both zero. When you pull the cord, y becomes one, but x stays at zero, and this remains true however long the cord is held. When the cord is released, x is set to one also, without any change in the output. Pulling the cord for a second time does just the same thing, but with the roles of 0 and 1 swapped.
Let's rewrite the loop body so that the state change becomes a single simultaneous assignment of the form
x, y = f(x, y, a), g(x, y, a);
since that is all that makes sense if we want to implement the program using fully synchronous hardware. The meaning of this assignment is that both f(x, y, a) and g(x, y, a) are evaluated, and the values are simultaneously assigned to x and y respectively. For this we need the fact that assigning the existing value of x to itself is the same as not assigning at all.
forever { z = y; pause; x, y = (a ? x : y), (a ? !x : y); }
The conditional expressions, such as (a ? x : y), meaning "if a then x otherwise y", can be implemented in hardware with multiplexers. For simplicity, we show two separate 1-bit multiplexers, though they could share some common circuitry.
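The finished design can again be checked with a short C simulation (input trace made up): the two conditional expressions play the role of the multiplexers, and the temporaries make the assignment simultaneous.

```c
#include <stdio.h>
#include <stdbool.h>

int main(void) {
    bool x = false, y = false;                   /* x = y = 0 */
    /* made-up input: two pulls of the cord, of different lengths */
    bool a[] = { 0, 1, 1, 1, 0, 0, 1, 0, 0 };
    for (int t = 0; t < 9; t++) {
        bool z = y;                              /* the light */
        printf("t=%d a=%d z=%d\n", t, a[t], z);
        /* clock edge: the two multiplexers select the next state */
        bool new_x = a[t] ? x : y;
        bool new_y = a[t] ? !x : y;
        x = new_x; y = new_y;
    }
    return 0;
}
```

The trace shows the light coming on after the first pull, staying on however long the cord is held, and going off after the second pull.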
Questions
What is the semantics of your C-like hardware description language?
Semantics? You are taking it too seriously: it's just intended as a prompt to think about the behaviour of a sequential circuit as embodying a program.
But to answer the question, the most general form for a program that does make sense as a sequential circuit (with input a, state bits x and y, and output z) is this:
x = y = 0; forever { z = h(x, y); pause; x, y = f(x, y, a), g(x, y, a); }
That contains an assignment that sets the output as a function of the values of the state variables, and following each clock edge a simultaneous assignment that sets new values for x and y based on their old values and the input. This denotes a Moore machine in which the next-state function is ⟨f, g⟩, where
⟨f, g⟩(x, y, a) = (f(x, y, a), g(x, y, a))
and the output function is h. The "forever/pause" loop (no nesting allowed) means "at each clock edge".
Other "programs" such as this:
x = y = 0; forever { z = x; pause; if (a) x = !y; else y = x; }
are to be re-construed in the simultaneous-assignment form shown above – in this case, by writing
x, y = (a ? !y : x), (a ? y : x);
so that if a is 1, then x gets set to !y and y is unchanged in each clock cycle, and if a is 0, then x is unchanged and y gets set to x.
And we might design a shift register with y = x; x = a; meaning x, y = a, x;
Anything that serves as a stepping stone towards the equivalent simultaneous assignment is encouraged: if programs in this form help us to think of the behaviour as a program and reduce it to a simultaneous assignment, then they have served their purpose.
Aren't sequential circuits equivalent to finite-state automata?
Yes, they are: a sequential circuit containing n flip-flops has a set of states with 2ⁿ elements (not all of them necessarily accessible), and the combinational logic that computes the next state from the current state and inputs amounts to the transition function of what the Models of Computation course calls a deterministic finite automaton (DFA). Conversely, given any DFA with at most 2ⁿ states, we can number the states with binary numerals of n bits each, and the transition function of the automaton then becomes n Boolean functions, one for the next value of each bit, which we know can be implemented in combinational logic.
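To make the second construction concrete, here is an illustrative sketch (the DFA is made up for the purpose): a three-state DFA that counts the 1s in its input modulo 3, with its states numbered in two bits so that the transition function becomes two Boolean next-state functions, one per state bit.

```c
#include <stdio.h>
#include <stdbool.h>

/* Made-up DFA: count the 1s in the input modulo 3.
   States 0, 1, 2 are numbered in binary with state bits x1, x0;
   the transitions then become one Boolean function per bit. */
int main(void) {
    bool x1 = false, x0 = false;               /* state 00: count mod 3 is 0 */
    bool a[] = { 1, 1, 0, 1, 1, 1, 0 };
    for (int t = 0; t < 7; t++) {
        bool accept = !x1 && !x0;              /* output: count is a multiple of 3 */
        printf("t=%d a=%d state=%d%d accept=%d\n", t, a[t], x1, x0, accept);
        /* next-state logic: one Boolean function per state bit */
        bool n0 = a[t] ? (!x1 && !x0) : x0;    /* 00 -> 01 on a 1 */
        bool n1 = a[t] ? (x0 && !x1) : x1;     /* 01 -> 10 on a 1 */
        x1 = n1; x0 = n0;
    }
    return 0;
}
```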
This gives a systematic but somewhat soul-destroying way of designing a sequential circuit to solve a problem: we first solve the problem with a DFA, then laboriously tabulate all the state transitions that the DFA can make, choose a binary numbering of its states, and design combinational logic to implement the next-state function. The size and regularity of the combinational logic will depend on the numbering of states that is chosen in a way that is imponderable unless we can exploit a lucky insight. This approach tends to minimise the number of bits of state that are used, sometimes at the expense of seeing the problem in a regular or symmetric way, so that it can happen that the smaller number of state bits comes at the expense of vastly more complicated and obscure combinational logic.
Our approach will be different: we will rarely have to design a sequential circuit for an arbitrary finite automaton, and we need not care about minimising the number of states if using a few more states makes the circuit easier to understand. Thus, we might design a pulse-shaping circuit by observing that the output depends in a simple way on the last three (say) bits of input, and design a circuit that saves them in three flip-flops configured as a shift register. It matters little that there is another solution that uses only two flip-flops to distinguish four different states but has combinational logic that must be designed by calculation rather than insight.