Lecture 17 – Introduction to hardware (Digital Systems)
Our aim this term will be to show one way in which it is possible to build from transistors a computer capable of executing the kind of machine-language programs we wrote last term. It's important to realise that we will study only one way of doing each task, and that way will often not be the best.
- We will use transistors to build logic gates.
- We will stick to a design principle called CMOS, where each gate has two complementary networks of transistors, one to drive the output high when necessary, and the other to drive it low the rest of the time. We ignore clever implementations of some logic gates that use fewer transistors but do not follow this pattern.
- We will use logic gates and latches (also built from transistors) to design functional units, including decoders, multiplexers, registers, and adders.
- We will observe that a small selection of gate types is sufficient to implement any Boolean function, but we won't concern ourselves with methods for designing the smallest circuit that implements a specified function.
- Likewise, after observing that logic-plus-latches is sufficient to implement any finite state machine, we won't be concerned with the routines and rituals needed to design optimal implementations of specified machines, but will be content to live on our wits a bit in designing the circuit elements we need.
- Though we will show that each functional unit can be built from gates and latches, we'll largely ignore other implementations that are smaller and faster. For example, there is an implementation of ROMs that uses only one diode (or one transistor) per bit, and we mention it only in passing.
- We will use functional units to design a simple 'single-cycle' implementation of a Thumb subset.
- Because each instruction is executed in a single clock cycle, we won't attempt to implement those instructions, such as multi-register `push` and `pop`, that clearly require more than one cycle of activity.
- And because the whole of the execution of an instruction happens within a clock cycle, the implication will be that a clock cycle for our design will be rather long, so the clock frequency will have to be low. More practical designs would overlap the execution of one instruction with fetching and decoding the next one, in a scheme called pipelining. We will not have time to go into that.
Logic gates
A combinational circuit has inputs and outputs that are Boolean values, drawn from {0, 1} = {false, true}; the outputs depend only on the current values of the inputs. (Contrast this with sequential circuits, where the outputs also depend on inputs in the past, and clairvoyant circuits, where the outputs depend on inputs that will arrive in the future.) In electronic logic, we represent 0 with a voltage close to ground, and 1 with a voltage close to the positive supply rail.
A logic gate is a combinational element that computes a single (simple) function of its inputs.
For example, an AND gate has z = a ∧ b: the behaviour can be spelled out with a truth table, giving the output for each possible combination of inputs.
AND

| a | b | z |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
As we'll see later, logic gates like this are simple enough that we can implement them with a handful of transistors. We can connect gates together to compute more elaborate functions. For example, consider this assembly of AND gates:
We can deduce the behaviour of this circuit by drawing up a truth table, listing the 16 possibilities for the four inputs, and for each of them showing the intermediate signals x and y and the output z.
| a | b | c | d | x | y | z |
|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 1 | 0 | 1 | 0 |
| 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| 0 | 1 | 0 | 1 | 0 | 0 | 0 |
| 0 | 1 | 1 | 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 1 | 0 | 1 | 0 |
| 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 0 | 1 | 0 | 0 | 0 |
| 1 | 0 | 1 | 0 | 0 | 0 | 0 |
| 1 | 0 | 1 | 1 | 0 | 1 | 0 |
| 1 | 1 | 0 | 0 | 1 | 0 | 0 |
| 1 | 1 | 0 | 1 | 1 | 0 | 0 |
| 1 | 1 | 1 | 0 | 1 | 0 | 0 |
| 1 | 1 | 1 | 1 | 1 | 1 | 1 |
It's only if all four inputs are 1 that the output is 1: we have made a 4-input AND gate from three 2-input gates. The behaviour of any acyclic assembly of gates can be deduced in a similar way.
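The same deduction can be carried out mechanically. Here is a small Python sketch (not part of the notes) that simulates the three-gate circuit, with x = a ∧ b and y = c ∧ d, over all 16 input combinations:

```python
# A sketch: deduce the truth table of the three-gate circuit by
# trying all 16 combinations of the four inputs.
from itertools import product

def and2(p, q):
    """A 2-input AND gate."""
    return p & q

rows = []
for a, b, c, d in product((0, 1), repeat=4):
    x = and2(a, b)        # first gate
    y = and2(c, d)        # second gate
    z = and2(x, y)        # third gate combines the two
    rows.append((a, b, c, d, x, y, z))
    # The assembly agrees with a single 4-input AND everywhere:
    assert z == (a and b and c and d)

print(rows[-1])  # (1, 1, 1, 1, 1, 1, 1): z is 1 only on the last line
```

The assertion inside the loop checks, line by line, that the three 2-input gates really do compute a 4-input AND.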
Propagation delay
What about this circuit?
We can check – by reasoning or another truth table – that it also functions as a 4-input AND gate. But it performs less well. After the inputs of a gate become stable, it takes some time before the output also takes on a stable value. These delays add up along paths through the circuit, and it is the longest path that matters in determining the overall delay. So we prefer circuits with a small depth, so that all the paths are short.
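The difference in depth can be made concrete with a toy timing model (my assumption: every gate contributes one unit of delay, and all primary inputs are stable at time 0):

```python
# A sketch: a signal stabilises one gate delay after the latest of
# the gate's inputs does, so the longest path sets the overall delay.

def stable_time(*input_times, delay=1):
    """Time at which a gate's output stabilises."""
    return max(input_times) + delay

# Balanced tree: z = (a AND b) AND (c AND d)
x = stable_time(0, 0)          # a, b
y = stable_time(0, 0)          # c, d
z_tree = stable_time(x, y)
print(z_tree)                  # 2: longest path is two gates

# Chain: z = ((a AND b) AND c) AND d
p = stable_time(0, 0)
q = stable_time(p, 0)
z_chain = stable_time(q, 0)
print(z_chain)                 # 3: longest path is three gates
```

Both circuits compute the same function, but the chain's output settles one gate delay later, which is why the shallower tree is preferred.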
Other gates
AND gates on their own can't do much, but there are other types we can use. Also shown in the picture above is an OR gate, z = a ∨ b, with this truth table:
OR

| a | b | z |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 1 |
An inverter (NOT gate) outputs the opposite of its single input, z = ¬a, with the truth table:
NOT

| a | z |
|---|---|
| 0 | 1 |
| 1 | 0 |
We can connect such gates together in any acyclic arrangement we choose, and each such circuit computes a Boolean function that we can also express as an algebraic formula in terms of the inputs. For example, this circuit computes the function
- z = ¬((a ∧ b) ∨ c).
As a truth table, writing d = a ∧ b and e = (a ∧ b) ∨ c for the intermediate signals:
| a | b | c | d | e | z |
|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 1 |
| 0 | 0 | 1 | 0 | 1 | 0 |
| 0 | 1 | 0 | 0 | 0 | 1 |
| 0 | 1 | 1 | 0 | 1 | 0 |
| 1 | 0 | 0 | 0 | 0 | 1 |
| 1 | 0 | 1 | 0 | 1 | 0 |
| 1 | 1 | 0 | 1 | 1 | 0 |
| 1 | 1 | 1 | 1 | 1 | 0 |
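This table, too, can be generated by simulation. A Python sketch (mine, not from the notes), recording the intermediate signals d and e along with the output:

```python
# A sketch: tabulate z = NOT((a AND b) OR c) for all eight inputs.
from itertools import product

table = []
for a, b, c in product((0, 1), repeat=3):
    d = a & b          # AND gate
    e = d | c          # OR gate
    z = 1 - e          # inverter
    table.append((a, b, c, d, e, z))

for row in table:
    print(*row)
```

Reading off the rows confirms that z is 1 exactly when c is 0 and a, b are not both 1.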
Adequacy
It turns out that AND, OR and NOT are sufficient to implement any Boolean function. I will demonstrate this with an example, and provide a separate note with a general proof – perhaps better digested after you have made some progress in Formal Proof. So let's take f(a, b, c) to be this 'majority' function (which we will find a use for later in the course):
MAJ

| a | b | c | f |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 0 |
| 0 | 1 | 0 | 0 |
| 0 | 1 | 1 | 1 |
| 1 | 0 | 0 | 0 |
| 1 | 0 | 1 | 1 |
| 1 | 1 | 0 | 1 |
| 1 | 1 | 1 | 1 |
There are four lines in the truth table where the function takes value 1, and we can write a 'product term' for each of these lines that is true on that line and false everywhere else.
- p = ¬a ∧ b ∧ c
- q = a ∧ ¬b ∧ c
- r = a ∧ b ∧ ¬c
- s = a ∧ b ∧ c.
| a | b | c | p | q | r | s | f |
|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 |
| 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 1 | 0 | 1 | 0 | 0 | 1 |
| 1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 |
| 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 |
We can get a formula for f by taking the disjunction (sum) of these four terms.
f = (¬a ∧ b ∧ c) ∨ (a ∧ ¬b ∧ c) ∨ (a ∧ b ∧ ¬c) ∨ (a ∧ b ∧ c).
That formula agrees with f on every line of the truth table: on lines where f is 1, exactly one of the terms – the one written for that line – is true, so ORing all the terms together gives 1. On lines where f is 0, all of the terms are false, so ORing them together gives 0.
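The whole procedure is mechanical enough to express as a short program. In this Python sketch (the helper names are mine), each product term is represented by the line of the truth table it was written for, and evaluating a term amounts to checking that every input matches that line – which is exactly what its ANDed literals compute:

```python
# A sketch of the sum-of-products construction: given any Boolean
# function f of n inputs, collect one product term per line on which
# f is 1, and OR the terms together.
from itertools import product

def sum_of_products(f, n):
    """Return a function equivalent to f, built from AND, OR and NOT."""
    minterms = [bits for bits in product((0, 1), repeat=n) if f(*bits)]
    def g(*inputs):
        # A product term is true exactly when the inputs match its line.
        return int(any(all(x == m for x, m in zip(inputs, term))
                       for term in minterms))
    return g

def maj(a, b, c):
    """The majority function from the notes."""
    return int(a + b + c >= 2)

g = sum_of_products(maj, 3)
assert all(g(*bits) == maj(*bits) for bits in product((0, 1), repeat=3))
```

For the majority function, `minterms` comes out as the four lines 011, 101, 110 and 111 – the lines for which we wrote the terms p, q, r and s.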
It should be evident that we could go through a similar procedure for any desired function f, writing a Boolean formula that agrees with f for all inputs. Given such a formula, we can draw a circuit containing AND, OR and NOT gates in a routine way, such as this circuit for the majority function, using four 3-input AND gates and a 4-input OR gate:
This procedure gives a circuit that implements any desired Boolean function, but often there is a simpler, equivalent formula, leading to a simpler circuit. In the case of the majority function, we find
f = (a ∧ b) ∨ (a ∧ c) ∨ (b ∧ c).
To verify that this formula is also correct, we can make a truth table for it, observing that each of the three terms is true only if f is true also, and that together they cover all the situations in which f is true. Alternatively, we can simplify the original expression algebraically to get the same result, first using idempotence (X = X ∨ X) to repeat the term a ∧ b ∧ c so that it can be paired with each of the other three.
f = (¬a ∧ b ∧ c) ∨ (a ∧ b ∧ c)
∨ (a ∧ ¬b ∧ c) ∨ (a ∧ b ∧ c)
∨ (a ∧ b ∧ ¬c) ∨ (a ∧ b ∧ c)
= ((¬a ∨ a) ∧ b ∧ c)
∨ (a ∧ (¬b ∨ b) ∧ c)
∨ (a ∧ b ∧ (¬c ∨ c))
= (b ∧ c) ∨ (a ∧ c) ∨ (a ∧ b)
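If algebra feels risky, an exhaustive check over all eight input combinations settles the matter. A quick sketch (not part of the notes):

```python
# A sketch: check that the simplified formula for the majority
# function agrees with the original sum of products on every input.
from itertools import product

for a, b, c in product((0, 1), repeat=3):
    original = (((not a) and b and c) or (a and (not b) and c)
                or (a and b and (not c)) or (a and b and c))
    simplified = (a and b) or (a and c) or (b and c)
    assert bool(original) == bool(simplified)
```

Eight cases is few enough that this kind of brute-force verification is entirely practical.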
There are routine, manual procedures that take a Boolean function with a small number of variables and find its minimal expression as a 'sum of products', but we don't have time to study them in this course. Such procedures are hard to use if there are more than 4 inputs to the circuit, and are in any case rendered irrelevant by computer methods that you can study in the course Computer-Aided Formal Verification. Comforting though rituals are, in this course it will be enough for us to rely on insight, enthusiasm and good luck.
Added in the lecture
1. Multiplexer: a combinational circuit implementing z = (¬c ? a : b) by
- z = (¬c ∧ a) ∨ (c ∧ b)
2. Input combinations as vertices of a (hyper)cube with terms as edges or faces. Finding a covering set of prime implicants for the majority example using the cube, and using a Karnaugh map. (Not examinable.)
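The multiplexer identity in item 1 can also be checked exhaustively. A sketch (mine): the formula z = (¬c ∧ a) ∨ (c ∧ b) should behave as "a if c is 0, else b" for every combination of inputs:

```python
# A sketch: verify the multiplexer formula against its intended
# behaviour, z = (NOT c ? a : b), over all eight input combinations.
from itertools import product

for a, b, c in product((0, 1), repeat=3):
    z = ((1 - c) & a) | (c & b)     # (NOT c AND a) OR (c AND b)
    assert z == (a if c == 0 else b)
```

When c is 0 the first term passes a through and the second is 0; when c is 1 the roles swap, so exactly one of the two AND gates is "open" at a time.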