Lecture 12 – Introducing Phōs (Digital Systems)

From Spivey's Corner

Our best attempt so far at a prime-printing program uses interrupts to overlap the search for more primes with printing the primes it has found, but when the buffer of output characters is full, it still falls back on the tactic of waiting in a loop for space to become available. If the program is producing primes at a faster rate than the UART can print them, then there's really nothing else we can do.

But perhaps there are other tasks that the machine needs to do – such as showing something on the display at the same time as computing primes. If that's so, then we will want to move away from a structure where there is a single main program and a collection of interrupt handlers that are called from time to time. Unless the other functions of the program can be implemented entirely in interrupt handlers, we need more than one 'main program' to share the processor somehow.

Also: the printer driver disables interrupts in order to manipulate a data structure (the circular buffer) that is shared between the interrupt handler and the function serial_putc. It's tricky to get this right, and disabling interrupts for more than a brief moment risks delaying other things (like multiplexing the display) that ought to happen promptly.
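The shared circular buffer at the heart of the driver can be sketched in portable C. This is an illustrative sketch, not the actual driver code, and the names (txbuf, buf_put, buf_get) are assumptions; on the real board, each of these operations would run with interrupts disabled to keep the handler and serial_putc from interfering with each other:

```c
#define NBUF 64                 /* buffer capacity */

static char txbuf[NBUF];        /* characters waiting to be sent */
static unsigned head = 0;       /* index of next character to remove */
static unsigned tail = 0;       /* index of next free slot */
static unsigned count = 0;      /* number of characters in the buffer */

/* buf_put -- add a character; returns 0 if the buffer is full.
   In the driver, serial_putc would call this with interrupts disabled. */
int buf_put(char c) {
    if (count == NBUF) return 0;
    txbuf[tail] = c;
    tail = (tail + 1) % NBUF;
    count++;
    return 1;
}

/* buf_get -- remove a character; returns -1 if the buffer is empty.
   In the driver, the UART interrupt handler would call this. */
int buf_get(void) {
    if (count == 0) return -1;
    char c = txbuf[head];
    head = (head + 1) % NBUF;
    count--;
    return c;
}
```

The head and tail indices chase each other around the array, and the count field is what both sides must read and update consistently; that shared update is exactly why the real driver briefly disables interrupts.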

All this means that we are ready to begin using a kind of operating system – a process scheduler. We'll be using a very simple embedded operating system that I've named Phōs, based on the idea of a fixed set of concurrent processes that communicate with each other by sending messages. The design is based on the internal structure of Andy Tanenbaum's operating system Minix, an implementation of Unix that is a predecessor of Linux.

Notes: Minix uses message-passing internally, but the interface it presents to user processes is Unix, with fork and exit, signals and pipes. Although Linux arose out of Minix, its internal structure is entirely different. All modern Intel machines run a version of Minix on an internal system-management processor, making it arguably the world's most used operating system – except possibly for the L4 microkernel that runs on the other, secret processor in your mobile phone.

There's a page with a brief programmer's manual for Phōs.


A Phōs program can contain multiple processes that all make progress as the program runs. We can imagine that the processes run simultaneously – but on a uniprocessor machine, this illusion is maintained by interleaving them: we say that they run concurrently, making progress at different rates.

For example, we could start to design a program that both shows a beating heart and prints primes on the serial port. These two activities are independent of each other, and we would like to write two routines that describe the two activities. Here's the natural way of finding and printing the primes:

static void prime_task(int arg) {
    int n = 2, count = 0;

    while (1) {
        if (prime(n)) {
            count++;
            serial_printf("prime(%d) = %d\n", count, n);
        }
        n++;
    }
}

And here's the natural way of writing the heart process:

static void heart_task(int arg) {
    GPIO_DIRSET = 0xfff0;

    while (1) {
        show(heart, 70);
        show(small, 10);
        show(heart, 10);
        show(small, 10);
    }
}
(Note that each of these functions takes an integer parameter arg that it ignores.) The idea is that the one processor on the board will run these two programs in turn, stopping each of them when they need to wait for something and running the other one for a while.

Many operating systems multiplex the processes in a way that's based on time-slices: the processor has a clock that measures actual time, and it lets each process in turn run for 50 milliseconds (say) before moving on to the next one. At least initially, we will do without this idea, and run each process until it voluntarily gives up the processor (this will change very soon when we consider how interrupts are handled). There's a subroutine yield() that signals to the operating system that the current process is ready to let others have a go, and (if nothing else happens) the operating system will arrange the running processes in a circle, giving each a chance to run until it yields, then moving on to the next. There's no guarantee that each process gets the same fraction of the processor time, just as there's no guarantee when you pass round a bag of sweets that each person will take just one. Actually, as we'll see, the processes we write will usually run for only a short time before needing to wait for something, and at that stage we can assign the processor to a different task – so yield() will not often be needed.
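The round-robin idea can be simulated in ordinary C. This is only an illustrative sketch, not Phōs's actual scheduler: each 'process' here is a function that does one slice of work and then returns, with the return playing the part of yield(), and the loop plays the part of the operating system moving round the circle.

```c
#define NPROCS 2
#define NSTEPS 6

static char trace[NSTEPS + 1];   /* record of which process ran at each step */
static int pos = 0;

/* Each step function does one slice of work, then 'yields' by returning. */
static void heart_step(void) { trace[pos++] = 'H'; }
static void prime_step(void) { trace[pos++] = 'P'; }

/* The circle of processes, as an array of function pointers. */
static void (*procs[NPROCS])(void) = { heart_step, prime_step };

/* run_round_robin -- give each process a turn in a fixed circle,
   returning the interleaved trace of execution. */
const char *run_round_robin(void) {
    for (int i = 0; i < NSTEPS; i++)
        procs[i % NPROCS]();
    trace[pos] = '\0';
    return trace;
}
```

The trace comes out strictly alternating because each step yields after a fixed amount of work; real processes yield (or block) at irregular points, so the interleaving is generally uneven.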

We can already notice that both the processes we have written call subroutines: prime_task calls prime, and that will call a subroutine to perform integer division, and it calls serial_printf, which in turn calls a version of serial_putc. Equally, heart_task calls show, which presumably sets a pattern on the LEDs and then delays for a time. So if these two processes are going to run concurrently, one single subroutine stack at the top of memory isn't going to be enough. In fact, a vital part of the implementation of the operating system will be to provide a separate stack for each process, and switching between processes will involve resetting the stack pointer so that each process uses its own stack. The one original stack will remain in existence, and will be used by the operating system itself.

Phōs supports programs that contain a fixed number of processes, all created when the program starts and usually all running forever. The function init that, up to now, has been the 'main program' in each application, will now become very simple: it creates a number of processes then returns, and it is after init has returned that the real work begins: Phōs takes the processes that have been created and starts to run them in turn, and that is the entire work of the program. You may think of your program as having a 'main' process if you like, but Phōs doesn't, and treats all processes alike, running them when they are ready and letting them wait when they are not. Here's the init function for the heart–primes application (edited):

void init(void) {
    start(SERIAL, "Serial", serial_task, 0, STACK);
    start(TIMER, "Timer", timer_task, 0, STACK);
    start(HEART, "Heart", heart_task, 0, STACK);
    start(PRIME, "Prime", prime_task, 0, STACK);
}

As you can see, this calls start four times to start four processes – one each for the heart and the primes activities, and also two other processes that (as we'll see later) look after the UART and a timer. Each process has a small integer id – SERIAL, TIMER, ... – that (as we'll see later) is used to identify it when sending or receiving messages. It has a name, used only for debugging, a function that's called as the body of the process, an argument (0 in each case) that's passed to that function, and a set amount of stack space. The constant STACK provides a decent default of 1kB for processes that don't call any deeply recursive functions. Phōs will let us measure the amount of stack space actually used by a process so that we can trim these values later.

Simplicity: there is a fixed number of processes, all created before concurrent execution begins. Scheduling is voluntary and (apart from interrupts) non-preemptive.


Bigger operating systems do a lot more than simply manage a collection of concurrent processes, valuable though that function is. Typically, an operating system will support ...
  • Processes, with a time-based scheduling scheme that supports dynamic priorities, so each process gets a fair share of the CPU in the medium term.
  • Memory management with protection, so one process cannot read or write the areas of memory assigned to another, and processes that are idle can be (partially) stored on disk to save RAM space.
  • I/O devices, so different kinds of disk present a uniform interface for programming, and programs can get ASCII characters from a keyboard rather than hardware-dependent scan codes.
  • A file system, so the fixed blocks of storage provided by the disks can be organised into larger files, arranged in a hierarchical structure of directories.
  • Networking, so client programs can be written in terms of connections between processes on different machines, not in terms of individual network packets.

Phōs provides almost none of these.

And installing an operating system typically means installing a collection of utility programs, shared libraries, a GUI environment, and other things that are not properly part of the operating system itself.


So far, we've seen how to create concurrent processes that run independently of each other. Things get much more interesting if we allow the processes to communicate, so that they can work together. We've already had a hint that hardware devices will be looked after by their own driver processes like serial_task and timer_task. But let's start easily by making a pair of ordinary processes that talk to each other by sending and receiving messages. One will generate a stream of primes, and the other will format and print them – or even better, let's make the second process find out how many primes are less than 1000, 2000, ... It's no accident that the structure of this program is reminiscent of Haskell programs, like map show primes, that work with lazy streams.

Here's the code for a process that generates primes:

void prime_task(int arg) {
    int n = 2;
    message m;

    while (1) {
        if (prime(n)) {
            m.m_type = PRIME;
            m.m_i1 = n;
            send(USEPRIME, &m);
        }
        n++;
    }
}

The process tests each number, starting at 2, and if a number is prime it sends a message to another process with the id USEPRIME. To send a message, a process needs a message buffer of type message. It fills in the m_type field of the message with a small integer, in this case the constant PRIME, and optionally fills in one or more data fields with additional information. In this case, we put the prime itself in the integer field m_i1 and leave other fields like m_i2 unset. Then there's a call to send, naming the recipient of the message.

Here's code for another process that will run under the id USEPRIME.

void summary_task(int arg) {
    int count = 0, limit = arg;
    message m;

    while (1) {
        receive(ANY, &m);
        assert(m.m_sender == GENPRIME && m.m_type == PRIME);

        while (m.m_i1 >= limit) {
            serial_printf("There are %d primes less than %d\n",
                          count, limit);
            limit += arg;
        }

        count++;
    }
}


void init(void) {
    start(GENPRIME, "GenPrime", prime_task, 0, STACK);
    start(USEPRIME, "UsePrime", summary_task, 1000, STACK);
}

(For the sake of the example, I've made use of the integer argument arg to set the interval between lines of output. More commonly, the argument is used to allow multiple processes that run the same code, but behave slightly differently.)

As you can see, this receives messages from the GENPRIME process shown above. Again we need a message buffer to contain the message that we have received. We can specify in the receive call who it is that must send the message, or we can write the special value ANY to allow a message from any source. (Processes that provide a service can use ANY to allow requests from any client.) When the message arrives, it is stamped by the postman with the identity of the sender (so that we could reply to the same process), and the m_type and m_i1 (etc.) fields are the same as were set by the sender. So in the example, successive messages will have m_i1 fields that are successive primes in ascending order. The process counts how many are less than each multiple of its argument, and prints a summary on the serial output.

When a process tries to receive a message from another, naturally enough it must wait until the sender is ready to send a message. But for simplicity, Phōs does a complementary thing in the other direction – when a process wants to send, it must wait until the other process wants to receive. Messages are not buffered, there is no 'mailbox' where they can sit until they are collected: instead, a message is passed from the hand of the sender directly to the hand of the receiver. If you want a different behaviour, it's possible (and easy) to program it, putting a buffer process between the sender and the receiver that can receive messages from one, store them, and pass them on to the other when it is ready for them.

If a process wants to send or receive and its counterpart is not ready, then the process cannot run any more, and the operating system picks another process to run. This explains why yield() is rarely needed: if a program contains multiple processes that are constantly communicating with each other, there are plenty of points where the scheduler can switch from one process to another. After a message has been passed from sender to receiver, both are ready to continue, and the scheduler can pick either of them, or a completely different process, to run next.

Apart from the direction of information flow, there is another asymmetry between send and receive, in that a call to send must specify which process is to receive the message, whereas a call to receive can specify ANY and allow messages from anywhere. If the message is a request for something, say the current time, then it's common to write client = m.m_sender and later send(client, &reply) to send back the result of the request.
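This request–reply pattern can be sketched with ordinary C structures. The message layout below follows the fields used in this lecture, but serve_request is an assumed name invented for the example, not part of Phōs, and the doubling service is arbitrary; the point is that the server copies the sender's id out of the request before building the reply.

```c
typedef struct {
    int m_sender;      /* stamped by the system: who sent the message */
    int m_type;        /* what kind of message this is */
    int m_i1;          /* first integer data field */
} message;

#define REQUEST 1
#define REPLY   2

/* serve_request -- handle one request by doubling its argument,
   returning the client id the reply should be addressed to. */
int serve_request(const message *req, message *reply) {
    int client = req->m_sender;   /* remember who to reply to */
    reply->m_type = REPLY;
    reply->m_i1 = 2 * req->m_i1;
    return client;                /* in Phōs: send(client, reply) */
}
```

In a real Phōs server, this logic would sit inside a loop of the form receive(ANY, &m); ... send(client, &reply);, so that requests from any client are accepted and each reply goes back to whoever asked.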

Simplicity: messages have a fixed format for the whole system. Message passing is synchronous: if the receiver is not waiting to receive, then the sender must wait.


UART (Universal Asynchronous Receiver/Transmitter). A peripheral interface that is able to send and receive characters serially, commonly used in the past for communication between a computer and an attached terminal. It is commonly used in duplex mode, with the transmitter of one device connected to the receiver of the other with one wire, and the receiver of the one connected to the transmitter of the other with a different wire. The asynchronous part of the name refers to the fact that the transmitter and receiver on each wire do not share a common clock, but rely instead on the signalling protocol and precise timing to achieve synchronisation.

GPIO (General-Purpose Input/Output). A peripheral interface that provides direct access to pins of the microcontroller chip. Pins may be configured as inputs or outputs, and interrupts may be associated with state changes on certain input pins. On the micro:bit, the LEDs and pushbuttons are connected to GPIO pins.

Stack pointer. A register sp that holds the address of the most recently occupied word of the subroutine stack. On ARM, as on most recent processors, the subroutine stack grows downwards, so that the sp holds the lowest address of any occupied word on the stack.