Intro to Computer Systems

Chapter 11: Systems and Software

Operating Systems in Practice

The general relationship between a computer's operating system software and the underlying hardware can be broadly summarised in the following diagram:

The interconnections between operating system software and computer hardware.

While all operating systems provide these functions, different ways to structure the O/S have been tried.

Operating System Structure Considerations

There are several major influences on the design of an operating system's internal structure:

Microsoft's Windows operating system is something of an extreme case in backwards compatibility, which led to serious maintainability issues with its legacy "Win32" API.

Raymond Chen, a senior Microsoft software engineer, wrote a blog and a book about the reality (and occasional absurdity) of guaranteeing backwards compatibility across decades of software evolution.

The Inevitable Tradeoffs...

Operating system design involves trade-offs between competing requirements; for example, a micro-kernel (discussed soon) can be demonstrated to be a superior architecture for a variety of reasons:

However, the structure has a price in the number of transitions between user mode and kernel mode, which imposes a significant performance overhead.
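A rough sense of that penalty can be had from user space. The sketch below (Python, illustrative only; absolute numbers vary widely by OS, hardware, and interpreter) times a call that enters the kernel against one that stays entirely in user mode:

```python
import os
import timeit

def pure_python():
    """A call that never leaves user mode."""
    return 42

N = 200_000

# os.getpid() enters the kernel via a system call on each invocation,
# so its cost includes the user->kernel->user transitions.
syscall_time = timeit.timeit(os.getpid, number=N)
user_time = timeit.timeit(pure_python, number=N)

print(f"{N} kernel-entering calls: {syscall_time:.4f}s")
print(f"{N} user-mode calls:       {user_time:.4f}s")
```

On most systems the system-call loop takes noticeably longer, and much of the gap is mode-transition and kernel-entry overhead rather than useful work.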

Another example of a trade-off is in Microsoft's Windows 3.1. This operating system (like many before and after it) had a variety of internal and external function interfaces. In theory, each module only called the "official" interfaces of the other modules. But, for performance reasons, there were many places where internal, unpublished interfaces were called directly.

This broke encapsulation and made the code much more difficult to maintain - but the code did run faster. The price of this performance was a significant decrease in maintainability, as the unofficial interfaces were eventually discovered and used by other application vendors in search of extra performance.
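The hazard generalises to any codebase. A minimal sketch (Python, with invented names) of the difference between a published interface and an internal one:

```python
class DiskDriver:
    """Hypothetical module exposing one official interface."""

    def read_block(self, n):
        # Published API: validates its input, then delegates.
        if n < 0:
            raise ValueError("bad block number")
        return self._raw_read(n)

    def _raw_read(self, n):
        # Internal helper: skips validation and may change at any time.
        return b"\x00" * 512


driver = DiskDriver()
driver.read_block(3)    # the supported path
driver._raw_read(-1)    # slightly "faster", but couples the caller to
                        # internals that the maintainer is free to break
```

Once outside callers depend on `_raw_read`, the maintainer can no longer change it freely - exactly the trap the unpublished Windows 3.1 interfaces created.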

Operating System Architectures

Monolithic Architecture

The monolithic structure is what the name implies - a single discrete code module that contains the entire operating system functionality. At its worst it is basically just one big "lump", where levels of functionality are not well separated and there are no clearly defined internal interfaces.

There is no encapsulation of functionality or data, and this makes it difficult to debug, extend or maintain. An example of such a "bad" monolithic operating system is MS-DOS.

In reality MS-DOS wasn't completely monolithic: the DOS kernel was split into two pieces (called io.sys and msdos.sys) and was extensible. Nearly every virus that attacked DOS took advantage of that extensibility.

The root cause of that type of vulnerability was that there was no protection system built in; the ability to extend the kernel was openly available to anything.

A best-case monolithic structure has clearly defined interfaces, with good encapsulation between individual sub-systems. This is achieved through modular code design; however, as the compiled code is still one large executable file, it is still considered monolithic. (This "one large executable" means that changing any portion of the base OS requires recompiling the entire kernel.) Most Unix or Linux kernels were built on this design.

A diagram of the Linux kernel.

Early Linux kernels were mostly monolithic, but contemporary development has been modular.

Modules are sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. They are typically used to implement device drivers, networking protocols, and the like.

Commercial Unix in the mid-1980s was distributed with pre-compiled driver modules, and users ran the linker to install the modules they needed for their hardware.

But... doesn’t Linux use modules? The diagram shows them...

Yes, but the modules are inserted into the running kernel and execute in kernel mode.

One might not normally draw the modules below the kernel, but they appear there in this diagram because the most common use case for modules is device drivers; as the primary hardware-software interface, one could argue these belong below the kernel.

The key point about Linux modules is that they provide support for hardware that is not universally present, or for particular protocols that are not universally required. Making this code a module means that it is only loaded if it is required. This reduces the memory footprint of the kernel.
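Python's import machinery offers a loose user-space analogy for this on-demand loading: a module costs no memory until something actually asks for it. (Here `colorsys` merely stands in for a rarely needed driver.)

```python
import importlib
import sys

name = "colorsys"  # stands in for a rarely-needed "driver"

print(name in sys.modules)           # typically False at interpreter startup
mod = importlib.import_module(name)  # load on demand, much like modprobe
print(name in sys.modules)           # True: the code now occupies memory
```

As with kernel modules, code that is never requested is never loaded, keeping the resident footprint small.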

Layered Structure

The layered structure is an exact implementation of the simplified diagrams of operating systems you often see (including those in earlier subtopics of this chapter). The operating system itself is divided into layers, where each layer implements functions based only on the layer below it.
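A toy sketch of the discipline (Python, with invented layer names): each layer holds a reference only to the layer directly beneath it, so a request descends one level at a time.

```python
class Hardware:
    """Bottom layer: the raw device."""
    def read_sector(self, n):
        return f"raw-bytes-of-sector-{n}"

class Kernel:
    """Middle layer: sees only the hardware layer below it."""
    def __init__(self, hw):
        self._hw = hw
    def read_file_block(self, block):
        return self._hw.read_sector(block)

class SystemCallAPI:
    """Top layer: sees only the kernel layer below it."""
    def __init__(self, kernel):
        self._kernel = kernel
    def read(self, block):
        return self._kernel.read_file_block(block)

api = SystemCallAPI(Kernel(Hardware()))
result = api.read(7)   # the request descends layer by layer
print(result)
```

Because each class touches only the one beneath it, any layer can be reimplemented without disturbing the layers above - the central promise of the layered design.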


Micro-kernel Structure

The micro-kernel structure takes aspects of both approaches into account:

The security implications of this are interesting – since, for example, I/O is happening in user space there is less danger of a security breach, but there is a tradeoff:

An I/O request will need to go from the user process in user space to the kernel in kernel space – then from kernel space to the I/O code in user space. And the completion message will need to come back over the same path.

This design is brilliantly clean on a theoretical level, but when it comes to actually implementing it there are problems. Something basic like a call from a user program to a device requires the following:

There is a performance penalty for each transition between kernel and user.
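The round trip can be modelled as a toy message-passing sequence (Python, illustrative only) that simply counts the mode transitions a single I/O request incurs:

```python
# Toy model of a micro-kernel I/O request: the kernel only routes
# messages, while the driver runs as a user-mode server.

transitions = []

def enter_kernel(msg):
    transitions.append("user (caller) -> kernel")
    return route(msg)

def route(msg):
    transitions.append("kernel -> user (driver server)")
    reply = driver_server(msg)
    transitions.append("user (driver server) -> kernel")
    transitions.append("kernel -> user (caller)")
    return reply

def driver_server(msg):
    # In a real micro-kernel this is a separate user-space process.
    return f"done: {msg}"

result = enter_kernel("read disk block 7")
print(result)
print(len(transitions), "mode transitions for one request:", transitions)
```

Four transitions for one request, where a monolithic kernel would need only two (in and out) - each extra crossing is pure overhead.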

The management of different devices and services is clearly separated into different modules.

There is a potential advantage in the hiding and encapsulation that can be implemented in a modular O/S – but the tradeoff is often speed. Modern O/S designs that use micro-kernels often simply throw hardware at the problem.

More dangerously, what can happen is that developers circumvent the encapsulation and go directly to functions and structures that are not intended for outside consumption - in the long run this will compromise stability.

Actual System Architectures

Apple's MacOS X

OS X uses a kernel based on the Mach micro-kernel from Carnegie Mellon University, and on FreeBSD.

Apple MacOS X architecture diagram.

The Mach kernel provides:

The BSD portion of the system provides:

The I/O Kit (in the Mach kernel) is designed to provide a device-independent interface to higher levels of the system. This low-level I/O support enables:

There is no question that OS X is using the Mach microkernel, but a lot of the structure runs in protected kernel mode. Microkernel theory states that very little should be in kernel mode and most should run in user mode.

So, is OS X a “true” micro-kernel?

What might be the reasons that it is or is not?

Windows NT

Windows NT (the underlying structure behind modern Windows) claims to have a micro-kernel structure...

Microsoft Windows NT architecture diagram.

...but like OS X it has a lot more in kernel mode than a "true" micro-kernel would. For example, the Win32 GDI (the graphics subsystem) was put into kernel mode along with the various managers to solve performance problems.


Google's Android

Google's Android mobile OS is based on Linux, with many of the Android-specific features built atop it. In this arrangement, Linux provides an abstraction of the underlying hardware, leaving the unique Android portion of the OS relatively hardware-independent: although most Android hardware uses processors based on the ARM architecture, some products use low-power x86 CPUs.

Google Android architecture diagram.

The Linux Kernel layer is based on a long-term-support development branch of the Linux kernel (currently version 3.4), and provides primitives such as security, memory management, networking, and process management. It implements a multi-user system in which each application has a unique user ID and runs as a separate process.

The Libraries include the display subsystem (for 2D and 3D graphics), support for various audio and video formats, a basic database, an HTML rendering engine, and other functionality. These libraries are coded in C or C++, compiled for the native machine, and exposed to developers through the application framework.

The Android Runtime is the Dalvik Virtual Machine, a Java-like virtual machine optimised for mobile devices, with features such as just-in-time compilation. Each application runs in its own instance of the Dalvik VM.

The Application Framework is a set of Java services that expose the system libraries to the developers who write the Applications. Unlike the libraries, both the application framework and end-user applications are written in Java and sit atop the virtual machine for platform independence.