Kernel (computer science)

In computer science, the kernel is the fundamental part of an operating system. It is a piece of software responsible for providing secure access to the machine's hardware to various computer programs. Since there are many programs, and access to the hardware is limited, the kernel is also responsible for deciding when and how long a program should be able to make use of a piece of hardware, which is called multiplexing. Accessing the hardware directly can be very complex, so kernels usually implement some hardware abstractions to hide complexity and provide a clean and uniform interface to the underlying hardware, which helps application programmers.
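These two ideas, multiplexing and hardware abstraction, can be illustrated with a toy simulation in Python. All the class and program names below are invented for illustration; no real kernel is remotely this simple.

```python
# Toy illustration: a "kernel" that multiplexes one device among
# several programs and hides device differences behind one interface.

class Device:
    """The uniform interface the kernel exposes to programs."""
    def write(self, data):
        raise NotImplementedError

class SerialPort(Device):
    def __init__(self):
        self.log = []
    def write(self, data):
        self.log.append(("serial", data))

class LinePrinter(Device):
    """A second, different device behind the same interface."""
    def __init__(self):
        self.log = []
    def write(self, data):
        self.log.append(("printer", data))

class ToyKernel:
    """Grants device access to one program at a time (multiplexing)."""
    def __init__(self, device):
        self.device = device
        self.owner = None
    def acquire(self, program):
        if self.owner is None:
            self.owner = program
            return True
        return False          # device busy: another program holds it
    def release(self, program):
        if self.owner == program:
            self.owner = None
    def write(self, program, data):
        if self.owner != program:
            raise PermissionError("program does not hold the device")
        self.device.write(data)

kernel = ToyKernel(SerialPort())
kernel.acquire("prog_a")
kernel.write("prog_a", "hello")
assert not kernel.acquire("prog_b")   # multiplexing: prog_b must wait
kernel.release("prog_a")
assert kernel.acquire("prog_b")
```

Because both devices implement the same `write` interface, a program written against `ToyKernel` need not care which device it is talking to; that is the sense in which an abstraction layer "helps application programmers".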



An operating system kernel is not strictly needed to run a computer. Programs can be directly loaded and executed on the "bare metal" machine, provided that the authors of those programs are willing to do without any hardware abstraction or operating system support. This was the normal operating method of many early computers, which were reset and reloaded between the running of different programs. Eventually, small ancillary programs such as program loaders and debuggers were typically left in-core between runs, or loaded from read-only memory. As these were developed, they formed the basis of what became early operating system kernels.

There are four broad categories of kernels:

  • Monolithic kernels provide rich and powerful abstractions of the underlying hardware.
  • Microkernels provide a small set of simple hardware abstractions and use applications called servers to provide more functionality.
  • Hybrid (modified microkernels) are much like pure microkernels, except that they include some additional code in kernelspace to increase performance.
  • Exokernels provide minimal abstractions but allow the use of library operating systems to provide more functionality via direct or nearly direct access to hardware.

Monolithic kernels

The monolithic approach defines a high-level virtual interface over the hardware, with a set of primitives or system calls to implement operating system services such as process management, concurrency, and memory management in several modules that run in supervisor mode.
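The single-address-space, dispatch-table character of this design can be pictured in a few lines of Python. This is only a rough sketch: the syscall numbers, module names, and data structures are invented for illustration.

```python
# Sketch of a monolithic kernel's system-call layer: every service
# (process management, memory management, ...) is a module compiled
# into one program, reached through a single dispatch table.

processes = {}          # "process management" module state
memory = {}             # "memory management" module state

def sys_spawn(name):
    pid = len(processes) + 1
    processes[pid] = name
    return pid

def sys_alloc(pid, nbytes):
    memory.setdefault(pid, 0)
    memory[pid] += nbytes
    return memory[pid]

# One flat table: all modules live in the same address space.
SYSCALL_TABLE = {0: sys_spawn, 1: sys_alloc}

def syscall(number, *args):
    """The single entry point into 'supervisor mode'."""
    return SYSCALL_TABLE[number](*args)

pid = syscall(0, "init")        # spawn a process
total = syscall(1, pid, 4096)   # allocate memory for it
```

Note that `sys_spawn` and `sys_alloc` can freely touch each other's state; that shared address space is both the source of the design's efficiency and the reason a bug in one module can bring down the whole system.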

Even though every module servicing these operations is separate from the whole, the code integration is very tight and difficult to do correctly, and, since all the modules run in the same address space, a bug in one module can bring down the whole system. However, when the implementation is complete and trustworthy, the tight internal integration of components allows the low-level features of the underlying system to be effectively exploited, making a good monolithic kernel highly efficient. Proponents of the monolithic kernel approach make the case that if code is incorrect, it does not belong in a kernel, and that if it is correct, there is little advantage to the microkernel approach.

More modern monolithic kernels such as Linux, the FreeBSD kernel, and Windows NT can load executable modules at runtime, allowing easy extension of the kernel's capabilities as required while helping to keep the amount of code running in kernelspace to a minimum.
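A hypothetical sketch of runtime module loading, with invented class and service names, might look like this:

```python
# Sketch of runtime-loadable kernel modules: the core kernel starts
# with a minimal set of services, and a module can register new ones
# after "boot" (all names invented for illustration).

class Kernel:
    def __init__(self):
        # only a minimal service is built in
        self.services = {"read": lambda path: f"contents of {path}"}
    def load_module(self, module):
        # the module hands the kernel new service entry points
        self.services.update(module.exports())
    def call(self, name, *args):
        return self.services[name](*args)

class NetModule:
    """A 'driver' loaded only when networking is actually needed."""
    def exports(self):
        return {"send_packet": lambda dest, data: (dest, len(data))}

k = Kernel()
try:
    k.call("send_packet", "10.0.0.2", b"ping")  # not loaded yet
except KeyError:
    pass                                        # service unavailable
k.load_module(NetModule())
result = k.call("send_packet", "10.0.0.2", b"ping")
```

The point of the sketch is that code not needed on a given machine never has to be loaded at all, which is how module loading keeps resident kernelspace code to a minimum.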

[Image: Graphical overview of a monolithic kernel]

Monolithic kernels include:

  • Traditional UNIX kernels, such as the kernels of the BSDs
  • Linux kernel
  • Windows NT kernel (though see the section on hybrid kernels below)
  • Some educational kernels, such as Agnix

Microkernels


The microkernel approach consists of defining a very simple abstraction over the hardware, with a set of primitives or system calls to implement minimal OS services such as thread management, address spaces, and interprocess communication. All other services, those normally provided by the kernel, such as networking, are implemented in user-space programs referred to as servers.

Servers are programs like any others, allowing the operating system to be modified simply by starting and stopping programs. For a small machine without networking support, for instance, the networking server simply isn't started. Under a traditional system this would require the kernel to be recompiled, something well beyond the capabilities of the average end-user. In theory the system is also more stable, because a failing server simply stops a single program, rather than causing the kernel itself to crash.
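The kernel-provides-only-IPC idea can be sketched as a toy simulation. The server and message names below are invented, and real microkernels offer far richer primitives than this.

```python
# Sketch of the microkernel idea: the kernel implements only message
# passing; a "network server" is an ordinary program that other
# programs talk to via IPC.

from collections import deque

class Microkernel:
    """Kernel primitives: just register, send, and receive."""
    def __init__(self):
        self.mailboxes = {}
    def register(self, name):
        self.mailboxes[name] = deque()
    def send(self, dest, message):
        if dest not in self.mailboxes:
            return "error: no such server"   # server not started
        self.mailboxes[dest].append(message)
        return "ok"
    def receive(self, name):
        return self.mailboxes[name].popleft()

kernel = Microkernel()

# On a machine without networking, the server simply isn't started,
# and requests to it fail cleanly instead of crashing the kernel:
assert kernel.send("net_server", ("open", "example.org")) == "error: no such server"

# Starting the server is just starting a program and registering it:
kernel.register("net_server")
assert kernel.send("net_server", ("open", "example.org")) == "ok"
request = kernel.receive("net_server")
```

Adding or removing an OS service here means starting or stopping a program; nothing in the `Microkernel` class itself has to change, which is the modularity argument for the design.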

However, part of the system state is lost with the failing server, and it is generally difficult to continue execution of applications, or even of other servers, with a fresh copy. For example, if a (theoretical) server responsible for TCP/IP connections is restarted, applications could be told the connection was "lost" and reconnect through the new instance of the server. However, other system objects, like files, do not have these convenient semantics: they are supposed to be reliable, not to become unavailable randomly, and to keep all the information previously written to them. So database techniques such as transactions, replication, and checkpointing need to be used between servers in order to preserve essential state across single-server restarts.

Microkernels generally underperform traditional designs, sometimes dramatically. This is due in large part to the overhead of moving in and out of the kernel (a context switch) in order to move data between the various applications and servers. It was originally believed that careful tuning could reduce this overhead dramatically, but by the mid-1990s most researchers had given up. In more recent times, newer microkernels designed for performance first have addressed these problems to a very large degree. Nevertheless, the market for existing operating systems is so entrenched that little work continues on microkernel design.

[Image: Graphical overview of a microkernel]

Examples of microkernels and operating systems based on microkernels include Mach, L4, and QNX.

Monolithic kernels vs. microkernels

In the early 1990s, monolithic kernels were considered obsolete. The design of Linux as a monolithic kernel rather than a microkernel was the topic of a famous flame war (or what then passed for flaming) between Linus Torvalds and Andrew Tanenbaum.

There is merit in both sides of the arguments presented in the Tanenbaum/Torvalds debate.

Monolithic kernels tend to be easier to design correctly, and therefore may grow more quickly than a microkernel-based system. There are success stories in both camps. Microkernels are often used in embedded robotic or medical computers because most of the OS components reside in their own private, protected memory space. This is impossible with monolithic kernels, even with modern module-loading ones.

Although Mach is the best-known general-purpose microkernel, several other microkernels have been developed with more specific aims. L3 was created to demonstrate that microkernels are not necessarily slow. L4 is a successor to L3, and a popular L4 implementation called Fiasco is able to run Linux next to other L4 processes in separate address spaces; screenshots showing this feat have been published. A newer version called Pistachio also has this capability.

QNX is an operating system that has been around since the early 1980s and has a very minimalistic microkernel design. This system has been far more successful than Mach in achieving the goals of the microkernel paradigm. It is used in situations where software is not allowed to fail, ranging from the robotic arms on the Space Shuttle to machines that grind glass, where a tiny mistake may cost hundreds of thousands of dollars.

Many believe that, since Mach basically failed to address the issues that microkernels were meant to solve, all microkernel technology is useless. Mach enthusiasts state that this is a closed-minded attitude which has become popular enough that people simply accept it as truth.

Hybrid kernels (aka modified microkernels)

Hybrid kernels are essentially microkernels that have some "non-essential" code in kernel-space so that it runs more quickly than it would in user-space. This was a compromise struck early in the adoption of microkernel-based architectures by various operating system developers, before it was shown that pure microkernels could indeed be high performers.

For example, the Mac OS X kernel XNU, while based on the Mach 3.0 microkernel, includes code from the BSD kernel in the same address space in order to cut down on the latency incurred by the traditional microkernel design. Most modern operating systems fall into this category, Microsoft Windows NT and its successors being the most popular examples.

Windows NT's microkernel is called the kernel, while higher-level services are implemented by the NT executive. The Win32 personality was originally implemented as a user-mode server, but in recent versions it has been moved into the supervisor address space. The various servers communicate through a cross-address-space mechanism called Local Procedure Call (LPC), and notably use shared memory in order to optimize performance.

DragonFly BSD is the first non-Mach based BSD OS to adopt a hybrid kernel architecture.

Other hybrid kernels exist as well.

Some people confuse the term "hybrid kernel" with monolithic kernels that can load modules after boot. This is not correct. "Hybrid" implies that the kernel in question shares architectural concepts or mechanisms with both monolithic and microkernel designs: specifically, message passing and migration of "non-essential" code into userspace, while retaining some "non-essential" code in the kernel proper for performance reasons.

[Image: Graphical overview of a hybrid kernel]

Atypical Microkernels

There are some microkernels which should not be considered pure microkernels, because they do not implement some functions, such as server services. These "atypical" microkernels are characterized by a vast number of features which mark them as belonging to the "large microkernel" family. The best known in this category is Exec, the kernel of AmigaOS, and its direct descendant ExecSG ("Exec Second Generation").

Exokernels


Exokernels, also known as vertically structured operating systems, are a new and rather radical approach to OS design.

The idea behind exokernels is to force as few abstractions as possible on developers, enabling them to make as many decisions as possible about hardware abstractions. Exokernels are tiny, since functionality is limited to protection and multiplexing of resources.

Classic kernel designs (both monolithic and microkernels) abstract the hardware, hiding resources under a hardware abstraction layer or behind device drivers. For example, in these classic systems, if physical memory is allocated, one cannot be certain of its actual placement.

Exokernels enable low-level access to hardware: applications and abstractions may request specific memory addresses, disk blocks, and so on; the kernel only ensures that the requested resource is free and that the application is allowed to access it. This low-level hardware access allows the programmer to implement custom abstractions and omit unnecessary ones, most commonly to improve a program's performance.
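A toy sketch of this allocation model, with invented names, might look like the following. The point is that the kernel checks only protection, not placement: the application decides exactly which blocks it wants.

```python
# Sketch of exokernel-style resource allocation: the kernel does not
# abstract the disk; an application asks for specific block numbers,
# and the kernel only checks that they are free (illustrative only).

class Exokernel:
    def __init__(self, n_blocks):
        self.n_blocks = n_blocks
        self.owner = {}                   # block number -> application
    def request_block(self, app, block):
        if not (0 <= block < self.n_blocks):
            return False                  # no such resource
        if block in self.owner:
            return False                  # already allocated to someone
        self.owner[block] = app           # grant: app now manages it
        return True

ek = Exokernel(n_blocks=64)
assert ek.request_block("dbms", 7)        # dbms picks exact placement
assert not ek.request_block("editor", 7)  # kernel enforces protection only
assert ek.request_block("editor", 8)
```

What the block is used for (a file, a database page, swap) is entirely up to the application or its library operating system; the kernel never interprets the contents.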

Exokernels utilize library operating systems (libOSes) to provide abstractions. libOSes provide application writers with higher-level, traditional OS abstractions, though in a more flexible manner, since applications may implement custom abstractions. Theoretically, an exokernel system could provide different libOSes, allowing different kinds of operating systems (Windows, Unix) to run under a single exokernel.

The exokernel concept has been around since at least 1995, but as of 2005 exokernels are still a research effort and have not been used in any major commercial operating system. One proof-of-concept exokernel operating system is Nemesis, written by the University of Cambridge, the University of Glasgow, Citrix Systems, and the Swedish Institute of Computer Science. MIT has also built several exokernel-based systems, including ExOS.

[Image: Graphical overview of an exokernel]

No-kernel


The TUNES Project and UnununiumOS are no-kernel experiments. No-kernel software is not limited to a single centralizing entry point.


