MicroKernel Architectural Pattern

Aykhan Nazimzada
6 min read · Nov 22, 2020


The word “microkernel” includes the term “kernel”, which originally means the nucleus or core part of something. In computer science, it denotes the part of the operating system that directly controls the hardware and manages the available computer resources. The term first appeared in the context of operating systems to refer to the core of an operating system, which is then extended and enhanced by integrating new functionalities and services, usually in the form of layers and extensions. Different variants of this concept appeared in the area of operating systems; two important ones were the “microkernel” and the “monolithic kernel”. Both represent the core of the operating system, manage computer resources, and provide basic operating system services. The difference is that in the microkernel approach, which appeared around the early 80s, the core of the operating system is smaller and more efficient: the user services and the kernel services are implemented in different address spaces, and the kernel is kept to a minimum, comprising only the services that manage processes and memory. In the monolithic kernel, all services, including user services, are implemented in the same address space, which makes the core of the operating system bigger and less flexible. This doesn’t mean that those services disappear in the microkernel approach; it means they are separate from the core, and we have some flexibility in choosing which services to implement, change, or eliminate. As long as we have a well-defined core API, we can extend the core and customize the operating system as needed.

The good news is that this concept and its principles are not restricted to operating system development. The approach, which became an architectural pattern over time, proved to be very effective in building software applications, and with the rise of distributed computing it became a good blueprint for some types of web-based systems.

That covers the origin of the term microkernel. Now, what is the “Microkernel Pattern”?

“The Microkernel architectural pattern applies to software systems that must be able to adapt to changing system requirements. It separates the minimal functional core from extended functionality and customer specific parts. The microkernel also serves as a socket for plugging in these extensions and coordinating their collaboration.”

Let’s start by defining the major components of this architectural pattern. Conceptually it has two main parts: the core and the extensions. As an architectural pattern, however, it consists of five main elements: core, internal servers, external servers, adapter, and clients.

5 Major Components of MicroKernel Architecture

The core consists of the functional and operational code that is necessary for the application to run. It contains the minimal set of functions required for the system to be operational, meaning it is the code that gets executed every time the system is used. This element takes the form of a monolithic mini-application; in some cases, the core system is a layered application that lives in one place. Because the core provides a plug-in mechanism, the developer has some flexibility in deciding what functionality is offered to the user. For this reason, the core needs to include mechanisms for coordinating and managing the collaboration of the various plugged-in modules. In particular, when web-based systems are built with the microkernel approach, plug-ins will often be distributed and run in separate processes, so the core system needs a mechanism to check the availability and responsiveness of the internal servers.

Core
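To make this concrete, here is a minimal sketch (not from the pattern literature, just an illustration) of a core that exposes a plug-in mechanism and keeps a registry of the servers plugged into it. The names ServerPlugin and Microkernel are hypothetical, chosen only for this example.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical contract that every plugged-in server fulfils.
interface ServerPlugin {
    String name();                  // unique name used for look-up
    String handle(String request);  // minimal request entry point
    boolean isAvailable();          // lets the core check responsiveness
}

// The core: kept minimal, it only registers, looks up, and coordinates plug-ins.
final class Microkernel {
    private final Map<String, ServerPlugin> registry = new ConcurrentHashMap<>();

    public void register(ServerPlugin plugin) {
        registry.put(plugin.name(), plugin);
    }

    public Optional<ServerPlugin> lookup(String name) {
        // The core checks availability before handing the server out.
        return Optional.ofNullable(registry.get(name))
                       .filter(ServerPlugin::isAvailable);
    }
}
```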

Internal servers are directly related to the core. Remember that the core should be kept to a minimum so that, when it is shared among different applications or used for coordination, it maintains its performance. If the system requires more core functionality than the microkernel should contain, that functionality moves to internal servers: we implement the additional functionality of the system through internal servers running in separate processes. The core and the internal servers can then be seen as one entity that makes up the basic functionality of the system. These servers may operate independently of the core or may have to interoperate; in the latter case, the core handles their coordination. This has several advantages, such as flexibility, portability, and ease of maintenance. Internal servers can live on the same computer or be distributed over a network.

Internal Servers
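As a sketch of the idea, an internal server could implement extra core-level functionality that is kept out of the microkernel itself. This example reuses the hypothetical ServerPlugin interface from the previous sketch; the tiny key-value store and its request format are invented purely for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical internal server: additional core functionality (here, a
// tiny key-value store) moved out of the microkernel.
final class StorageInternalServer implements ServerPlugin {
    private final Map<String, String> store = new ConcurrentHashMap<>();

    @Override public String name() { return "storage"; }
    @Override public boolean isAvailable() { return true; } // always up in this sketch

    @Override public String handle(String request) {
        // Illustrative request formats: "put key=value" or "get key"
        if (request.startsWith("put ")) {
            String[] pair = request.substring(4).split("=", 2);
            store.put(pair[0], pair[1]);
            return "stored " + pair[0];
        }
        if (request.startsWith("get ")) {
            return store.getOrDefault(request.substring(4), "<missing>");
        }
        return "unsupported request";
    }
}
```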

External servers are quite different from internal servers. These extensions are abstractions over the microkernel, and they implement their own business logic on top of it. In other words, we can use the same core and internal servers for different but similar application domains, while the external servers differ depending on the application domain. For example, in financial technology or banking systems, different enterprises with different policies could use the same core and internal servers, but different external servers implement different abstractions and business logic. So external servers are used to build different versions of the system. External servers also run in separate processes and expose their functionality through an API.

External Servers
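Continuing the earlier sketches, an external server could layer domain-specific business logic on top of the same core and internal servers. The name BankingExternalServer and its toy deposit policy are hypothetical; a different enterprise would plug in a different server while reusing the same microkernel.

```java
// Hypothetical external server: domain-specific business logic built on top
// of the core and the internal servers registered with it.
final class BankingExternalServer implements ServerPlugin {
    private final Microkernel kernel;

    BankingExternalServer(Microkernel kernel) { this.kernel = kernel; }

    @Override public String name() { return "banking"; }
    @Override public boolean isAvailable() { return true; }

    @Override public String handle(String request) {
        // Illustrative policy: record every deposit via the internal storage server.
        if (request.startsWith("deposit ")) {
            return kernel.lookup("storage")
                         .map(s -> s.handle("put lastDeposit=" + request.substring(8)))
                         .orElse("storage unavailable");
        }
        return "unsupported operation";
    }
}
```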

Clients are components external to the system, and they access its functionality through external servers. A client is associated with only one external server, and it communicates with that server through an adapter, which serves as a middleware, much like an API gateway in a microservices system. We use an adapter to avoid coupling: its job is to forward client requests and to spare the client from changing whenever the external server’s API changes. An adapter is not strictly required in such systems, but it is preferable.

Clients and Adapter
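A minimal adapter sketch, again reusing the hypothetical types from above: it hides the look-up and forwarding from the client, so if the external server changes, only the adapter has to follow.

```java
// Hypothetical adapter: decouples the client from the external server.
final class ClientAdapter {
    private final Microkernel kernel;
    private final String externalServerName;

    ClientAdapter(Microkernel kernel, String externalServerName) {
        this.kernel = kernel;
        this.externalServerName = externalServerName;
    }

    public String send(String request) {
        // Ask the microkernel for the external server, then forward the request.
        return kernel.lookup(externalServerName)
                     .map(server -> server.handle(request))
                     .orElse("external server not available");
    }
}
```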

Now, let’s see a microkernel system in action. First, the client calls the system requesting a service. The adapter receives the call, processes it if required, and forwards it to the microkernel. Since the microkernel handles the communication and coordination side of the story, it fetches the address of the necessary external server and sends it back to the adapter. The adapter then establishes a connection between the client and the external server and sends the appropriate request to the server. After receiving the request, the external server analyzes it, prepares the answer, and sends it back upon completion. Finally, the adapter returns the response to the client.

How MicroKernel Architecture works
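Putting the earlier sketches together, the flow described above might be wired up roughly like this (purely illustrative):

```java
public final class MicrokernelDemo {
    public static void main(String[] args) {
        Microkernel kernel = new Microkernel();

        // The core is extended with an internal server and an external server.
        kernel.register(new StorageInternalServer());
        kernel.register(new BankingExternalServer(kernel));

        // The client only talks to the adapter; the adapter asks the kernel to
        // locate the external server and forwards the request to it.
        ClientAdapter adapter = new ClientAdapter(kernel, "banking");
        System.out.println(adapter.send("deposit 100")); // prints "stored lastDeposit"
    }
}
```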

We have just illustrated the simple flow of a request. In practice there are additional complexities, such as whether the functionality the external server needs to fulfill the request is provided by the microkernel or by the internal servers. The main point of this pattern is proper communication and coordination. Famous examples of the Microkernel Architectural Pattern are operating systems, financial applications, IDEs, text editors, and browsers.

Now that we have a general understanding of the microkernel architectural pattern, let’s look at its trade-offs.

Advantages:

· Extensibility

· Portability

· Agility

· Responsiveness

· Ease of Development

· Ease of Testing

· Ease of Maintenance

Limitations:

· Scaling

· High Complexity

· Heavy inter-process communication

The Microkernel Pattern can also be used in hybrid architectures, which is a good way to benefit from its advantages while avoiding its limitations. If you are interested, you can also read my article “Software Architecture & Design”, which consists of two parts and provides brief information about software architecture and famous architectural patterns.
