Simply put, virtualization means making something look like something else. Technically speaking, virtualization refers to the abstraction of computer resources. This abstraction can be achieved in various ways: via software, hardware, or a mix of both.
Virtualization technologies have been around in various forms for a long time. This technology has been especially pervasive in recent years due to various factors discussed later on. This chapter discusses some common virtualization techniques on Linux platforms.
We also cover containers (containerization), which provide ways for packaging and delivering applications or even entire operating systems in self-contained environments without the overhead of traditional machine virtualization technologies. Specifically, we discuss the popular Docker platform as our container implementation, and we show how to deploy a web server in a container.
The advantages and the reasons behind the proliferation of virtualization are many and can be grouped broadly into technical and nontechnical factors. The nontechnical factors include the following:
• Desire for a more sustainable (greener) computing model. Common sense tells us that ten virtual machines running on one physical server have a smaller carbon footprint than ten physical machines serving the same purpose.
• Cost-saving advantages. Virtualization helps to save on costs of acquiring and maintaining hardware. Again, common sense tells us that ten virtual machines are, or should be, cheaper than ten physical machines.
• Greater return on investment (ROI). Increased utilization and better leveraging of existing hardware leads to greater ROI for organizations and individuals.
Some of the technical factors include the following:
• Virtualization improves server and application availability and reduces server downtimes. Reduced server downtimes can be achieved by using different techniques such as live host migration.
• Virtualization complements cloud computing. Virtualization is arguably one of the most important enablers of today’s cloud-centric approach to computing, data management, and application deployment.
• Virtualization offers better cross-platform support. For example, virtualization makes it possible to run a Microsoft Windows operating system within Linux or to run a Linux-based operating system within Microsoft Windows.
• Virtualization provides a great environment for testing and debugging new applications and/or operating systems. Virtual machines can be wiped clean quickly or restored to a known state. In the same vein, virtual machines can be used to test and run legacy or old software. Virtual environments are also often easier and quicker to set up.
In this section, we try to lay the groundwork for common virtualization concepts and terminologies that appear in this chapter and that are used in everyday discussions about virtualization:
• Guest OS (VM) Also known as a virtual machine (VM). The operating system that is being virtualized.
• Host OS The physical system on which the guest operating systems (VMs) run.
• Hypervisor (VMM) Also referred to as the virtual machine monitor (VMM). A hypervisor provides a CPU-like interface to virtual machines or applications. It is at the heart of the entire virtualization concept and can be implemented with support built natively into the hardware, purely in software, or a combination of both.
• Full virtualization Also known as bare-metal or native virtualization. The host CPU(s) has extended instructions that allow the VMs to interact with it directly. Guest OSs that can use this type of virtualization do not need any modification. As a matter of fact, the VMs do not know—and need not know—that they are running in a virtual platform. Hardware virtual machine (HVM) is a vendor-neutral term used to describe hypervisors that support full virtualization.
In full virtualization, the virtual hardware seen by the guest OS is functionally similar to the hardware on which the host OS is running.
Examples of vendor CPUs and platforms that support the required extended CPU instructions are Intel Virtualization Technology (Intel VT), AMD Virtualization (AMD-V), and IBM z Systems.
Examples of virtualization platforms that support full virtualization are Kernel-based Virtual Machine (KVM), Xen, IBM’s z/VM, VMware, VirtualBox, and Microsoft’s Hyper-V.
• Paravirtualization Another type of virtualization technique. Essentially, this class of virtualization is done via software. Guest operating systems that use this type of virtualization typically need to be modified. To be precise, the kernel of the guest OS (VM) needs to be modified to run in this environment. This required modification is the one big disadvantage of paravirtualization. On the upside, paravirtualized guests can be relatively faster than their fully virtualized counterparts.
Examples of virtualization platforms that support paravirtualization are Xen and UML (User-Mode Linux).
• Containerization This is a little difficult to explain/define because it is not a pure virtualization technique in the classical sense. Containerization may not even be a word for all we know! But we’ll give the definition a shot: containerization refers to a technique for isolating very specific parts/components of the operating system to enable an application or feature to function almost autonomously. (Notice how we cleverly avoided using the word virtualization in that definition?)
Even though containerization is somewhat an emerging technique, we should be clear that it is by no means a new technique. The basic ideas and concepts around containerization have been around for a long while in various operating systems. The tools for implementing and managing containerization as well as the possible use cases are what are somewhat new and emerging. One such popular tool is Docker. We’ll discuss application containerization as implemented in Docker in more detail later in this chapter.
Many virtualization implementations run on Linux-based systems (and Windows-based systems). Some are more mature than others, and some are easier to set up and manage than others, but the objective of virtualization remains pretty much the same across the board.
We’ll look at some of the more popular virtualization implementations in this section.
Hyper-V
This is Microsoft’s virtualization implementation. It currently can be used only on hardware that supports full virtualization (that is, Intel VT and AMD-V processors). Hyper-V has a great management interface and is well integrated with the newest Windows Server family of operating systems.
Kernel-Based Virtual Machine (KVM)
This was the first virtualization implementation to be officially merged into the Linux kernel. It currently supports only full virtualization. KVM is de rigueur in this chapter.
QEMU
QEMU falls into the class of virtualization called “machine emulators and virtualizers.” It can emulate a completely different machine architecture from the one on which it is running (for example, it can emulate an ARM architecture on an x86 platform). The code for QEMU is open source and mature, and as such it is used by many other virtualization platforms and projects.
VirtualBox
This is a popular virtualization platform. It is well known for its ease of use and nice user interface. It has great cross-platform support. It supports both full virtualization and paravirtualization techniques.
VMware
This is one of the earliest and most well-known mainstream commercial virtualization implementations. It offers great cross-platform support, an excellent user and management interface, and great performance. Several VMware product families are available to cater to various needs (from desktop needs all the way to enterprise needs). Some versions of VMware are free (such as VMware Server and VMware Player), and some are purely commercial (such as VMware vSphere [ESXi], VMware Workstation, and so on).
Xen
This is another popular virtualization implementation in the FOSS world. The code base is quite mature and well tested. It supports both the full and paravirtualization methods of virtualization. Xen is considered a high-performing virtualization platform. The Xen project is sponsored by several large companies, and the open source project is maintained at www.xenproject.org.
In the Xen world, domain is a broad term used to describe the access level in which a VM runs. The two common domains are as follows:
• Domain 0 (dom0) This is the control or management domain. It refers to a special VM with special privileges and capabilities to access the host hardware directly. It is often responsible for starting other VMs that run in the user domain (domU).
• Domain U (domU) This is the user domain. It is an unprivileged domain where the virtual machines (guest VMs) running within it do not have direct access to the host hardware.
KVM is the official Linux answer to providing a native virtualization solution via the Linux kernel. KVM gives the Linux kernel hypervisor capabilities. Current stable implementations of KVM are supported on the x86 platforms that support virtualization CPU extensions (such as those provided in the Intel VT and AMD-V lines).
Because KVM is implemented directly in the Linux kernel, it has great support across a wide variety of Linux distros. This means that on a bare-bones KVM setup, you should be able to use the same set of instructions provided in the following sections on any Linux distro.
The /proc/cpuinfo pseudo-file-system entry provides details about the running CPU on a Linux system. Among other things, the entry shows the special flags or extensions that the running CPU supports.
TIP You might need to toggle on the virtualization switch/option in the system BIOS or UEFI of some systems to enable support for full virtualization. The exact name of the option and sequence for doing this varies from manufacturer to manufacturer, so your mileage may vary. Your best bet is to consult the documentation for your specific hardware. On systems where this is necessary, the Linux kernel may not be able to see and make use of the virtualization flags in the CPU until the proper options are enabled.
On an Intel platform, the flag that shows support for full hardware-based virtualization is the vmx flag. To check whether an Intel processor has support for vmx, you could grep for the desired flag in /proc/cpuinfo, like so:
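For example, a check along these lines works on most Linux systems (the command prints nothing on CPUs without the extension):

```shell
# Look for the vmx flag among the CPU capabilities
grep vmx /proc/cpuinfo
```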
The presence of vmx in this sample output shows that the necessary CPU extensions are in place on the Intel processor.
On an AMD platform, the flag that shows support for full hardware-based virtualization is the Secure Virtual Machine (svm) flag. To check whether an AMD processor has support for svm, you could grep for the desired flag in /proc/cpuinfo, like so:
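The AMD check is analogous to the Intel one:

```shell
# Look for the svm flag among the CPU capabilities
grep svm /proc/cpuinfo
```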
The presence of svm in this sample output shows that the necessary CPU extensions are in place on the AMD processor.
As mentioned, KVM has great cross-platform/distro support. In this section, we will look at a sample KVM implementation on the Fedora distribution of Linux.
We’ll use a set of tools that is based on the libvirt C library. In particular, we will be using the Virtual Machine Manager (virt-manager) application tool kit, which provides tools for managing virtual machines. It comprises both full-blown GUI front-ends and command-line utilities.
In this example, we will use the virt-install utility, a CLI tool that provides an easy way to provision virtual machines. It also exposes an API that the GUI virt-manager application leverages for various tasks, such as its graphical VM creation wizard.
Following are the specifications on our sample host system:
• Hardware that supports full virtualization (specifically AMD-V)
• 32GB of RAM and sufficient free storage space on the host disk
• Host OS running Fedora distribution of Linux
For our sample virtualization environment, our objectives are as follows:
• Use the built-in KVM virtualization platform.
• Set up a guest OS (VM) running a Fedora distribution of Linux. We will install Fedora using the install media in the form of an ISO file downloaded and saved as /media/Fedora-Server-dvd-x86_64*.iso on the host system. If you want to set up a guest VM running Fedora or some other Linux distro, substitute the sample ISO filename used here with the actual filename of a real ISO file that you possess (for example, openSUSE-Tumbleweed-DVD-x86_64-Current.iso, Fedora-Workstation-Live-*.iso, rhel-8.*-x86_64-dvd.iso, ubuntu-20.04-desktop-amd64.iso, and so on).
• Allocate a total of 10GB of disk space to the VM.
• Allocate 2GB RAM to the VM.
We will use the following steps to achieve our objectives:
1. Use dnf to install the Virtualization package group. This package group comprises various individual packages (such as virt-install, qemu-kvm, and virt-manager) that provide a virtualization environment.
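On Fedora, this step might look like the following (the group name and exact syntax can vary between Fedora and dnf versions, so treat this as a sketch):

```shell
# Install the Virtualization package group
# (pulls in virt-install, qemu-kvm, virt-manager, and friends)
sudo dnf install @virtualization
```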
2. On systemd-enabled distros, you can start the libvirtd service right away:
3. Use the systemctl utility to make sure that the libvirtd service starts up automatically during system boots:
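Steps 2 and 3 might look like this on a systemd-enabled distro:

```shell
sudo systemctl start libvirtd    # step 2: start the service now
sudo systemctl enable libvirtd   # step 3: start automatically at boot
```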
4. Use the virsh utility to make sure that virtualization is enabled and running properly on the system:
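One sanity check along these lines (on a fresh setup, an empty domain list with no error messages is a healthy result):

```shell
# List all domains; errors here usually point to libvirtd or KVM problems
sudo virsh list --all
```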
As long as the previous output does not return any errors, we are fine.
5. On our sample server, we will store all the backing storage files pertaining to each VM under a custom directory path named /home/vms/.
We will begin by creating the directory structure that will house our VM images:
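For example:

```shell
# Create the custom directory that will hold the VM backing storage files
sudo mkdir -p /home/vms
```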
6. Use the virt-install utility to set up the virtual machine. The virt-install utility supports several options that allow you to customize the new VM at installation time. Launch virt-install by running the following:
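A virt-install invocation matching the stated objectives (2GB RAM, a 10GB disk under /home/vms, and the downloaded ISO) might look like the following sketch. The disk filename, the vCPU count, and the exact ISO filename are stand-ins; substitute your own, and note that option spellings vary slightly between virt-install versions:

```shell
sudo virt-install \
    --name fedora-demo-VM \
    --memory 2048 \
    --vcpus 2 \
    --disk path=/home/vms/fedora-demo-VM.img,size=10 \
    --cdrom /media/Fedora-Server-dvd-x86_64.iso
```

The guest name fedora-demo-VM matches the one managed with virsh later in this section.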
The parameters used in the previous command are explained here:
7. The newly configured VM should start up immediately in the Virt Viewer window. The VM will attempt to boot from the install media (ISO file) referenced in the value of the cdrom option. A window similar to the one shown here will open:
From here on, you can continue the installation as if you were installing on a regular machine (see Chapter 2). That’s it!
The virt-install command offers a rich set of options. You should definitely take a minute to look over its manual page (man virt-install). For example, it has options (such as --os-variant) that will allow you to optimize the configuration for different guest operating system platforms (such as Windows, Linux, UNIX, and so on) out of the box.
Managing KVM Virtual Machines
In the preceding section, we walked through initially setting up a virtual machine. In this section, we will look at some typical tasks associated with managing our guest virtual machines.
We’ll use the feature-rich virsh program for most of our tasks. virsh is used for performing administrative tasks on virtual guest domains (machines), such as shutting down, rebooting, starting, and pausing the guest domains. virsh is based on the libvirt C library.
virsh can run directly from the command line using appropriate options, or it can run inside its own command interpreter. We will use virsh inside its own shell in the following examples:
1. To start virsh in its own minimal interactive shell, type the following:
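Running the bare command drops you into the interactive shell, whose prompt looks like virsh #:

```shell
sudo virsh
```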
2. virsh has its own built-in help system for the different options and arguments that it supports. To see a quick help summary for all supported arguments, type this:
3. To list all the configured inactive and active domains on the hypervisor, type the following:
The output shows that the fedora-demo-VM guest domain is currently shut off (inactive).
4. To view detailed information about the fedora-demo-VM guest domain, type this:
5. Assuming the fedora-demo-VM guest is not currently running, you can start it by running the following:
6. Use the shutdown argument to shut down the fedora-demo-VM guest gracefully:
7. If the fedora-demo-VM guest has become wedged or frozen and you want to power it off ungracefully (this is akin to yanking out its power cable), type this:
8. To undefine the fedora-demo-VM guest domain or remove its configuration from the hypervisor, type this:
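For reference, the subcommands used in steps 2 through 8 can equally be run directly from the system shell by prefixing each with virsh, like so:

```shell
virsh help                       # step 2: quick summary of supported commands
virsh list --all                 # step 3: all active and inactive domains
virsh dominfo fedora-demo-VM     # step 4: details about the guest domain
virsh start fedora-demo-VM       # step 5: start the guest
virsh shutdown fedora-demo-VM    # step 6: graceful shutdown
virsh destroy fedora-demo-VM     # step 7: ungraceful power-off
virsh undefine fedora-demo-VM    # step 8: remove the guest's configuration
```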
Setting Up KVM in Ubuntu/Debian
We mentioned that one main difference between the virtualization implementations on the various Linux distros is in the management tools built around the virtualization solution.
The KVM virtualization that we set up earlier used management tools (virt-install, and so on) that work relatively seamlessly across various platforms. Here, we will run through a quick-and-dirty setup of KVM virtualization using lower-level tools that should work with little modification on any Linux distro.
Specifically, we will look at how to set up KVM in a Debian-based distro, such as Ubuntu. The processor on our sample Ubuntu server supports the necessary CPU extensions. We will be installing on a host computer with an Intel VT–capable processor.
The target virtual machine will be any recent copy of the desktop version of Ubuntu and will be installed using the ISO image downloaded from http://releases.ubuntu.com.
1. Install the KVM and QEMU packages. On the Ubuntu server, type this:
2. Manually load the kvm-intel module:
NOTE Loading the kvm-intel module will also automatically load the required kvm module. On an AMD-based system, the required module is instead called kvm-amd.
3. We are going to run KVM as a regular user, so we need to add our sample user (yyang) to the kvm system group:
4. Log out of the system and log back in as the user yyang so that the new group membership can take effect.
5. Create a folder in the user’s home directory to store the virtual machine and then change into that directory:
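Steps 1 through 5 might look like this on an Ubuntu server (package names can differ slightly between releases; the user yyang and the ~/vms directory follow the text):

```shell
sudo apt install qemu-kvm    # step 1: KVM and QEMU packages
sudo modprobe kvm-intel      # step 2: load the kvm-intel module
sudo adduser yyang kvm       # step 3: add user yyang to the kvm group
# step 4: log out and log back in as yyang
mkdir ~/vms && cd ~/vms      # step 5: a home for the virtual machine
```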
6. We’ll use the qemu-img utility to create a disk image for the virtual machine. The image will be 10GB in size. The file that will hold the virtual disk will be named disk.img. Type the following:
The -f option specified with the qemu-img command is used to specify the disk image format. Here we use the qcow2 format. This format offers space-saving advantages by not allocating the entire disk space specified up front. Instead, a small file is created, which grows as data is written to the virtual disk image.
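The disk-creation step described above would look like this:

```shell
# Create a 10GB virtual disk in the space-efficient qcow2 format
qemu-img create -f qcow2 disk.img 10G
```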
7. Once the virtual disk image is created, we can fire up the installer for the VM by passing the necessary options to the kvm command directly. Here’s the command:
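Given the options described next, the command might look like the following (the ISO filename is a stand-in for the actual image you downloaded):

```shell
kvm -m 2048 -cdrom ubuntu-20.04-desktop-amd64.iso -boot d disk.img
```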
Here are the options that were passed to the kvm command:
-m Specifies the amount of memory to allocate to the VM. In this case, we specified 2048MB, or 2GB.
-cdrom Specifies the virtual CD-ROM device. In this case, we point to the ISO image that was downloaded earlier and saved under the current working directory.
-boot d Specifies the boot device. In this case, d means CD-ROM. Other options are floppy (a), hard disk (c), and network (n).
disk.img Specifies the hard disk image. This is the virtual disk that was created earlier using qemu-img.
8. The newly configured VM should start up immediately in the QEMU window. The VM will attempt to boot from the ISO image specified by the -cdrom option. A window similar to the following will open.
9. From here on, you can continue the installation as if you were installing on a regular machine (see Chapter 2). The particular version of Ubuntu that we use in this example is Live Desktop. Among other things, this means that you can try out the operating system and use it without actually installing/writing anything to disk if you don’t want to.
10. Once the operating system has been installed into the VM, you can boot the virtual machine by using the kvm command without the install media.
You will notice that this time we no longer need to specify the ISO image as the boot media, since we are done with the installation. That’s it!
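Booting the installed VM then reduces to something like:

```shell
# Boot directly from the virtual disk; no -cdrom or -boot option needed
kvm -m 2048 disk.img
```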
Containers refer to a class of quasi-virtualization techniques that use various mechanisms to provide isolated runtime environments for applications (or even entire operating systems). The idea behind containers is to provide only the minimum and most portable set of requirements that an application or software stack requires to run. The virtualization solutions (KVM, VMware, and so on) that we discussed earlier in the chapter are geared toward virtualizing almost every aspect of the computing infrastructure, including the hardware, the operating system, and so on.
As already hinted, the basic premise behind containerization has been around in the UNIX world for a while; the only new things these days are the implementations and the supporting tools around the concept. Examples of old and current implementations (and engines) are chroot, LXC, libvirt LXC, systemd-nspawn, Solaris Zones, FreeBSD Jails, containerd, Docker, podman, and Kubernetes. Docker and Kubernetes are, respectively, a popular container implementation and a popular container orchestration platform. We’ll focus on Docker in this section.
Containers vs. Virtual Machines
Even though containers and VMs appear to overlap in functionality, definition, and use cases, they are quite different beasts. Here are some of their similarities and differences:
Docker
Docker is a set of tools and interfaces used for developing, managing, shipping, and running applications in the form of containers. Docker can make use of various mechanisms to access any required internals/interface of the Linux kernel. Broadly speaking, it uses the namespace and control groups (cgroups) interface exposed in the Linux kernel to provide its isolation and resource-sharing benefits.
Some terms and concepts are unique and commonly used in the Docker world. Here, we outline and explain some of these concepts:
• Images Docker images form the building blocks for the Docker ecosystem. They are the read-only blueprints from which containers are created. When Docker runs a container from an image, it adds a read-write layer on top of the image.
• Containers Docker containers are the actual workhorses created from Docker images to run the applications.
• Registry This refers to any repository or source of Docker images. The default/main registry is a public repository called Docker Hub, which contains thousands of images contributed by various open source projects, companies, and individuals. The registry can be accessed via its web interface (https://hub.docker.com) or via various client tools.
• Docker Host The host system on which the Docker application is installed.
• Docker daemon The daemon is responsible for managing the containers on the Docker Host where it is running. It receives commands from the Docker client.
• Docker client Consists of the user-land tools and applications that issue commands to the daemon to perform container-related management tasks.
• Dockerfile This is a type of configuration file that describes the steps needed to assemble an image.
Docker Installation and Startup
On our sample Fedora server, we’ll walk through installing Docker, starting the daemon service, and enabling it for future automatic startups in the following steps:
1. While logged in as a privileged user, use dnf to install Docker by running the following:
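On our sample Fedora server, the step might look like this (recent Fedora releases ship the Docker engine under the moby-engine package name, so substitute as needed):

```shell
sudo dnf -y install docker
```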
2. After the software has been successfully installed, use systemctl to check the status of the docker service:
3. If the output shows that the service is disabled from automatic startup, you can simultaneously enable it and start it by running this:
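Steps 2 and 3 with systemctl:

```shell
sudo systemctl status docker        # step 2: check the current state
sudo systemctl enable --now docker  # step 3: enable at boot and start immediately
```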
TIP If the docker daemon fails to start on Red Hat–like distros with a Btrfs-formatted file system running SELinux in enforcing mode, you can try working around this issue by removing --selinux-enabled from the OPTIONS line in the /etc/sysconfig/docker file. Save your changes to the file and then try to start docker again.
TIP Some newer Linux distros have enabled Control Group V2 (cgroupv2) by default, but unfortunately not all applications have been ported to use and take advantage of its advanced features (such as unified hierarchy). To get older docker versions working on such systems during this awkward transitional period, you might need to disable full-fledged cgroupv2 and enable the older cgroupv1 via a kernel boot flag. For example, on an affected EFI Fedora distro, you can append the systemd.unified_cgroup_hierarchy=0 flag to the GRUB_CMDLINE_LINUX parameter in the /etc/default/grub file and generate an updated GRUB boot configuration file using grub2-mkconfig and then reboot when done. To generate an updated GRUB configuration, run:
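The regeneration step might look like this (the output path varies by distro and firmware setup, so confirm yours before running):

```shell
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```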
Using Docker Images
Docker images are the “bits and bobs” that make up containers. You will find thousands of images created by various projects, organizations, and individuals in the official Docker public registry. The registry stores images that are built for various uses, ranging from entertainment/multimedia uses, through development environments and various network servers (HTTP, DNS, VoIP, database, and so on), all the way to complete operating systems that have been packaged into containers.
We will walk through setting up a container using a Docker image created by the official Apache HTTP Server Project (http://httpd.apache.org/):
1. Query your local Docker Host for any existing images:
The listing should be empty on a brand-new Docker Host/installation.
2. Search the public Docker registry for images that have the keyword httpd in their name:
The output should display various available images. The NAME and OFFICIAL columns provide hints as to whether the image was uploaded by the official project maintainers or by individuals. The STARS column hints at the image popularity.
3. On our sample system, we will select and pull down the httpd image from the official source (NAME = docker.io/httpd), which also happens to be the one with the highest rating. Type the following:
4. Query your local image repository to list available images now on your system:
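Steps 1 through 4 with the docker client might look like this:

```shell
sudo docker images                # step 1: list local images (empty on a fresh host)
sudo docker search httpd          # step 2: search the public registry
sudo docker pull docker.io/httpd  # step 3: pull the official httpd image
sudo docker images                # step 4: the httpd image should now be listed
```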
Using Docker Containers
We downloaded the official httpd Docker image in the previous section. The image is not terribly useful in its current form, as it is just a bunch of files on our file system. To make the image useful, we have to spin up our very own container from it and customize it for our own use.
1. Without further ado, let’s fire up our httpd container:
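Based on the options described next, the docker run command might look like this:

```shell
sudo docker run --net host --name my-1st-container -p 80:80 -d docker.io/httpd
```

Note that when host networking is used, the container shares the host’s network stack, so the explicit port mapping is effectively redundant; both options are shown here because both are described below.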
Here are the options we passed to the docker run command:
--net Specifies the type of networking we want to use for the container. We’ve specified the host networking option.
--name This is just a descriptive name to assign to this container instance. We are using my-1st-container as the name in this example.
-p (abbreviated form of --publish) Maps or publishes a container’s port(s) to the host. Here, we are mapping port 80 (the default HTTP port) on the container to port 80 on the host. The syntax is -p <host_port>:<container_port>.
-d (abbreviated form of --detach) Runs the container in the background, as a daemon.
docker.io/httpd Specifies the image name. Here, we are referring to the image that we downloaded earlier.
2. Query docker for the list of running containers:
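For example:

```shell
sudo docker ps
```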
3. Once the container launches successfully, you can point your web browser to the IP address of the Docker Host to view the default/basic web page being served from the container, or use any capable CLI program like curl.
The IP address of our sample Docker Host is 192.168.1.100, so we’ll use curl to browse the web server running in the container, like so:
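For example (192.168.1.100 is our sample host’s address; substitute your own):

```shell
curl http://192.168.1.100/
```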
TIP The actual port that was mapped from the container to the host needs to be accessible through any host-based firewall rules. For this httpd container example, you’ll need to open port 80 (host port) on the Docker Host. On Red Hat–like distros, you can use firewall-cmd to do this, like so:
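One way to open the port (opening the predefined http service instead of the raw port also works):

```shell
sudo firewall-cmd --add-port=80/tcp --permanent
sudo firewall-cmd --reload
```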
4. If you’ve made some changes and you need to restart the container, type the following:
5. To immediately stop the my-1st-container container process, type this:
6. To permanently delete the sample (not currently running) my-1st-container container, type the following:
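Steps 4 through 6 would look like this:

```shell
sudo docker restart my-1st-container  # step 4: restart the container
sudo docker stop my-1st-container     # step 5: stop the container process
sudo docker rm my-1st-container       # step 6: delete the stopped container
```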
Numerous virtualization technologies and implementations exist today. Some of them have been around much longer than others, but the ideas and needs remain almost the same. Virtualization and container technologies are smack in the middle of the evolution and adoption of cloud computing technologies.
We looked at common virtualization offerings in the Linux world. We paid particular attention to the KVM platform because of its native and complete integration into the Linux kernel. We showed two examples of actually setting up and using KVM to create two virtual machines on a Fedora box and an Ubuntu server.
Finally, we covered Linux containers with a focus on the Docker application as our implementation. We showed how to find and pick from existing Docker images. We then launched a complete web server (Apache HTTP Server) in a small-sized container that can be easily reused on other Linux distros.
Building on the material in this chapter and other parts of the book, Appendix B provides in-depth coverage of how to obtain and make use of virtual machine images and containers that were created specially to complement this book. We hope these resources will provide you with hands-on and real-world tools to explore some of the technologies discussed—in a safe virtual environment. We encourage you to download the images and containers, explore them, tear them apart, break them, fix ’em and make ’em better—in true system administrator style!