Hypervisors have become faster and more innovative in the last five years. In this comprehensive guide, we explore types of hypervisors and their different uses.
A hypervisor is a software tool that abstracts a computer's physical resources and presents them as a virtualized hardware layer. On top of that layer, a hypervisor can create, run, and manage multiple isolated virtual machines (VMs).
In a hypervisor environment, the physical hardware is called the host, while the virtual machines that use the virtualized hardware resources are called guests.
Hypervisors underpin cloud computing by subdividing large, powerful physical machines into smaller units, which are easier to sell, maintain, and scale should the need arise. They are fundamental to cloud vendors' infrastructure, yet they generally operate unnoticed by end users. Ultimately, the user gets an easily scalable and reliable virtual machine, while the cloud vendor maximizes the utilization of its hardware resources, allowing it to price its services competitively.
A Brief History of Hypervisors
The first hypervisors offering full virtualization were IBM's experimental SIMMON tool and the CP-40 system, both from the late 1960s. SIMMON was created and used in the IBM Product Test Laboratory for research, while the IBM Cambridge Scientific Center developed CP-40, which was quickly reimplemented and released as CP-67. These were highly specialized tools built for research purposes.
Until the introduction of hardware virtualization support in x86 processors in 2005, most hypervisors ran on large, expensive mainframe and UNIX systems, and the original software-only virtualization tools for x86 computers were complex and slow. Hardware virtualization support in x86 processors, the rapid increase in computing power, and the arrival of multi-core CPUs together led to the broad adoption of hypervisors in the mid-2000s.
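One practical consequence of that 2005 change is that hardware virtualization support is visible as CPU feature flags. A minimal sketch of checking for it on a Linux host (the helper name is ours; it assumes the /proc/cpuinfo format, where Intel VT-x appears as "vmx" and AMD-V as "svm"):

```python
# Sketch (assumes Linux): hardware virtualization support shows up as
# CPU feature flags in /proc/cpuinfo -- "vmx" for Intel VT-x, "svm" for AMD-V.
from typing import Optional

def detect_hw_virt(cpuinfo_text: str) -> Optional[str]:
    """Return 'vmx' (Intel VT-x), 'svm' (AMD-V), or None if neither flag is set."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print(detect_hw_virt(f.read()) or "no hardware virtualization flags")
    except FileNotFoundError:
        print("/proc/cpuinfo not available (non-Linux system)")
```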
Types of Hypervisors
There are two distinct types of hypervisors used for virtualization: type 1 and type 2.
Type 1 (bare-metal) hypervisors run directly on the host machine's hardware, eliminating the need for an underlying operating system (OS). They are usually used in data centers, on high-performance server hardware designed to run many VMs, and they offer better performance precisely because they do not depend on a host OS.
Type 2 hypervisors are installed and run on a conventional operating system like any other computer application. They are usually deployed on desktop computers and workstations to run a different OS or to run a separate instance of an OS. Popular type 2 hypervisors are VMware Workstation, Oracle VirtualBox, and Parallels Desktop.
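As a sketch of how a type 2 hypervisor can be scripted, the snippet below builds and runs commands for VBoxManage, the command-line interface that ships with Oracle VirtualBox; the VM name, memory size, and CPU count are made-up example values:

```python
# Sketch: driving a type 2 hypervisor (Oracle VirtualBox) from a script via
# its VBoxManage CLI. Running it for real requires VirtualBox to be installed.
import subprocess
from typing import List

def vbox_create_vm_cmds(name: str, memory_mb: int, cpus: int) -> List[List[str]]:
    """Build the VBoxManage invocations to create and start a headless VM."""
    return [
        ["VBoxManage", "createvm", "--name", name, "--register"],
        ["VBoxManage", "modifyvm", name,
         "--memory", str(memory_mb), "--cpus", str(cpus)],
        ["VBoxManage", "startvm", name, "--type", "headless"],
    ]

if __name__ == "__main__":
    for cmd in vbox_create_vm_cmds("demo-vm", 2048, 2):
        subprocess.run(cmd, check=True)  # requires VirtualBox on the host
```

Building the command lists separately from running them keeps the provisioning logic testable even on machines without VirtualBox installed.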
Gerald J. Popek and Robert P. Goldberg established this classification of hypervisors in their 1974 article, "Formal Requirements for Virtualizable Third Generation Architectures."
Benefits of Hypervisors
Some of the key benefits of using a hypervisor and hosting multiple virtual machines are:

- Better hardware utilization: one powerful physical machine can serve many isolated workloads.
- Isolation: each VM runs in its own sandboxed environment, so a crash or compromise in one guest does not directly affect the others.
- Easier scaling and maintenance: VMs can be created, resized, and moved without touching the physical hardware.

The main downside is overhead: even the best hypervisors cause a 1% to 5% reduction in performance compared to running directly on the hardware.
Popular Hypervisors

To provide some context around the various uses of the two types of hypervisors, let's look at the most popular hypervisors in use today.
Hyper-V, Microsoft's hypervisor designed for Windows systems, is considered type 1 according to Microsoft. It runs on Windows Server Core, but Hyper-V inserts itself below the operating system and runs directly on the physical hardware.
Formerly known as XenServer, Citrix Hypervisor is a commercial type 1 hypervisor.
Open Source Hypervisors

The most widely used open source hypervisors are KVM (Kernel-based Virtual Machine), a type 1 hypervisor built into the Linux kernel, and the Xen Project hypervisor, on which Citrix Hypervisor is based. Oracle VirtualBox is a popular open source type 2 option.
Containers vs Hypervisors
While they are similar in some ways, containers and hypervisors are not an either/or choice. Indeed, hypervisors and containers are typically used together.
Hypervisors let you divide a single computer's hardware resources between multiple VMs. Containers let you split a single computer into segregated logical namespaces. Containers are about isolation, not virtualization. From an application developer's point of view they can look like the same thing, but they work at different layers.
Popular containerization tools, like Docker, can create and run multiple containers on the host's Linux kernel. Every container gets its own network stack and its own process space, along with all of the dependencies required to run the application. Because containers do not include a full operating system, they are very compact and start up in milliseconds. Containers provide an excellent platform for building and sharing packaged, ready-to-run applications and microservices.
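On a Linux host you can observe this namespace isolation directly: each process's namespace memberships are exposed as symlinks under /proc/&lt;pid&gt;/ns, and processes inside the same container share the same namespace IDs. A minimal sketch (the helper name is ours; it returns an empty mapping on non-Linux systems):

```python
# Sketch: containers isolate processes with Linux kernel namespaces.
# A process's namespace memberships are symlinks under /proc/<pid>/ns;
# two processes in the same container share the same namespace IDs.
import os

def namespaces(pid: str = "self") -> dict:
    """Map namespace type (pid, net, mnt, ...) to its kernel identifier."""
    ns_dir = f"/proc/{pid}/ns"
    if not os.path.isdir(ns_dir):  # non-Linux: no procfs namespace entries
        return {}
    result = {}
    for name in os.listdir(ns_dir):
        try:
            result[name] = os.readlink(os.path.join(ns_dir, name))
        except OSError:
            pass  # entry not readable; skip it
    return result

if __name__ == "__main__":
    for ns_type, ns_id in sorted(namespaces().items()):
        print(f"{ns_type:10s} {ns_id}")
```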
Containers run closer to the application layer, while hypervisors run closer to the hardware layer. In most cases, hypervisors run the VMs, and containers run on those VMs.
Hypervisor Security

Hypervisors provide a secure environment for virtual machines, isolated from the rest of the system. In a modern cloud environment, virtualization and hypervisors are so common that cloud providers take care of hypervisor security and VM isolation.
The most significant security risk to hypervisors comes from critical vulnerabilities in the design of modern Intel, ARM, and IBM Power CPUs, discovered and made public in 2018 under the names Meltdown and Spectre. The root cause of these vulnerabilities is a design flaw in the CPUs' speculative, out-of-order execution mechanism, and because it is baked into the silicon, there is no easy way around the defect.
Hypervisors were affected in that while different VMs on the same fully virtualized hypervisor could not directly access each other's data, different users on the same guest instance could. The only way to mitigate this risk was to change operating system kernel code to isolate kernel memory more strictly from user-mode processes. Software vendors quickly responded with updates to their operating systems and hypervisors, but at a performance cost of between 5% and 30%, depending on the use case. In October 2018, Intel reported that it had resolved these issues in its latest CPU range.
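Patched Linux kernels report their mitigation status for Meltdown, Spectre, and related flaws under /sys/devices/system/cpu/vulnerabilities. A small sketch for inspecting it (the helper name is ours; it returns an empty mapping where that interface is absent):

```python
# Sketch (assumes Linux): kernels patched after Meltdown/Spectre expose
# per-vulnerability mitigation status as small text files in sysfs.
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def mitigation_status() -> dict:
    """Map each reported vulnerability (meltdown, spectre_v2, ...) to its status line."""
    if not os.path.isdir(VULN_DIR):  # older kernel or non-Linux system
        return {}
    status = {}
    for name in os.listdir(VULN_DIR):
        try:
            with open(os.path.join(VULN_DIR, name)) as f:
                status[name] = f.read().strip()
        except OSError:
            pass  # entry not readable; skip it
    return status

if __name__ == "__main__":
    for vuln, state in sorted(mitigation_status().items()):
        print(f"{vuln:20s} {state}")
```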
The Future of Hypervisors

Hypervisors are a fundamental cloud computing technology. Almost every cloud service in use today relies on them. From a hardware standpoint, the ever-increasing number of CPU cores and their performance boosts stand to offer even more potential.
Even inexpensive desktops and laptops feature multi-core CPUs, so it‚Äôs easy to see why hypervisors have become more popular. In addition to faster CPUs with more physical cores and multi-threading support, the availability and affordability of modern high-speed SSD (Solid State Drive) storage and speedier PCIe lanes also play an essential role in hypervisor performance.
Hypervisors will likely find their way to even more applications and platforms as technology expands. The demand for cloud services remains robust, the hardware is evolving to offer even more opportunities for cloud deployment, and software engineers worldwide are working tirelessly to keep up with demand and harness even more power and efficiency from cloud-based systems. The sooner we understand the need and benefits of hypervisors, the better off we will all be.