
Resource Management as an Enabling Technology for Virtualization

by Detlef Drewanz
Published December 2012 (reprinted from eStep blog)

This article, which is Part 4 in a series of virtualization articles, discusses IT resource management as a technology that enables virtualization.

In this article and the next, we will cover some enabling technologies for virtualization. Here, we discuss IT resource management as an enabling technology for virtualization.

In the first article of this series, we used the following definition of virtualization from Wikipedia:

"Virtualization, in computing, is the creation of a virtual (rather than actual) version of something, such as a hardware platform, operating system, storage device or network resources."

Resources are the foundational elements that get virtualized by the different virtualization technologies. Resources include the following:

  • Hardware such as the CPU, memory, or devices
  • The network
  • The operating system
  • The desktop
  • A general software layer

Why Is Resource Management Important?

Resource management limits access to shared resources, but it also monitors resource consumption and collects accounting information.

The management of resources is important because many consumers, such as virtual machines (VMs), zones, containers, or virtual desktops, request resources. Consolidating different workloads on one system often entails combining workloads that have different service level agreements and different needs for throughput, response time, and availability.

But resources are always limited, and they are shared among many virtualization environments on one IT system. It is therefore important to restrict access to specific shared resources, to shield resources from use by certain workloads, or at least to limit the shared-resource consumption of workloads. By doing that, we can guarantee a service level for each virtualized environment or influence its performance.

Without resource management, all workloads would be handled equally, based on their resource requests. The result could be that one VM consumes so much memory at runtime that other VMs on the same system are blocked and important memory requests can no longer be served, because no memory is available.

Another example of the importance of resource management is the ability to determine how many resources (for example, CPU) should be shown to or seen by a virtualized environment. This could be important for license or software-behavior reasons.

So the goals of resource management related to zones, containers, or VMs are as follows:

  • Prevent entities from consuming unlimited resources
  • Be able to change a priority, based on external events
  • Balance resource guarantees against the goal of maximizing system utilization

The mechanisms discussed in the remainder of this article are used to achieve these goals.

Constraints


By using constraints, we set bounds on the consumption of specific resources. By doing that, we can control ill-behaved environments that would otherwise compromise the performance of other environments or the whole system or that might affect availability through unregulated resource requests.

Typically, constraints are enforced through resource controls, which are set by the system administrator. The following are examples of resource controls:

  • Controlling the number of used semaphores
  • Controlling the number of open files
  • Controlling the amount of virtual memory used
  • Controlling the number of processes
  • Controlling the amount of network bandwidth used

There are different ways a system can react if a specific bound has been reached:

  • Allowing the request, but letting the requester know that the bound has been reached
  • Capping resource delivery on the defined bound
  • Rejecting the whole resource request with an error message to the application
  • Generating an action on the system to free up resources and provide the requester with the needed resources
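As a toy illustration, these four reactions can be modeled as enforcement policies in a small resource-control sketch (the class and policy names here are hypothetical, not from any Oracle API):

```python
from enum import Enum

class Policy(Enum):
    NOTIFY = "notify"    # grant the request, but warn that the bound was reached
    CAP = "cap"          # grant only up to the defined bound
    DENY = "deny"        # reject the whole request with an error
    RECLAIM = "reclaim"  # free resources elsewhere, then retry the request

class ResourceControl:
    """Toy model of a resource control with a fixed bound."""

    def __init__(self, bound, policy, reclaim=None):
        self.bound = bound
        self.policy = policy
        self.used = 0
        self.reclaim = reclaim  # optional callback that frees resources

    def request(self, amount):
        """Return (granted_amount, message) for a resource request."""
        if self.used + amount <= self.bound:
            self.used += amount
            return amount, "ok"
        if self.policy is Policy.NOTIFY:
            self.used += amount
            return amount, "warning: bound exceeded"
        if self.policy is Policy.CAP:
            granted = self.bound - self.used
            self.used = self.bound
            return granted, "capped at bound"
        if self.policy is Policy.DENY:
            return 0, "error: request rejected"
        # Policy.RECLAIM: try to free resources on the system, then retry once
        freed = self.reclaim() if self.reclaim else 0
        self.used = max(0, self.used - freed)
        if self.used + amount <= self.bound:
            self.used += amount
            return amount, "ok after reclaim"
        return 0, "error: reclaim insufficient"
```

A real implementation distinguishes more cases; Oracle Solaris resource controls, for example, associate actions such as deny or signal with threshold values.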

Depending on the implementation, applications or virtualized environments must be modified to know about resource controls and constraints. But the use of constraints is very flexible and enables the boundaries to be changed during runtime. Also, the use of constraints enables a workload to use free resources that have been assigned to but are not needed by a different workload.

Example 1: Constraints are important and useful for all kinds of shared parallel access to resources. Good examples include processes, projects, and Oracle Solaris Zones, which all use and share one kernel. Their resource consumption can be limited by the many resource controls that have been built into Oracle Solaris.

For example, the resource control zone.max-processes in Oracle Solaris 11 limits the number of processes a zone can run. It is important to limit processes because the process table of the OS kernel is large but finite. This resource control limits a zone's share of the process table (and thus the number of processes the zone can run) and prevents an ill-behaved zone from, for example, creating processes without bound. With this resource control enabled, the kernel will, at some point, refuse to let the zone create a new process.
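The effect of such a limit can be sketched in a few lines (a toy model; the real enforcement happens inside the kernel when the zone's process count is checked at process creation):

```python
class Zone:
    """Toy model of a zone whose process count is bounded,
    analogous to the zone.max-processes resource control."""

    def __init__(self, name, max_processes):
        self.name = name
        self.max_processes = max_processes
        self.processes = 0

    def fork(self):
        # The kernel refuses the new process once the bound is reached,
        # so a runaway fork loop cannot fill the shared process table.
        if self.processes >= self.max_processes:
            raise OSError("EAGAIN: zone.max-processes limit reached")
        self.processes += 1
```

On an actual Oracle Solaris 11 system, the limit would be set administratively with zonecfg (for example, `set max-processes=1000` for the zone), not in application code.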

Example 2: Another common shared resource in systems is the network, which connects workloads to the outside world. If all VMs share one network cable, the bandwidth consumption of each VM needs to be limited. We will cover this example in more detail in the next article.

Scheduling


With scheduling, we divide a resource into specific intervals and allocate them based on a predictable algorithm. If an allocation is not needed, the resource interval can be used by others.

An example of a scheduled resource is CPU time. With scheduling, the available CPU time is divided into allocation units that are used by applications. Scheduling-based resource management enables full utilization of a configuration. In a critical situation or an over-committed situation, the scheduling algorithm guarantees all applications have controlled access to the resource. The scheduling algorithm defines what "controlled access" means and under what situations the allocation units can be changed or assigned to an application, for example, based on the predefined importance of an application.

Example 1: Scheduling is achieved by using the fair share scheduler (FSS) in Oracle Solaris together with Oracle Solaris Zones. The FSS allows the allocation of CPU resources. A share can be assigned to each zone, and the shares are used to manage the CPU resources in the event that the zones compete for CPU time.

For example, as long as CPU utilization is below 100 percent, no management is done, since free CPU capacity is still available. However, when utilization reaches 100 percent, the FSS is activated and modifies the priority of the participating processes so that the CPU capacity assigned to a zone corresponds to its defined share. A zone's defined share is calculated as the share value of that active zone divided by the sum of the shares of all active zones.
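The share arithmetic above can be captured in a short sketch (a simplification; the real FSS adjusts process priorities dynamically rather than computing static fractions, and the function name here is hypothetical):

```python
def fss_entitlements(shares, utilization):
    """Per-zone CPU entitlement under a fair share scheduler.

    shares: dict mapping each active zone name to its assigned share value
    utilization: system CPU utilization in [0.0, 1.0]
    Returns a dict mapping zone -> fraction of CPU it is entitled to,
    or None when the system is not saturated (no management needed).
    """
    if utilization < 1.0:
        return None  # free capacity exists, so the FSS does not intervene
    total = sum(shares.values())
    # Defined share = zone's share value / sum of shares of all active zones
    return {zone: s / total for zone, s in shares.items()}
```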

If the system is fully utilized, the FSS guarantees the response time of workloads based on CPU shares.

Example 2: Another example is the creation and handling of virtual CPUs (vCPUs) in Oracle VM Server for x86, if we do not directly tie the vCPUs to a physical CPU. In such a case, a vCPU is managed (scheduled) by a local run queue that "divides" a physical CPU into multiple vCPUs. This work is done by the hypervisor.

The queue is sorted by vCPU priority. In the queue, every vCPU gets its fair share of CPU resources. The priority that a vCPU would get can be managed by manipulating a relative-weight parameter and a cap parameter.

The relative-weight parameter determines the relative amount of CPU cycles that a domain receives. A vCPU with a weight of 64 would receive twice as many CPU cycles as a vCPU with a weight of 32.

The cap parameter defines, as a percentage, the maximum amount of CPU cycles that a domain will receive. This is an absolute value. Setting the value to 100 means that the vCPU may consume 100 percent of the available cycles on a physical CPU. Setting it to 50 means that the vCPU can never consume more than half of the available cycles. Thus, this example is a combination of scheduling and constraints (capping).
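A one-pass sketch of this weight-plus-cap scheme follows (the function is hypothetical; a real credit scheduler also redistributes cycles left unused after capping and operates on per-CPU run queues):

```python
def credit_allocation(vcpus, total_cycles):
    """Distribute CPU cycles to vCPUs by relative weight, then apply caps.

    vcpus: dict mapping vCPU name -> (weight, cap_percent or None)
    total_cycles: cycles available on the physical CPU
    Returns a dict mapping vCPU name -> allocated cycles.
    """
    total_weight = sum(w for w, _ in vcpus.values())
    alloc = {}
    for name, (weight, cap) in vcpus.items():
        # Relative weight: proportional share of the available cycles
        cycles = total_cycles * weight / total_weight
        if cap is not None:
            # Cap: an absolute ceiling, as a percentage of one physical CPU
            cycles = min(cycles, total_cycles * cap / 100)
        alloc[name] = cycles
    return alloc
```

With weights 64 and 32 and no caps, the first vCPU receives twice the cycles of the second, matching the description above.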

Partitioning

Partitioning is used to assign a subset of resources to a workload, which guarantees that resources are always available to the workload. However, these resources cannot also be used by other workloads, because they are assigned and guaranteed to one specific workload. Thus, configurations that use partitioning can avoid overcommitment of resources. However, avoiding overcommitment reduces the ability to achieve high utilizations, because a reserved resource is not available for use by another workload when the assigned workload is idle. Typical examples of partitioning are the assignment of physical CPUs, parts of physical memory, or parts of the I/O system to workloads or virtualized environments.

Example 1: Let's discuss again the way Oracle VM Server for x86 handles CPUs. If we use partitioning to pin vCPUs to a physical CPU and we assign the vCPUs to domains, we have a partitioning of the CPU. When vCPUs are assigned in a fixed way to domains, fixed performance is always guaranteed. However, the vCPUs cannot be used by other domains, even if they are idle.
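The trade-off can be sketched as a toy pinning model: a domain's CPUs are guaranteed to it, but another domain cannot borrow them even when they sit idle (class and names are illustrative, not from any hypervisor API):

```python
class PinnedCpus:
    """Toy model of CPU partitioning: each physical CPU is pinned to one domain."""

    def __init__(self):
        self.owner = {}  # physical CPU id -> owning domain name

    def pin(self, cpu, domain):
        # A partitioned resource has exactly one owner; double assignment fails.
        if cpu in self.owner:
            raise ValueError(f"CPU {cpu} already assigned to {self.owner[cpu]}")
        self.owner[cpu] = domain

    def usable_by(self, domain):
        # A domain can only ever run on its own CPUs; another domain's
        # pinned CPUs stay unusable even while that domain is idle.
        return [c for c, d in self.owner.items() if d == domain]
```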

Example 2: Partitioning with Oracle VM Server for SPARC is used for several resource types. CPU and memory are always assigned directly to logical domains. There are also options for assigning PCI slots and the complete PCI infrastructure to certain domains. The advantages of this, if direct I/O is used, are high-performance domains with close to zero overhead and guaranteed performance.


Constraints, scheduling, and partitioning are basic mechanisms of resource management that guarantee various virtualization technologies access to limited and shared resources. These mechanisms are used for different resources based on the requirements of various workloads and virtualization technologies.

Partitioning is the most prevalent way to control resources in hypervisor-based virtualization environments. In this case, the hypervisor controls resources such as CPU, memory, privilege checks, and hardware interrupts.

To avoid overcommitment of the CPU resources, the CPU resources are typically partitioned and the physical CPUs are assigned as vCPUs to virtual environments. In some cases, a physical CPU is divided by a scheduler into multiple vCPUs, but this generates virtualization overhead and can lead to an overcommitment on CPU resources.

Memory is typically controlled by the memory management system of the hypervisor, which does the memory allocation and assignment to guests and protects memory based on rules. Some hypervisors do memory management by a direct physical assignment (partitioning) of memory areas to guests. In such cases, an overcommitment of memory resources is not possible. However, if a hypervisor does memory management via a virtual memory management system, an overcommitment of memory resources to guests is possible, and for performance reasons, this should be avoided.

About the Author

Detlef is a Principal Sales Consultant located in Potsdam, Germany. He acts as a server and Oracle Solaris specialist on Oracle's Northern Europe Server Architects team. He joined Sun Microsystems in 1998 and is now part of Oracle. Prior to that, Detlef worked at Hitachi Internetworking Frankfurt in network support and as a member of the scientific staff in the Department of Computer Science at the University of Rostock. Detlef holds a master's degree in computer science.

Revision 1.0, 12/17/2012