VMware Cluster Recommendations

Best Practices for Oracle ZFS Storage Appliance and VMware vSphere 5.x: Part 6

by Anderson Souza

This article provides recommendations that apply to VMware vSphere 5.x clusters used with the Oracle ZFS Storage Appliance.


Published July 2013


This article is Part 6 of a seven-part series that provides best practices and recommendations for configuring VMware vSphere 5.x with Oracle ZFS Storage Appliance to reach optimal I/O performance and throughput. The best practices and recommendations highlight configuration and tuning options for Fibre Channel, NFS, and iSCSI protocols.


The series also includes recommendations for the correct design of network infrastructure for VMware cluster and multi-pool configurations, as well as the recommended data layout for virtual machines. In addition, the series demonstrates the use of VMware linked clone technology with Oracle ZFS Storage Appliance.

All the articles in this series can be found here:

Note: For a white paper on this topic, see the Sun NAS Storage Documentation page.

The Oracle ZFS Storage Appliance product line combines industry-leading Oracle integration, management simplicity, and performance with an innovative storage architecture and unparalleled ease of deployment and use. For more information, see the Oracle ZFS Storage Appliance Website and the resources listed in the "See Also" section at the end of this article.

Note: References to Sun ZFS Storage Appliance, Sun ZFS Storage 7000, and ZFS Storage Appliance all refer to the same family of Oracle ZFS Storage Appliances.

Recommendations for VMware vSphere Cluster Options

VMware vSphere 5.x cluster configuration is beyond the scope of this article; however, when working with the Oracle ZFS Storage Appliance, the following options are recommended (a PowerCLI sketch of several of these settings follows the list):

  • Work with the vSphere high availability (HA) and vSphere Distributed Resource Scheduler (DRS) cluster options.
  • At the cluster automation level, use the "fully automated" option, and choose the priority level that best fits your virtual environment.
  • For cluster power management (VMware Distributed Power Management, or DPM), choose the automatic option and select the DPM threshold that best fits your virtualized environment.
  • Enable host monitoring options and admission control.
  • Choose the virtual machine restart priority and host isolation response options for your cluster. The example in this article uses the medium VM restart priority and leaves virtual machines powered on for the host isolation response.
  • Enable the VM monitoring option and choose the sensitivity that best fits your virtualized environment.
  • Enable the "Enhanced vMotion Compatibility" option for your cluster. Choose the right VMware EVC mode for your CPU (AMD or Intel).
  • For the swap file location, select the cluster option that stores swap files in the datastore specified by the host, and use a central datastore for swap files (see "Recommendations for Virtual Machine Data Layout" later in this article).
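
Several of these cluster options can also be applied from PowerCLI with the Set-Cluster cmdlet. The following is a minimal sketch, assuming the cluster name ESXi5 used later in this article and a placeholder vCenter host name; DPM, EVC mode, and VM monitoring are omitted here and can be set from the vSphere client. Verify the parameter values against your own environment before applying them.

# Minimal PowerCLI sketch: apply several of the recommended cluster options.
# The vCenter host name below is a placeholder; the cluster name is from this article.
Add-PSSnapin VMware.VimAutomation.Core
Connect-VIServer -Server "vcenter.example.com"

$clusterSettings = @{
    HAEnabled                 = $true              # vSphere HA
    DrsEnabled                = $true              # vSphere DRS
    DrsAutomationLevel        = "FullyAutomated"   # cluster automation level
    HAAdmissionControlEnabled = $true              # admission control
    HARestartPriority         = "Medium"           # VM restart priority
    HAIsolationResponse       = "DoNothing"        # leave VMs powered on when a host is isolated
    VMSwapfilePolicy          = "InHostDatastore"  # swap files in the datastore specified by the host
    Confirm                   = $false
}

Get-Cluster -Name "ESXi5" | Set-Cluster @clusterSettings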

Using the Datastore Heartbeating Feature

For better HA management and to avoid false positives caused by network problems, VMware vSphere 5.0 added an HA feature called Datastore Heartbeating. A heartbeating datastore can be any datastore shared across the VMware hosts. With this feature, VMware hosts can exchange heartbeats through shared VMFS datastores.

Note: The datastore heartbeating configuration needs to be performed after VMware datastore configuration.

To enable the datastore heartbeating feature for a two-node VMware HA cluster, you need at least two shared datastores. Right-click your VMware cluster and edit its settings; in the following example, the cluster name is ESXi5. Select the Datastore Heartbeating option and choose Select any of the cluster datastores, as shown in Figure 1. (A PowerCLI sketch of the equivalent API call follows the figure.)

Figure 1

Figure 1. Enabling datastore heartbeating for ESXi5 in the VMware vSphere 5.x client
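
The heartbeat datastore policy can also be set through the vSphere API. The following is a minimal PowerCLI sketch, assuming the cluster name ESXi5 from Figure 1 and an already connected vCenter session; the DAS (HA) property name is taken from the vSphere 5.x API and should be verified against your PowerCLI version.

# Minimal sketch: set the HA heartbeat datastore policy to
# "Select any of the cluster datastores" (allFeasibleDs).
$cluster = Get-Cluster -Name "ESXi5"
$spec = New-Object VMware.Vim.ClusterConfigSpecEx
$spec.DasConfig = New-Object VMware.Vim.ClusterDasConfigInfo
$spec.DasConfig.HBDatastoreCandidatePolicy = "allFeasibleDs"
$cluster.ExtensionData.ReconfigureComputeResource_Task($spec, $true)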

Recommendations for Virtual Machine Data Layout

Recommendations for virtual machine data layout as well as best practices for a VMware virtual machine working with the Oracle ZFS Storage Appliance are as follows:

  • Work with VMware virtual machine hardware version 8.
  • To improve storage efficiency and performance, configure your virtual machine with thin-provisioned virtual disks attached to a VMware paravirtual SCSI (PVSCSI) controller.
  • For raw devices and for LUNs larger than 2 TB, use raw device mapping (RDM).
  • When working with ZFS Storage Appliance Provider for Volume Shadow Copy Service Software, use RDM in physical compatibility mode.

    Note: Sun ZFS Storage Appliance Provider for Volume Shadow Copy Service Software (the Volume Shadow Copy Service [VSS] plug-in for the Sun ZFS Storage Appliance) is not supported in a virtualized environment (VMware) using the Fibre Channel or NFS protocols; only iSCSI using the Microsoft iSCSI initiator software is supported.

  • To improve network performance, use a VMXNET3 network adapter.
  • Install VMware Tools. For more information about these tools and how to install them, see Installing and Configuring VMware Tools.
  • When working with the Microsoft Windows platform, ensure that you have the latest service pack as well as all recommended patches installed.
  • Ensure that your virtual machine's partitions are correctly aligned.
  • Work with a central swap datastore for all virtual machines. By default, VMware creates a virtual swap file that is usually equal in size to the amount of memory allocated to each virtual machine. Relocate the virtual machine swap file to a central VMware datastore. (A PowerCLI sketch of this relocation appears after Figure 5, at the end of this list.)

    To configure the central swap datastore, select ESXi5.1 in the VMware vSphere 5.x client. Select the Configuration tab, select Virtual Machine Swapfile Location, and then select Edit. Select the vswap datastore that was previously configured for this purpose, as shown in Figure 2.

    Figure 2

    Figure 2. VMware ESXi5 host swap file configuration

    Right-click the virtual machine whose swap file will be relocated to a different datastore. Select Options, then Swapfile Location, and choose Store in the host's swapfile datastore, as shown in Figure 3.

    Figure 3

    Figure 3. VMware virtual machine swap file configuration

  • As a best practice to achieve better performance for virtualized applications as well as to enable easier management of your virtual environment, work with a multi-pool design with multiple datastore repositories in VMware vSphere 5.x. Figure 4 shows the high-level view of a virtual machine layout with a multi-pool design.

    Figure 4

    Figure 4. Recommended data layout for a VMware virtual machine

    In this approach, the virtual machines are deployed in multiple datastore repositories, each with a different configuration. The example in Figure 4 shows a single virtual machine configured with three different datastores. The first datastore is configured with a 64k database record size and is designed to host virtual machines' operating system disk images. The second datastore is configured with a 32k database record size and is designed to host all the binaries for the virtualized applications. The third datastore is configured with a 64k database record size and is designed as a central swap area for all virtual machines.

    The example in Figure 5 shows a layout for a Microsoft Exchange Server that can be used in production environments. The layout consists of four different VMware datastores. The Exchange Server virtual machine is configured with a 100GB operating system virtual disk, plus eight 800GB RDM LUNs for the Exchange mail databases and eight 150GB RDM LUNs for the mail logs.

    Figure 5

    Figure 5. Data layout for a Microsoft Exchange virtual machine
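
The swap file relocation shown in Figures 2 and 3 can also be scripted. The following is a minimal PowerCLI sketch, assuming the host name ESXi5.1 and the central swap datastore vswap used in this article, plus an example virtual machine named Windows 2008 R2; the vCenter host name is a placeholder. Adjust all names for your environment.

# Minimal sketch: point the host at the central swap datastore (Figure 2),
# then set one virtual machine to use the host's swapfile datastore (Figure 3).
Add-PSSnapin VMware.VimAutomation.Core
Connect-VIServer -Server "vcenter.example.com"

$vswap = Get-Datastore -Name "vswap"
Get-VMHost -Name "ESXi5.1" | Set-VMHost -VMSwapfileDatastore $vswap

$vm = Get-VM -Name "Windows 2008 R2"
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.SwapPlacement = "hostLocal"   # equivalent to "Store in the host's swapfile datastore"
$vm.ExtensionData.ReconfigVM($spec)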

Using a VMware Linked Clone

Linked clone is a technology provided by VMware for cloning virtual machines. It allows multiple virtual machines to share virtual disks with a parent image. Using linked clones improves storage efficiency as well as the performance of cloning operations.

Note: Linked clone technology is available only through PowerShell or PowerCLI scripts, not through the VMware vCenter GUI.

To work with linked clone technology, use the following steps:

  1. Use the linked clone script shown in Listing 1.
  2. Before running the script, create a snapshot of the virtual machine for which you want to create the linked clone or clones (a sketch of this step follows the list).
  3. Edit the following options in Listing 1 to best fit your production environment: the VMware vCenter host name, the name of the virtual machine to be cloned, the number of clones, and the number of concurrent clone operations.
  4. Copy the contents of the script and save it with the extension .ps1; then open PowerCLI and execute the script.
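
Step 2 requires an existing snapshot of the parent virtual machine. The following is a minimal PowerCLI sketch, using the example virtual machine name from Listing 1; the snapshot name and description are placeholders.

New-Snapshot -VM "Windows 2008 R2" -Name "linked_clone_base" -Description "Parent snapshot for linked clones"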

When you run the script, you will be asked to enter the username and password of your VMware vCenter server. After the credentials have been validated, the linked clone operation starts, and you will see a screen similar to the one shown in Figure 6.

Note: This operation is not supported with virtual disks in independent mode or raw device mappings in physical compatibility mode.

$VMHost = "VCenter host name"

Add-PSSnapin VMware.VimAutomation.Core # Add the PowerCLI cmdlets.

# Open the connection to the vCenter Server.
Connect-VIServer -Server $VMHost

# Get the virtual machine that you want to clone.
$VMs = "Windows 2008 R2"
$vm = Get-VM $VMs | Get-View

# Clone name prefix, total number of clones, and number of concurrent clone operations.
$clonePrefix = "linked_clone_"
$numClones = 100
$concurrentClones = 20

# Build the clone specification from the current snapshot of the parent virtual machine.
$cloneFolder = $vm.parent
$cloneSpec = New-Object VMware.Vim.VirtualMachineCloneSpec
$cloneSpec.Snapshot = $vm.Snapshot.CurrentSnapshot
$cloneSpec.Location = New-Object VMware.Vim.VirtualMachineRelocateSpec
$cloneSpec.Location.DiskMoveType = [VMware.Vim.VirtualMachineRelocateDiskMoveOptions]::createNewChildDiskBacking

# This option powers on each clone immediately after it is created.
$cloneSpec.powerOn = $true

# Create the clones in batches of $concurrentClones and wait for each batch to complete.
$i = 1
while ($i -le $numClones) {
    $taskViewArray = @()
    foreach ($j in 1..$concurrentClones) {
        $taskViewArray += $vm.CloneVM_Task( $cloneFolder, $clonePrefix+$i, $cloneSpec )
        $i++
    }
    $taskArray = $taskViewArray | Get-VIObjectByVIView
    Wait-Task $taskArray
}

Listing 1. PowerCLI linked clone script

Figure 6 shows the PowerCLI screen during execution of the linked clone script.

Figure 6

Figure 6. VMware linked clone script execution
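
After the script finishes, you can confirm the result from the same PowerCLI session and, if the clones were created only for testing, remove them again. The following is a minimal sketch that reuses the clone name prefix from Listing 1.

# List the linked clones created by the script.
Get-VM -Name "linked_clone_*" | Select-Object Name, PowerState

# Optional cleanup: power off and permanently delete the test clones.
Get-VM -Name "linked_clone_*" | Stop-VM -Confirm:$false
Get-VM -Name "linked_clone_*" | Remove-VM -DeletePermanently -Confirm:$false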

See Also

Refer to the following websites for further information on testing results for Oracle ZFS Storage Appliance:

Also see the following documentation and websites:

About the Author

Anderson Souza is a virtualization senior software engineer in Oracle's Application Integration Engineering group. He joined Oracle in 2012, bringing more than 14 years of technology industry, systems engineering, and virtualization expertise. Anderson has a Bachelor of Science in Computer Networking, a master's degree in Telecommunication Systems/Network Engineering, and also an MBA with a concentration in project management.

Revision 1.0, 07/09/2013
