Recommendations for Fibre Channel Protocol

Best Practices for Oracle ZFS Storage Appliance and VMware vSphere 5.x: Part 4

by Anderson Souza

This article describes how to configure the Fibre Channel protocol for VMware vSphere 5.x with Oracle ZFS Storage Appliance.


Published July 2013


This article is Part 4 of a seven-part series that provides best practices and recommendations for configuring VMware vSphere 5.x with Oracle ZFS Storage Appliance to reach optimal I/O performance and throughput. The best practices and recommendations highlight configuration and tuning options for Fibre Channel, NFS, and iSCSI protocols.


The series also includes recommendations for the correct design of network infrastructure for VMware cluster and multi-pool configurations, as well as the recommended data layout for virtual machines. In addition, the series demonstrates the use of VMware linked clone technology with Oracle ZFS Storage Appliance.

All the articles in this series can be found here:

Note: For a white paper on this topic, see the Sun NAS Storage Documentation page.

The Oracle ZFS Storage Appliance product line combines industry-leading Oracle integration, management simplicity, and performance with an innovative storage architecture and unparalleled ease of deployment and use. For more information, see the Oracle ZFS Storage Appliance Website and the resources listed in the "See Also" section at the end of this article.

Note: References to Sun ZFS Storage Appliance, Sun ZFS Storage 7000, and ZFS Storage Appliance all refer to the same family of Oracle ZFS Storage Appliances.

Best Practices and Recommendations

Follow these best practices and recommendations when working with Fibre Channel protocol and VMware vSphere 5.x.

  • Update the Fibre Channel host bus adapter (HBA) firmware and drivers to their latest versions, and ensure that the HBA is on the VMware Hardware Compatibility List (HCL); a quick driver check is sketched after Figure 1.
  • Ensure you have only one VMware virtual machine file system (VMFS) volume per LUN.
  • For raw devices, use raw device mapping (RDM).
  • Work with at least two Fibre Channel switches and one dual-port 8 Gb/sec HBA per Oracle ZFS Storage Appliance controller and VMware ESXi 5.x host.
  • Ensure that your storage area network (SAN) has been designed for high availability and load balancing, without single points of failure. See Figure 1.

Figure 1. Oracle ZFS Storage Appliance and VMware vSphere 5.x Fibre Channel environment
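
As a quick way to see which HBAs and drivers a host is currently running before comparing them against the VMware HCL, you can list the storage adapters and installed driver packages from the ESXi shell. This is a minimal sketch; the qla/lpfc package names are typical for QLogic and Emulex HBAs and may differ in your environment.

# esxcli storage core adapter list
# esxcli software vib list | egrep -i "qla|lpfc"

Compare the driver versions reported here with those listed for your HBA model on the VMware HCL.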

Changing the Default Storage Array Type, Path Selection Policy, and Round-Robin I/O Operation Limit

When working with the Fibre Channel protocol on VMware vSphere 5.x and Oracle ZFS Storage Appliance, change the default storage array type, the path selection policy, and the round-robin I/O operation limit before putting the servers into production. Follow the steps shown in the next several code examples to perform these changes.

First, identify all of the Oracle ZFS Storage Appliance disks that will be used by your virtualized servers, using the ESXi command shown in Listing 1.

# esxcli storage nmp device list | egrep -i "SUN Fibre Channel Disk"

   Device Display Name: SUN Fibre Channel Disk (naa.600144f0c36f708b0000509aa8780005)
   Device Display Name: SUN Fibre Channel Disk (naa.600144f0c36f708b0000509aa94f000b)
   Device Display Name: SUN Fibre Channel Disk (naa.600144f0c36f708b0000509aa8ff0009)
   Device Display Name: SUN Fibre Channel Disk (naa.600144f0c36f708b0000509aab40000d)
   Device Display Name: SUN Fibre Channel Disk (naa.600144f0c36f708b0000509aa8d70008)
   Device Display Name: SUN Fibre Channel Disk (naa.600144f0c36f708b0000509aa8930006)
   Device Display Name: SUN Fibre Channel Disk (naa.600144f0c36f708b0000509aa8b50007)
   Device Display Name: SUN Fibre Channel Disk (naa.600144f0c36f708b0000509aaa77000c)
   Device Display Name: SUN Fibre Channel Disk (naa.600144f0c36f708b0000509aa92a000a)

Listing 1

Set VMW_PSP_RR as the default path selection policy for the VMW_SATP_ALUA storage array type, as shown in Listing 2. Note that the device listing still reports VMW_PSP_MRU as the path selection policy at this point; it is changed to VMW_PSP_RR for each device in the steps that follow.

# esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_ALUA
# esxcli storage nmp device list
naa.600144f0c36f708b0000509aa92a000a
   Device Display Name: SUN Fibre Channel Disk (naa.600144f0c36f708b0000509aa92a000a)
   Storage Array Type: VMW_SATP_ALUA
   Storage Array Type Device Config: {implicit_support=on;explicit_support=off; explicit_allow=on;alua_followover=on;{TPG_id=0,TPG_state=AO}}
   Path Selection Policy: VMW_PSP_MRU
   Path Selection Policy Device Config: Current Path=vmhba7:C0:T0:L6
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba7:C0:T0:L6

Listing 2
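
Optionally, you can confirm that VMW_PSP_RR is now the default path selection policy associated with the VMW_SATP_ALUA storage array type. This verification step is not part of the original listing; the default PSP shown for VMW_SATP_ALUA should now be VMW_PSP_RR.

# esxcli storage nmp satp list | grep -i alua

This default applies to devices claimed after the change; devices already claimed with VMW_PSP_MRU are updated explicitly in the following steps.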

The example in Listing 3 shows a command that captures only the Oracle ZFS Storage Appliance Fibre Channel disk identifiers; these identifiers are used in the path selection policy changes that follow. If needed, adjust the command for your particular environment.

# esxcli storage nmp device list | egrep -i "SUN Fibre Channel Disk" | awk '{ print $8 }' | cut -c 2-37

naa.600144f0c36f708b0000509aa8780005
naa.600144f0c36f708b0000509aa94f000b
naa.600144f0c36f708b0000509aa8ff0009
naa.600144f0c36f708b0000509aab40000d
naa.600144f0c36f708b0000509aa8d70008
naa.600144f0c36f708b0000509aa8930006
naa.600144f0c36f708b0000509aa8b50007
naa.600144f0c36f708b0000509aaa77000c
naa.600144f0c36f708b0000509aa92a000a

Listing 3
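
The awk field position and cut offsets in Listing 3 depend on the exact display-name format shown above. If your output differs, a simpler alternative (a sketch that assumes all of your appliance LUNs share the naa.600144f0 prefix seen in these listings) is to match the device identifier lines directly:

# esxcli storage nmp device list | grep "^naa.600144f0"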

Before making the change, verify that the devices are not already using the round-robin path selection policy, as shown in Listing 4.

# for a in `esxcli storage nmp device list | egrep -i "SUN Fibre Channel Disk" | awk '{ print $8 }' | cut -c 2-37`
> do
> esxcli storage nmp psp roundrobin deviceconfig get -d $a
> done

Device naa.600144f0c36f708b0000509aa8780005 Does not use the Round Robin path selection policy.
Device naa.600144f0c36f708b0000509aa94f000b Does not use the Round Robin path selection policy.
Device naa.600144f0c36f708b0000509aa8ff0009 Does not use the Round Robin path selection policy.
Device naa.600144f0c36f708b0000509aab40000d Does not use the Round Robin path selection policy.
Device naa.600144f0c36f708b0000509aa8d70008 Does not use the Round Robin path selection policy.
Device naa.600144f0c36f708b0000509aa8930006 Does not use the Round Robin path selection policy.
Device naa.600144f0c36f708b0000509aa8b50007 Does not use the Round Robin path selection policy.
Device naa.600144f0c36f708b0000509aaa77000c Does not use the Round Robin path selection policy.
Device naa.600144f0c36f708b0000509aa92a000a Does not use the Round Robin path selection policy.

Listing 4

Run the following commands to change the path selection policy from VMW_PSP_MRU to VMW_PSP_RR:

~ # for a in `esxcli storage nmp device list | egrep -i "SUN Fibre Channel Disk" | awk '{ print $8 }' | cut -c 2-37`
> do
> esxcli storage nmp device set -d $a --psp=VMW_PSP_RR
> done

Run the command shown in Listing 5 to verify that the new path selection policy has been applied:

~ # esxcli storage nmp device list
naa.600144f0c36f708b0000509aa92a000a
   Device Display Name: SUN Fibre Channel Disk (naa.600144f0c36f708b0000509aa92a000a)
   Storage Array Type: VMW_SATP_ALUA
   Storage Array Type Device Config: {implicit_support=on;explicit_support=off; explicit_allow=on;alua_followover=on;{TPG_id=0,TPG_state=AO}}
   Path Selection Policy: VMW_PSP_RR
   Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=1: NumIOsPending=0,numBytesPending=0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba6:C0:T0:L6, vmhba7:C0:T0:L6

Listing 5

Change the I/O operation limit to 1 and set the round-robin path-switching type to iops for all Oracle ZFS Storage Appliance Fibre Channel disks. First, list the current device configuration:

~ # esxcli storage nmp psp roundrobin deviceconfig get -d naa.600144f0c36f708b0000509aa92a000a
   Byte Limit: 10485760
   Device: naa.600144f0c36f708b0000509aa92a000a
   IOOperation Limit: 1000
   Limit Type: Default
   Use Active Unoptimized Paths: false

Then perform the configuration:

# for a in `esxcli storage nmp device list | egrep -i "SUN Fibre Channel Disk" | awk '{ print $8 }' | cut -c 2-37`
> do
> esxcli storage nmp psp roundrobin deviceconfig set -d $a -I 1 -t iops
> done

Run the commands shown in Listing 6 to verify that the new I/O operation limit and round-robin path-switching type have been applied:

# for a in `esxcli storage nmp device list | egrep -i "SUN Fibre Channel Disk" | awk '{ print $8 }' | cut -c 2-37`
> do
> esxcli storage nmp psp roundrobin deviceconfig get -d $a
> done

Device: naa.600144f0c36f708b0000509aa92a000a
IOOperation Limit: 1
Limit Type: Iops
Use Active Unoptimized Paths: false

Listing 6

To check the same information in the VMware vSphere 5.x client, go to the Configuration tab, select Storage Adapters, and click the HBA that is attached to your Oracle ZFS Storage Appliance. Then right-click the disk whose configuration you want to validate and select Manage Paths, as shown in Figure 2. Figure 3 shows the result.


Figure 2. Managing VMware LUN paths shown in VMware vSphere 5.x client


Figure 3. VMware path selection and storage array type overview shown in VMware vSphere 5.x client
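
The same path information is also available from the ESXi shell. The following sketch uses one of the device identifiers from the earlier listings; substitute your own device ID.

# esxcli storage nmp path list -d naa.600144f0c36f708b0000509aa92a000a

The output should list each path to the device (for example, vmhba6:C0:T0:L6 and vmhba7:C0:T0:L6) along with its group state, matching what the Manage Paths dialog displays.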

Changing Queue Depth—QLogic and Emulex HBAs

As a best practice for VMware vSphere 5.x and Oracle ZFS Storage Appliance, perform the following steps to adjust the queue depth option for all HBAs attached to the system.

  1. Identify which HBA module is currently loaded on the VMware hypervisor using the following commands.

    For QLogic HBAs, run:

    # esxcli system module list | grep qla*
    qla2xxx                             true        true 
    

    For Emulex HBAs, run:

    # esxcli system module list | grep lpfc*
    

    Note: The following example uses a QLogic HBA (module qla2xxx).

  2. Use the following commands to set a new queue depth value.

    For QLogic HBAs, run:

    # esxcli system module parameters set -p ql2xmaxqdepth=64 -m qla2xxx
    

    For Emulex HBAs, run:

    # esxcli system module parameters set -p lpfc0_lun_queue_depth=64 -m lpfc820
    
  3. Reboot your host and run the following command to confirm that the new queue depth value has been applied; additional checks are sketched after these steps.

    # esxcli system module parameters list -m qla2xxx
    

    The following is the output for QLogic HBAs:

    Name              Type  Value  Description 
    ----------------  ----  -----  ------------------------------------------
    ql2xmaxqdepth     int   64     Maximum queue depth to report for target devices.
    
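If you adjusted the Emulex module instead, the equivalent verification would use the module and parameter names from step 2; the following is a sketch:

# esxcli system module parameters list -m lpfc820 | grep lun_queue_depth

You can also confirm the queue depth in effect at runtime with esxtop: press u for the disk device view and check the DQLEN column for the Oracle ZFS Storage Appliance LUNs.
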

See Also

Refer to the following websites for further information on testing results for Oracle ZFS Storage Appliance:

Also see the following documentation and websites:

About the Author

Anderson Souza is a senior virtualization software engineer in Oracle's Application Integration Engineering group. He joined Oracle in 2012, bringing more than 14 years of technology industry, systems engineering, and virtualization expertise. Anderson has a Bachelor of Science in Computer Networking, a master's degree in Telecommunication Systems/Network Engineering, and an MBA with a concentration in project management.

Revision 1.0, 07/02/2013
