Recommendations for IP Network Infrastructure

Best Practices for Oracle ZFS Storage Appliance and VMware vSphere 5.x: Part 2

by Anderson Souza

This article describes how to configure the IP network infrastructure for VMware vSphere 5.x with Oracle ZFS Storage Appliance.


Published July 2013


This article is Part 2 of a seven-part series that provides best practices and recommendations for configuring VMware vSphere 5.x with Oracle ZFS Storage Appliance to achieve optimal I/O performance and throughput. The best practices and recommendations highlight configuration and tuning options for the Fibre Channel, NFS, and iSCSI protocols.


The series also includes recommendations for the correct design of network infrastructure for VMware cluster and multi-pool configurations, as well as the recommended data layout for virtual machines. In addition, the series demonstrates the use of VMware linked clone technology with Oracle ZFS Storage Appliance.

All the articles in this series can be found here:

Note: For a white paper on this topic, see the Sun NAS Storage Documentation page.

The Oracle ZFS Storage Appliance product line combines industry-leading Oracle integration, management simplicity, and performance with an innovative storage architecture and unparalleled ease of deployment and use. For more information, see the Oracle ZFS Storage Appliance Website and the resources listed in the "See Also" section at the end of this article.

Note: References to Sun ZFS Storage Appliance, Sun ZFS Storage 7000, and ZFS Storage Appliance all refer to the same family of Oracle ZFS Storage Appliances.

Example IP Network Infrastructures

The example that follows employs two Cisco Nexus 5010 10GbE IP switches with all interfaces running at 10GbE speed in full-duplex mode. The switch ports connected to the Oracle ZFS Storage Appliance are grouped into a Cisco EtherChannel configured with a 9000 MTU (jumbo frames) and 802.3ad Link Aggregation Control Protocol (LACP). On the VMware side, the default NIC teaming configuration uses active and standby interface mode.

Note: If you are working with more than one physical network card as members of a port-channel group, use the VMware NIC teaming configuration shown in Figure 1, which includes the following:

  • Load Balancing: Route based on IP hash
  • Network Failover Detection: Link status only
  • Notify Switches: Yes
  • Failback: Yes
Figure 1

Figure 1. VMware vSphere—NIC teaming configuration
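The same teaming policy can also be applied from the ESXi shell with esxcli. The following is a minimal sketch, assuming a standard vSwitch named vSwitch0 (a placeholder; substitute the name of the vSwitch in your environment):

```shell
# Apply the Figure 1 teaming policy to a standard vSwitch (assumed name: vSwitch0):
# IP-hash load balancing, link-status failure detection, notify switches, failback
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 \
    --load-balancing=iphash \
    --failure-detection=link \
    --notify-switches=true \
    --failback=true
```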

Note: VMware has added the LACP feature on VMware ESXi 5.1 hosts utilizing VMware vSphere Distributed Switch (VDS). However, the VDS configuration is beyond the scope of this article, and the featured examples do not use LACP with VMware. LACP configuration has been enabled only on port-channel 100 and the Oracle ZFS Storage Appliance 10GbE interfaces.

On the VMware side, work with at least four 10GbE interfaces and two virtual switches. Configure two physical 10GbE interfaces for the management and virtual machine network, and two more for NFS and vMotion traffic. All 10GbE interfaces must be configured with a 9000 MTU. Figures 2, 3, and 4 reflect these settings.
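From the ESXi shell, the 9000 MTU can be set on both the standard vSwitch and its VMkernel interface with esxcli. This is a sketch assuming a vSwitch named vSwitch1 carrying the NFS and vMotion traffic and a VMkernel port vmk1 (both placeholder names):

```shell
# Set a 9000 MTU on the vSwitch used for NFS and vMotion (assumed name: vSwitch1)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Set a 9000 MTU on the VMkernel interface bound to that vSwitch (assumed: vmk1)
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
```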

Note: Use of VDS in combination with VMware direct I/O technology and pass-through-capable hardware is recommended. Performance gains have been reported when using this combination of technologies. However, the examples do not use pass-through-capable hardware, so configuration of these features is beyond the scope of this article. For more information about this technology, refer to VMware's official documentation.

Figures 2, 3, and 4 show three different network environments that are supported with Oracle ZFS Storage Appliance.

Figure 2

Figure 2. Example 1—Oracle ZFS Storage Appliance and VMware ESXi 5.1 network infrastructure for NFS

Figure 3

Figure 3. Example 2—Oracle ZFS Storage Appliance and VMware ESXi 5.1 network infrastructure for NFS

Figure 4

Figure 4. Example 3—Oracle ZFS Storage Appliance and VMware ESXi 5.1 network infrastructure for NFS

The next sections show how to configure port-channels, LACP, and 9000 MTU jumbo frames on a Cisco Nexus 5010 switch. Before starting those tasks, ensure that your IP switches have the LACP feature enabled. To do this, open an SSH session with your switches and run the commands described below.

Note: The tasks described in the rest of this article must be performed on every IP switch in this solution. The example uses two physical Cisco Nexus IP switches, so perform the Cisco EtherChannel, LACP, and jumbo frame configuration on both switches.

First, run the command shown in Listing 1.

nexus_ip_sw_01# show feature
Feature Name          Instance  State
--------------------  --------  --------
cimserver             1         disabled
fabric-binding        1         disabled
fc-port-security      1         disabled
fcoe                  1         enabled
fcsp                  1         disabled
fex                   1         disabled
fport-channel-trunk   1         disabled
http-server           1         enabled
interface-vlan        1         disabled
lacp                  1         disabled
lldp                  1         enabled
npiv                  1         enabled
npv                   1         disabled
port_track            1         disabled
private-vlan          1         disabled
sshServer             1         disabled
tacacs                1         disabled
telnetServer          1         enabled
udld                  1         disabled
vpc                   1         disabled
vtp                   1         disabled

Listing 1

If you do not have the LACP feature enabled, use the commands shown in Listing 2 to enable this feature.

nexus_ip_sw_01# configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
nexus_ip_sw_01 (config)# feature lacp
nexus_ip_sw_01 (config)# end

nexus_ip_sw_01# show feature
Feature Name          Instance  State
--------------------  --------  --------
cimserver             1         disabled
fabric-binding        1         disabled
fc-port-security      1         disabled
fcoe                  1         enabled
fcsp                  1         disabled
fex                   1         disabled
fport-channel-trunk   1         disabled
http-server           1         enabled
interface-vlan        1         disabled
lacp                  1         enabled
lldp                  1         enabled
npiv                  1         enabled
npv                   1         disabled
port_track            1         disabled
private-vlan          1         disabled
sshServer             1         disabled
tacacs                1         disabled
telnetServer          1         enabled
udld                  1         disabled
vpc                   1         disabled
vtp                   1         disabled

Listing 2

Creating a Port-Channel

Run the commands shown in Listing 3 to create the port-channel 100.

nexus_ip_sw_01# configure terminal
nexus_ip_sw_01 (config)# interface port-channel 100
nexus_ip_sw_01 (config-if)# interface ethernet 1/9-10
nexus_ip_sw_01 (config-if-range)# channel-group 100 mode active
nexus_ip_sw_01 (config-if-range)# end

nexus_ip_sw_01# show interface port-channel 100
port-channel 100 is down (No operational members)
  Hardware: Port-Channel, address: 0000.0000.0000 (bia 0000.0000.0000)
  MTU 1500 bytes, BW 100000 Kbit, DLY 10 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA
  Port mode is access
  auto-duplex, auto-speed
  Beacon is turned off
  Input flow-control is off, output flow-control is off
  Switchport monitor is off
  No members
  Last clearing of "show interface" counters never
  0 seconds input rate 0 bits/sec, 0 packets/sec
  0 seconds output rate 0 bits/sec, 0 packets/sec
  Load-Interval #2: 0 seconds
    input rate 0 bps, 0 pps; output rate 0 bps, 0 pps
  RX
    0 unicast packets  0 multicast packets  0 broadcast packets
    0 input packets  0 bytes
    0 jumbo packets  0 storm suppression packets
    0 runts  0 giants  0 CRC  0 no buffer
    0 input error  0 short frame  0 overrun   0 underrun  0 ignored
    0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
    0 input with dribble  0 input discard
    0 Rx pause
  TX
    0 unicast packets  0 multicast packets  0 broadcast packets
    0 output packets  0 bytes
    0 jumbo packets
    0 output errors  0 collision  0 deferred  0 late collision
    0 lost carrier  0 no carrier  0 babble
    0 Tx pause
  0 interface resets

Listing 3

Now that port-channel 100 has been created, you need to add network interfaces to this channel group. To accomplish this, run the commands shown in Listing 4.

nexus_ip_sw_01# configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
nexus_ip_sw_01 (config)# interface ethernet 1/9-10
nexus_ip_sw_01 (config-if-range)# channel-group 100
nexus_ip_sw_01 (config-if-range)# end

nexus_ip_sw_01# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
100   Po100(SU)   Eth      LACP      Eth1/9(P)    Eth1/10(P)

Listing 4

Enabling Port-Channel Load Balancing

The next task is to enable the port-channel load balancing feature.

Run the commands shown in Listing 5.

nexus_ip_sw_01# configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
nexus_ip_sw_01 (config)# port-channel load-balance ethernet source-dest-ip
nexus_ip_sw_01 (config)# show port-channel load-balance

Port Channel Load-Balancing Configuration:
System: source-dest-ip

Port Channel Load-Balancing Addresses Used Per-Protocol:
Non-IP: source-dest-mac
IP: source-dest-ip source-dest-mac

Listing 5

The Cisco EtherChannel configuration is now complete, and the network interfaces are grouped into channel-group 100 using the LACP protocol. To verify that the port-channel is up and running with the LACP protocol and load balancing enabled, run the commands shown in Listing 6.

nexus_ip_sw_01# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
100   Po100(SU)   Eth      LACP      Eth1/9(P)    Eth1/10(P)


nexus_ip_sw_01# show port-channel usage
Total 1 port-channel numbers used
============================================
Used  :   100
Unused:   1 - 99 , 101 - 4096
          (some numbers may be in use by SAN port channels)


nexus_ip_sw_01# show port-channel traffic
ChanId      Port Rx-Ucst Tx-Ucst Rx-Mcst Tx-Mcst Rx-Bcst Tx-Bcst
------ --------- ------- ------- ------- ------- ------- -------
   100    Eth1/9  48.22%  94.51%  57.80%  37.29%  32.35%  51.93%
   100   Eth1/10  51.77%   5.48%  42.19%  62.70%  67.64%  48.06%

Listing 6

Save your configuration by running the following command:

nexus_ip_sw_01# copy running-config startup-config
[########################################] 100%

Enabling 9000 MTU Jumbo Frames

According to Cisco's official documentation, the Cisco Nexus 5000 series switch supports only a system-level MTU, which means the MTU attribute cannot be changed on a per-port basis. However, you can still modify the MTU size by applying a QoS policy with class maps.

To enable jumbo frames for the whole switch, run the commands shown in Listing 7:

nexus_ip_sw_01# configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
nexus_ip_sw_01 (config)# policy-map type network-qos jumbo
nexus_ip_sw_01 (config-pmap-nq)# class type network-qos class-default
nexus_ip_sw_01 (config-pmap-nq-c)# mtu 9000
nexus_ip_sw_01 (config-pmap-nq-c)# end
nexus_ip_sw_01# configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
nexus_ip_sw_01 (config)# system qos
nexus_ip_sw_01 (config-sys-qos)# service-policy type network-qos jumbo
nexus_ip_sw_01 (config-sys-qos)# end

Listing 7

Check your configuration to ensure the IP switch Ethernet interfaces are carrying traffic with the jumbo MTU. Run the commands shown in Listing 8 to validate that information:

nexus_ip_sw_01# show interface ethernet 1/9 counters detailed
Ethernet1/9
  Rx Packets:                                  1503095493
  Rx Unicast Packets:                          1503070519
  Rx Multicast Packets:                             14499
  Rx Broadcast Packets:                             10475
  Rx Jumbo Packets:                                210539
  Rx Bytes:                                  919451945239
  Rx Packets from 0 to 64 bytes:                823994390
  Rx Packets from 65 to 127 bytes:               60266586
  Rx Packets from 128 to 255 bytes:              41809329
  Rx Packets from 256 to 511 bytes:               7941051
  Rx Packets from 512 to 1023 bytes:              7991931
  Rx Packets from 1024 to 1518 bytes:           561092203
  Tx Packets:                                 59232316116
  Tx Unicast Packets:                         59196278214
  Tx Multicast Packets:                          14618899
  Tx Broadcast Packets:                          21418053
  Tx Jumbo Packets:                                251642
  Tx Bytes:                                70304189240915
  Tx Packets from 0 to 64 bytes:                 54643893
  Tx Packets from 65 to 127 bytes:            11529933522
  Tx Packets from 128 to 255 bytes:            1166365207
  Tx Packets from 256 to 511 bytes:             460593642
  Tx Packets from 512 to 1023 bytes:            816852512
  Tx Packets from 1024 to 1518 bytes:         45203675698
  Tx Trunk Packets:                               5045352
  Output Errors:                                        3

Listing 8

Note: Cisco Nexus 5000 series switches do not support packet fragmentation, so an incorrect MTU configuration might result in packets being truncated. Ensure that your network interfaces have the right duplex and speed configuration, and confirm that members of Cisco EtherChannel have the LACP feature enabled and are correctly configured.
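To confirm end to end that jumbo frames traverse the path without fragmentation, you can send a ping with the don't-fragment bit set from the ESXi shell. The following is a sketch assuming the appliance's NFS data interface answers at 192.168.100.10 (a placeholder address); 8972 bytes is the largest ICMP payload that fits in a 9000-byte MTU after the 20-byte IP and 8-byte ICMP headers are added:

```shell
# vmkping with -d (don't fragment) and -s 8972 (9000 MTU minus 28 bytes of headers)
# 192.168.100.10 is a placeholder for the appliance's NFS data interface
vmkping -d -s 8972 192.168.100.10
```

If this ping fails while a smaller payload (for example, -s 1472) succeeds, some device in the path is still running a 1500 MTU.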

Refer to the following URL for additional information about the Cisco Nexus IP switch:

http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli_rel_4_0_1a/EtherChannel.html

See Also

Refer to the following websites for further information on testing results for Oracle ZFS Storage Appliance:

Also see the following documentation and websites:

About the Author

Anderson Souza is a virtualization senior software engineer in Oracle's Application Integration Engineering group. He joined Oracle in 2012, bringing more than 14 years of technology industry, systems engineering, and virtualization expertise. Anderson has a Bachelor of Science in Computer Networking, a master's degree in Telecommunication Systems/Network Engineering, and also an MBA with a concentration in project management.

Revision 1.0, 07/01/2013
