Implementing Oracle Virtual Networking and Oracle VM Server for SPARC

by Satinder Nijjar

This article demonstrates how to implement Oracle Virtual Networking virtual network interface cards (vNICs) and virtual host bus adapters (vHBAs) on Oracle VM Server for SPARC. It then shows how to use the Oracle VM Server for SPARC framework to present these vNICs, and the storage accessed over the vHBAs, to guest domains with high availability.


Published September 2013


Table of Contents
Introduction
Prerequisites
Oracle VM Server for SPARC Domains and Roles
Service Domain Model
Steps for Building Separate I/O Domains
Failover/Failback Testing
See Also
About the Author

Introduction

Oracle Virtual Networking virtualizes the data center infrastructure and enables connections from any virtual machine (VM) or server to other VMs, servers, network resources, and storage devices. In this article, you will learn how to configure Oracle VM for SPARC with Oracle Virtual Networking.

This article demonstrates how you can create a resilient and redundant I/O layer using Oracle Virtual Networking in conjunction with the Oracle VM Server for SPARC framework on which guest domains can be deployed. For network redundancy, we will use IPMP in the guest domain. For Fibre Channel storage redundancy, we will use multipathing groups (mpgroups) to present multiple paths to a Fibre Channel LUN as a single disk to the guest domain.

Note: This article assumes you have a basic understanding of Oracle Virtual Networking and its components and you are able to provision vNICs and vHBAs. It also assumes you have a working knowledge of Oracle VM Server for SPARC and its components.

Prerequisites

Ensure you have the following installed. You will need one or more Oracle VM servers and Oracle VM Manager.

  • Oracle's Xsigo Operating System (XgOS) version 3.9.1:

    Download XgOS from My Oracle Support. It can be found under the Patches and Updates tab by searching based on the "Product or Family," where "Product" is "Oracle Virtual Networking" and "Release" is "Oracle Fabric Interconnect 3.9.0."

  • Oracle Solaris 11.1 SRU 7.5
  • Oracle VM Server for SPARC
  • Oracle Virtual Networking drivers for Oracle Solaris 11.1:

    Download Oracle Virtual Networking drivers for Oracle Solaris (SPARC) from My Oracle Support. They can be found under the Patches and Updates tab by searching based on "Product or Family," where Product is "Oracle Virtual Networking" and "Release" is "Oracle Virtual Networking Drivers 5.1.1." Select the Include all products in a family checkbox. Download the "Oracle Solaris on SPARC" drivers.

  • Oracle Fabric Manager 4.1.0, which requires JRE 1.6; see the Oracle Fabric Manager release notes for the complete set of system requirements.

    Download Oracle Fabric Manager from My Oracle Support. It can be found under the Patches and Updates tab by searching based on "Product or Family," where Product is "Oracle Virtual Networking" and "Release" is "Oracle Fabric Manager 4.1." Select the Include all products in a family checkbox.

Oracle VM Server for SPARC Domains and Roles

Oracle VM Server for SPARC uses the following types of domains:

  • Control domain: The management control point for virtualization of the server, which is used to configure domains and manage resources. It is the first domain to boot on a power-up, it is an I/O domain, and it is usually a service domain as well. There can be only one control domain.
  • I/O domain: A domain that has been assigned physical I/O devices: a PCIe root complex, a PCI device, or an SR-IOV (Single-Root I/O Virtualization) function. It has native performance and functionality for the devices it owns, unmediated by any virtualization layer. There can be multiple I/O domains.
  • Service domain: A domain that provides virtual network and disk devices to guest domains. There can be multiple service domains, although in practice there is usually one or sometimes two or more for redundancy. A service domain is always an I/O domain, because it must own physical I/O resources in order to virtualize them for guest domains.
  • Guest domain: A domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run. There usually are multiple guest domains in a single system.

Note: Domain roles may be combined; for example, a control domain can also be an I/O domain and a service domain. Also, a service domain with no physical I/O could be used to provide a virtual switch for internal networking purposes, or it could be configured to run the virtual console service.

Service Domain Model

The service domain owns all the I/O and is allocated CPU and memory for its own purposes.

The I/O to the guest domains is provided as virtualized I/O via the service domain.

Figure 1

Recommended Deployment Model with Oracle Virtual Networking—1

Figure 2 shows the typical deployment model.

vNICs and vHBAs that are created using Oracle Fabric Manager and presented to a server running Oracle Solaris behave exactly like physical NICs and HBAs.

A single service domain owns all the vNICs and vHBAs and provides virtual devices to multiple guest domains where the applications are run.

Figure 2

Recommended Deployment Model with Oracle Virtual Networking—2

The redundant I/O domain model shown in Figure 3 requires Oracle Fabric Interconnect to be deployed in a high availability (HA) pair.

Add vNIC0 and vHBA0, which terminate on one of the Oracle Fabric Interconnects.

Create an additional service domain that owns the Oracle Virtual Networking vNICs and vHBAs (vNIC1 and vHBA1), which terminate on the second Oracle Fabric Interconnect.

Ensure that vHBA0 and vHBA1 can access the same target and LUN (through Fibre Channel Zoning, for example).

Create a virtual disk server (VDS) in each of the service domains and present the virtual disk to the guest domain.

Similarly, provide redundant network paths to the guest domain through the virtual switches backed by vNIC0 and vNIC1; IPMP in the guest domain then provides failover across them.

Figure 3

Steps for Building Separate I/O Domains

Prerequisites

Your system needs to have more than one PCIe bus and you need to have two HCA cards installed on two different PCIe buses.

You need to have a second local hard disk, which should be on a different SAS controller/PCIe bus than the primary domain boot disk.

By default, the primary control domain owns all the PCIe buses present on the system.

Steps

Verify that the primary domain owns more than one PCIe bus by running the following command:

# ldm list-io

If your primary domain is booted from the internal disk, make sure that you don't remove the PCIe bus that connects the internal boot disk and the management network ports to the primary domain. If you remove the wrong PCIe bus, a domain might not be able to access the required devices and could become unusable.
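
If you are not sure which devices are connected to which bus, a more detailed listing can help you correlate each root complex with its devices before removing anything (a sketch; the output format varies by platform and release):

# ldm list-io -l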

Configure the Primary Domain

Run the following commands to create the virtual disk service, virtual console concentrator, and virtual switch, and to size the control domain. Replace net# with the physical network device that backs the virtual switch (for example, net0):

# ldm add-vds primary-vds0 primary
# ldm add-vcc port-range=5000-5100 primary-vcc0 primary
# ldm add-vsw net-dev=net# primary-vsw0 primary
# ldm set-vcpu 8 primary
# ldm start-reconf primary
# ldm set-memory 4g primary
# ldm add-config initial
# shutdown -y -g0 -i6 

You need to reboot the primary domain for the configuration changes to take effect as well as to free up the resources to be used by other logical domains.

Verify the primary domain configuration:

# ldm list
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  NORM  UPTIME
primary          active    -n-cv-   UART     8     4G      1.4%  0.3%   57m

Copy the OS ISO image to the primary/control domain. We need that ISO image to boot and install the guest domains.
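
For example, you might copy the image over the network with scp (the source host and path here are hypothetical):

# scp admin@installserver:/export/images/sol-11_1-text-sparc.iso /export/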

Specify the device to be exported by the virtual disk server as a virtual disk to the guest domain:

# ldm add-vdsdev /export/sol-11_1-text-sparc.iso iso_vol@primary-vds0

Build the Secondary I/O Domain

The following command shows how the PCIe buses are currently assigned. In this case, I have four PCIe buses:

# ldm list-io
NAME                                 TYPE   BUS      DOMAIN   STATUS   
----                                 ----   ---      ------   ------   
pci_0                                BUS    pci_0    primary           
pci_1                                BUS    pci_1    primary           
pci_2                                BUS    pci_2    primary         
pci_3                                BUS    pci_3    primary

Note: In my setup, pci_0 has the internal boot disk and pci_1 has the HCA. I'm going to remove pci_2 and pci_3 from the primary domain. I have another HCA and internal hard disk connected to pci_2 and pci_3.

# ldm add-domain secondary  
# ldm add-vcpu 8 secondary
# ldm add-memory 4G secondary

Remove the pci_2 and pci_3 buses from the primary, save the configuration, and reboot the primary domain:

# ldm start-reconf primary
# ldm remove-io pci_2 primary
# ldm remove-io pci_3 primary
# ldm add-config secondary 
# shutdown -i6 -g0 -y

Once the system comes back after reboot, add pci_2 and pci_3 to the secondary domain:

# ldm stop-domain  secondary
# ldm add-io pci_2  secondary
# ldm add-io pci_3  secondary

We can now add the ISO image as a virtual disk so we can boot the secondary domain and install the operating system:

# ldm add-vdisk os_iso  iso_vol@primary-vds0 secondary

Let's see how the hardware buses look. Listing 1 provides a snippet from the output:

# ldm list-io
NAME                              TYPE   BUS      DOMAIN   STATUS   
----                              ----   ---      ------   ------   
pci_0                             BUS    pci_0    primary           
pci_1                             BUS    pci_1    primary           
pci_2                             BUS    pci_2    secondary         
pci_3                             BUS    pci_3    secondary

Listing 1

Bind and start the secondary domain, and then connect to its console:

# ldm bind-domain secondary 
# ldm start-domain secondary
# telnet localhost 5000

Because we are using two I/O domains, we need two virtual disk services:

# ldm add-vds secondary-vds0 secondary
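
The guest domain network configuration later in this article also references a virtual switch named secondary-vsw0 in the secondary domain. The following is a minimal sketch of creating it once the Oracle Virtual Networking drivers are installed and a vNIC is visible in the secondary domain (replace net# with the actual vNIC device name; the switch name simply mirrors the primary-vsw0 convention):

# ldm add-vsw net-dev=net# secondary-vsw0 secondary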

Install the ORCL-ovn Host Drivers on Both I/O Domains

Make sure that both domains can see the HCAs:

# scanpci |grep -i mellanox
 Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE

Follow the Oracle Virtual Networking documentation to install the ORCL-ovn host drivers on both the primary and secondary I/O domains. Reboot both I/O domains to complete the driver installation.

Once the drivers are installed successfully, go to the Oracle Fabric Interconnect and start configuring the I/O domains. Add one vHBA to both the primary and the secondary I/O domains. Add two vNICs to both the I/O domains.

Based on the back-end SAN storage you are using, add one or more LUNs (according to the number of guest domains you are planning to set up), and make sure that both the vHBAs (on the primary and secondary domains) can see that LUN.
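
A quick way to confirm that both vHBAs see the same LUN is to list the disks non-interactively on each I/O domain and compare the entries (a sketch; device names will differ on your system):

# echo | format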

Create the Guest Domain Using the Primary and Secondary Domains for Storage and Network Services

The key thing here is to specify the mpgroup when adding the virtual disk server (VDS) device for the guest domain.

The following is an example of creating a guest domain named ldg1.

# ldm add-domain ldg1 
# ldm add-vcpu 8 ldg1
# ldm add-memory 8G ldg1
# ldm set-variable auto-boot\?=false ldg1
# ldm add-vdisk vdisk_iso iso_vol@primary-vds0 ldg1

It is time to map a disk or disks to the guest domain so that you can install the operating system. Both the primary and the secondary I/O domains have access to the same LUN, each through its own dedicated vHBA on its own PCIe bus and HCA.

# ldm add-vdsdev mpgroup=ldg1-mp /dev/dsk/c14t21000024FF46EA38d0s2 vol-ldg1@primary-vds0  
# ldm add-vdsdev mpgroup=ldg1-mp /dev/dsk/c6t21000024FF46EA39d0s2 vol-ldg1@secondary-vds0

Note: You can get the device name, /dev/dsk/cxxx, by using the format command on the primary and secondary domains.

Map the virtual disk vol-ldg1@primary-vds0 to the guest domain ldg1. Only one vdisk needs to be added; because both back-end devices share the mpgroup ldg1-mp, the guest domain sees a single virtual disk with two paths:

# ldm add-vdisk ldg1-mp vol-ldg1@primary-vds0 ldg1

You can now assign network services from the virtual switches hosted by both I/O domains, and create IPMP interfaces in the guest domain using vNICs from both Oracle Fabric Interconnects. Add the virtual networks to the guest domain ldg1. You can use any names; I used the names vnet-primary and vnet-secondary.

# ldm add-vnet vnet-primary primary-vsw0 ldg1 
# ldm add-vnet vnet-secondary secondary-vsw0 ldg1 
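
Once ldg1 is installed and booted, you can configure IPMP inside the guest domain across the two virtual network interfaces. The following is a minimal sketch, assuming the two vnets appear in the guest as net0 and net1 and using a placeholder address:

# ipadm create-ip net0
# ipadm create-ip net1
# ipadm create-ipmp -i net0 -i net1 ipmp0
# ipadm create-addr -T static -a 192.0.2.10/24 ipmp0/v4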

The guest domain now has all the required resources and can be bound and started.

# ldm bind ldg1
# ldm start ldg1
# ldm list
# telnet localhost 5001
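
From the guest console, you can boot from the ISO image to install the operating system. This is a sketch that assumes the vdisk name is exposed as an OpenBoot device alias, which is the normal behavior for virtual disks:

{0} ok boot vdisk_iso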

Failover/Failback Testing

Test the storage multipathing configuration.
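
To generate continuous disk I/O in the guest domain during the tests, you can use dd; the following is a minimal sketch (the target file, block size, and count are arbitrary):

# dd if=/dev/zero of=/var/tmp/iotest.dat bs=1024k count=8192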

The following tests were performed successfully while I/O operations (using dd) were running on guest domain ldg1.

  • Reboot the I/O card on the Oracle Fabric Interconnect on which the primary domain vHBA is terminated.
  • Reboot one of the Oracle Fabric Interconnects.
  • Disable the vHBA on the primary domain (vHBA0), and verify disk access.
  • Enable the vHBA on the primary domain (vHBA0), and verify disk access.
  • Disable the vHBA on the secondary domain (vHBA1), and verify disk access.
  • Enable the vHBA on the secondary domain (vHBA1), and verify disk access.
  • Disable the server profile for the primary domain, and then verify disk and network access.
  • Enable the server profile for the primary domain.
  • Disable the server profile for the secondary domain, and then verify disk and network access.
  • Enable the server profile for the secondary domain.
  • Reboot the primary domain.
  • Reboot the secondary domain.

You can see the failover messages shown in Listing 2 on the guest domain ldg1:

Apr 24 14:59:42 ldom1 vdc: [ID 990228 kern.info] vdisk@0 is offline
Apr 24 14:59:43 ldom1 vdc: [ID 979497 kern.info] vdisk@0 access to service failed using ldc@3,1
Apr 24 14:59:45 ldom1 vdc: [ID 625787 kern.info] vdisk@0 is online using ldc@2,0

Apr 24 15:08:26 ldom1 vdc: [ID 990228 kern.info] vdisk@0 is offline
Apr 24 15:08:27 ldom1 vdc: [ID 979497 kern.info] vdisk@0 access to service failed using ldc@2,0
Apr 24 15:08:29 ldom1 vdc: [ID 625787 kern.info] vdisk@0 is online using ldc@3,1

Listing 2

See Also

Oracle Fabric Interconnect documentation

About the Author

Satinder Nijjar is a Principal Product Manager for Oracle Virtual Networking products and has 18 years of IT experience in industries ranging from financial services, retail, and healthcare to education. Satinder joined Oracle in 2012 as part of the Xsigo Systems acquisition.

Revision 1.0, 09/16/2013
