by Satinder Nijjar
Published September 2013
Oracle Virtual Networking virtualizes the data center infrastructure and enables connections from any virtual machine (VM) or server to other VMs, servers, network resources, and storage devices. In this article, you will learn how to configure Oracle VM for SPARC with Oracle Virtual Networking.
This article demonstrates how you can create a resilient and redundant I/O layer using Oracle Virtual Networking in conjunction with the Oracle VM Server for SPARC framework on which guest domains can be deployed. For network redundancy, we will use IPMP in the guest domain. For FC storage redundancy, we will use multipathing groups (mpgroups) to present multiple paths to a Fibre Channel storage LUN as a single disk to the guest domain.
Note: This article assumes you have a basic understanding of Oracle Virtual Networking and its components and you are able to provision vNICs and vHBAs. It also assumes you have a working knowledge of Oracle VM Server for SPARC and its components.
Ensure you have the following installed. You will need one or more Oracle VM servers and Oracle VM Manager.
Download XgOS from My Oracle Support. It can be found under the Patches and Updates tab by searching based on the "Product or Family," where "Product" is "Oracle Virtual Networking" and "Release" is "Oracle Fabric Interconnect 3.9.0."
Download Oracle Virtual Networking drivers for Oracle Solaris (SPARC) from My Oracle Support. They can be found under the Patches and Updates tab by searching based on "Product or Family," where Product is "Oracle Virtual Networking" and "Release" is "Oracle Virtual Networking Drivers 5.1.1." Select the Include all products in a family checkbox. Download the "Oracle Solaris on SPARC" drivers.
Download Oracle Fabric Manager from My Oracle Support. It can be found under the Patches and Updates tab by searching based on "Product or Family," where Product is "Oracle Virtual Networking" and "Release" is "Oracle Fabric Manager 4.1." Select the Include all products in a family checkbox.
Oracle VM Server for SPARC uses the following types of domains: control, service, I/O, and guest domains.
Note: Domain roles may be combined; for example, a control domain can also be an I/O domain and a service domain. Also, a service domain with no physical I/O could be used to provide a virtual switch for internal networking purposes, or it could be configured to run the virtual console service.
The service domain owns all the I/O and is allocated CPU and memory for its own purposes.
The I/O to the guest domains is provided as virtualized I/O via the service domain.
Figure 2 shows the typical deployment model.
vNICs and vHBAs that are created using Oracle Fabric Manager and presented to a server running Oracle Solaris act and behave exactly like physical NICs and HBAs.
A single service domain owns all the vNICs and vHBAs and provides virtual devices to multiple guest domains where the applications are run.
The redundant I/O domain model shown in Figure 3 requires Oracle Fabric Interconnect to be deployed in a high availability (HA) pair.
Add vNIC0 and vHBA0, which terminate on one of the Oracle Fabric Interconnects, to the first service domain.
Create an additional service domain that owns the Oracle Virtual Networking vNICs and vHBAs (vNIC1 and vHBA1), which terminate on the second Oracle Fabric Interconnect.
Ensure that vHBA0 and vHBA1 can access the same target and LUN (through Fibre Channel Zoning, for example).
Create a virtual disk server (VDS) in each of the service domains and present the virtual disk to the guest domain.
Similarly, provide redundant network paths through the virtual switches backed by vNIC0 and vNIC1; network failover is handled by IPMP in the guest domain.
Your system needs more than one PCIe bus, and you need two HCA cards installed on two different PCIe buses.
You need to have a second local hard disk, which should be on a different SAS controller/PCIe bus than the primary domain boot disk.
By default, the primary control domain owns all the PCIe buses present on the system.
Verify that the primary domain owns more than one PCIe bus by running the following command:
# ldm list-io
If your primary domain is booted from the internal disk, make sure that you don't remove the PCIe bus that connects the internal boot disk and the management network ports for the primary domain. If you remove the wrong PCIe bus, a domain might not be able to access the required devices and could become unusable.
Run the following commands:
# ldm add-vds primary-vds0 primary
# ldm add-vcc port-range=5000-5100 primary-vcc0 primary
# ldm add-vsw net-dev=net# primary-vsw0 primary
# ldm set-vcpu 8 primary
# ldm start-reconf primary
# ldm set-memory 4g primary
# ldm add-config initial
# shutdown -y -g0 -i6
You need to reboot the primary domain for the configuration changes to take effect as well as to free up the resources to be used by other logical domains.
Verify the primary domain configuration:
# ldm list
NAME      STATE   FLAGS   CONS   VCPU  MEMORY  UTIL  NORM  UPTIME
primary   active  -n-cv-  UART   8     4G      1.4%  0.3%  57m
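As an optional check (the exact output will vary by system), you can also confirm that the virtual disk, console, and switch services were created and see how much CPU and memory remain free for other domains:

# ldm list-services primary
# ldm list-devices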
Copy the OS ISO image to the primary/control domain. We need that ISO image to boot and install the guest domains.
Specify the device to be exported by the virtual disk server as a virtual disk to the guest domain:
# ldm add-vdsdev /export/sol-11_1-text-sparc.iso iso_vol@primary-vds0
The following command shows how the PCIe buses are mapped. In this case, I have four PCIe buses:
# ldm list-io
NAME    TYPE  BUS    DOMAIN   STATUS
----    ----  ---    ------   ------
pci_0   BUS   pci_0  primary
pci_1   BUS   pci_1  primary
pci_2   BUS   pci_2  primary
pci_3   BUS   pci_3  primary
Note: In my setup, pci_0 has the internal boot disk and pci_1 has the HCA. I'm going to remove pci_2 and pci_3 from the primary domain; the second HCA and internal hard disk are connected to those buses.

Create the secondary domain:

# ldm add-domain secondary
# ldm add-vcpu 8 secondary
# ldm add-memory 4G secondary

Now remove the pci_2 and pci_3 buses from the primary domain, save the configuration, and reboot the primary domain:
# ldm start-reconf primary
# ldm remove-io pci_2 primary
# ldm remove-io pci_3 primary
# ldm add-config secondary
# shutdown -i6 -g0 -y
Once the system comes back up after the reboot, add pci_2 and pci_3 to the secondary domain:
# ldm stop-domain secondary
# ldm add-io pci_2 secondary
# ldm add-io pci_3 secondary
We can now add the ISO image so we can boot the secondary domain and install its operating system:
# ldm add-vdisk os_iso iso_vol@primary-vds0 secondary
Let's see how the hardware buses look. Listing 1 provides a snippet from the output:
# ldm list-io
NAME    TYPE  BUS    DOMAIN     STATUS
----    ----  ---    ------     ------
pci_0   BUS   pci_0  primary
pci_1   BUS   pci_1  primary
pci_2   BUS   pci_2  secondary
pci_3   BUS   pci_3  secondary

Bind and start the secondary domain, and then connect to its console:

# ldm bind-domain secondary
# ldm start-domain secondary
# telnet localhost 5000
Because we are using two I/O domains, we need two virtual disk services:
# ldm add-vds secondary-vds0 secondary
Install the ORCL-ovn Host Drivers on Both I/O Domains
Make sure that both domains can see the HCAs:
# scanpci | grep -i mellanox
Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]
Follow the Oracle Virtual Networking documentation to install the ORCL-ovn host drivers. Install the drivers on both the primary and secondary I/O domains; you need to reboot both I/O domains to finish the driver installation.
Once the drivers are installed successfully, go to the Oracle Fabric Interconnect and start configuring the I/O domains. Add one vHBA to both the primary and the secondary I/O domains. Add two vNICs to both the I/O domains.
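As a quick sanity check, you can confirm that the new vNICs appear as data links in each I/O domain; the link names shown by these commands will depend on your configuration:

# dladm show-phys
# dladm show-link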
Based on the back-end SAN storage you are using, add one or more LUNs (according to the number of guest domains you are planning to set up), and make sure that both vHBAs (on the primary and secondary domains) can see each LUN.
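One simple way to confirm this is to list the disks in both the primary and secondary domains and check that a disk with the same target WWN shows up in each; the controller numbers will differ, as in the example device paths used below:

# echo | format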
The key thing here is to specify the mpgroup when adding the device to each virtual disk service (VDS).
The following is an example of creating a guest domain named ldg1:

# ldm add-domain ldg1
# ldm add-vcpu 8 ldg1
# ldm add-memory 8G ldg1
# ldm set-variable auto-boot\?=false ldg1
# ldm add-vdisk vdisk_iso iso_vol@primary-vds0 ldg1
It is time to map a disk or disks to the guest domain so that you can install the operating system. Both the primary and secondary I/O domains have access to the same LUN through their dedicated vHBAs on their own respective PCIe buses and HCAs.
# ldm add-vdsdev mpgroup=ldg1-mp /dev/dsk/c14t21000024FF46EA38d0s2 vol-ldg1@primary-vds0
# ldm add-vdsdev mpgroup=ldg1-mp /dev/dsk/c6t21000024FF46EA39d0s2 vol-ldg1@secondary-vds0
Note: You can get the device name, /dev/dsk/cxxx, by using the format command on the primary and secondary domains.
Map a virtual disk backed by the vol-ldg1@primary-vds0 service to the guest domain ldg1:
# ldm add-vdisk ldg1-mp vol-ldg1@primary-vds0 ldg1
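If you want to verify the resulting disk and multipath group configuration from the control domain, commands along these lines can be used (the output shown will vary):

# ldm list -o disk ldg1
# ldm list-bindings ldg1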
You can now assign the network service from the virtual switches hosted in both I/O domains. You can create IPMP interfaces using vNICs from both Oracle Fabric Interconnects on the I/O domains. Add the network to the guest domain ldg1. You can use any names; I used the names vnet-primary and vnet-secondary:
# ldm add-vnet vnet-primary primary-vsw0 ldg1
# ldm add-vnet vnet-secondary secondary-vsw0 ldg1
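Later, once the guest domain is installed and running Oracle Solaris 11, the two virtual network interfaces can be combined into an IPMP group. The following is a minimal sketch; the interface names (net0 and net1) and the IP address are assumptions that you will need to adjust for your environment:

# ipadm create-ip net0
# ipadm create-ip net1
# ipadm create-ipmp -i net0 -i net1 ipmp0
# ipadm create-addr -T static -a 192.168.1.10/24 ipmp0/v4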
The guest domain now has all the resources required for it to be bound and started.
# ldm bind ldg1
# ldm start ldg1
# ldm list
# telnet localhost 5001
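From the guest console, you can start the Oracle Solaris installation by booting from the ISO virtual disk. The vdisk name is exported to the guest as an OpenBoot device alias, so a command along the following lines should work at the ok prompt (adjust the alias if you used a different vdisk name):

{0} ok boot vdisk_iso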
Test the storage multipathing configuration.
The following tests were performed successfully while I/O operations (using dd) were running on the guest domain ldg1.
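For example, one way to exercise the failover paths is to run a write workload in the guest domain while rebooting one I/O domain at a time. This is an illustrative sketch rather than the exact test procedure; the file name and count are arbitrary.

In the guest domain:

# dd if=/dev/zero of=/var/tmp/testfile bs=1024k count=10000

From the control domain:

# ldm stop-domain secondary
# ldm start-domain secondary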
You can see the failover messages shown in Listing 2 on the guest domain console:
Apr 24 14:59:42 ldom1 vdc: [ID 990228 kern.info] vdisk@0 is offline
Apr 24 14:59:43 ldom1 vdc: [ID 979497 kern.info] vdisk@0 access to service failed using ldc@3,1
Apr 24 14:59:45 ldom1 vdc: [ID 625787 kern.info] vdisk@0 is online using ldc@2,0
Apr 24 15:08:26 ldom1 vdc: [ID 990228 kern.info] vdisk@0 is offline
Apr 24 15:08:27 ldom1 vdc: [ID 979497 kern.info] vdisk@0 access to service failed using ldc@2,0
Apr 24 15:08:29 ldom1 vdc: [ID 625787 kern.info] vdisk@0 is online using ldc@3,1
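Network redundancy can be checked in a similar spirit from inside the guest domain. Assuming an IPMP group like the one sketched earlier, ipmpstat shows the state of the group and of each underlying interface:

# ipmpstat -g
# ipmpstat -i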
Satinder Nijjar is a Principal Product Manager for Oracle Virtual Networking products and has 18 years of IT experience in industries ranging from financial services, retail, and healthcare to education. Satinder joined Oracle in 2012 as part of the Xsigo Systems acquisition.
Revision 1.0, 09/16/2013