How to Build a Better Data Center with Oracle Virtual Networking

Deploying Oracle VM Server for SPARC

by Satinder Nijjar

This article demonstrates how to provision Oracle Virtual Networking virtual network interface cards (vNICs) and virtual host bus adapters (vHBAs) onto Oracle VM Server for SPARC. Then it shows how to create network, Private Virtual Interconnect (PVI), and storage "clouds" using Oracle Fabric Manager, as well as how to create a server pool and deploy a guest logical domain.


Published September 2014


Table of Contents
Introduction
Prerequisites
Install the Oracle Virtual Networking Host Driver
Create Clouds and an I/O Template in Oracle Fabric Manager
Create a Server Pool and Deploy a Guest LDom
Appendix: Uploading the Oracle Virtual Networking Host Driver to Oracle Enterprise Manager Ops Center 12c
See Also
About the Author

Introduction

Oracle Virtual Networking virtualizes data center infrastructure and enables connections from any virtual machine (VM) or server to other VMs, servers, network resources, and storage devices. In this article, you will learn how to install the Oracle Virtual Networking host drivers on Oracle VM Server for SPARC and create network, Private Virtual Interconnect (PVI), and storage "clouds" using Oracle Fabric Manager. We will leverage Oracle Enterprise Manager Ops Center 12c Release 2 and the built-in Oracle Solaris multipathing (MPxIO) capability.

Note: This article assumes you have a basic understanding of Oracle Virtual Networking and its components, as well as a working knowledge of Fibre Channel (FC) switching and storage. In addition, it assumes you are familiar with Oracle Enterprise Manager Ops Center and Oracle VM Server for SPARC. If you need information on any of these technologies, refer to the resources listed in the "See Also" section at the end of this article.

Figure 1 through Figure 3 show the rack configuration and the cabling diagrams for the example configuration documented in this article.

Figure 1. Example rack configuration

Figure 2. Example InfiniBand cabling diagram

Figure 3. Example Ethernet and Fibre Channel cabling diagram

Prerequisites

Note: For more information about what devices and software are compatible with Oracle Virtual Networking, see "Oracle Virtual Networking - Compatibility."

Ensure you have the required hardware listed in Table 1.

Table 1. Required and optional hardware
(Required) Either Oracle Fabric Interconnect F1-15 or Oracle Fabric Interconnect F1-4. Quantity: 2.
Recommended firmware: Oracle's Xsigo firmware, called XgOS, version 3.9.2 or higher.
I/O cards required per Oracle Fabric Interconnect:
- 1 x quad-port 10 Gigabit Ethernet (GbE)
- 1 x dual-port 8 Gb Fibre Channel (FC)

(Required) SPARC T5-2 server from Oracle. Quantity: 2.
Recommended operating system: Oracle Solaris 11.1 SRU 20.5 or higher.
At least one of the following InfiniBand HCAs:
- Oracle's Dual Port QDR InfiniBand Adapter M3 (part number 7104074); recommended minimum firmware version: 2.11.1280
- Oracle's Sun InfiniBand QDR Host Channel Adapter PCIe, low profile (part number X4242A); recommended minimum firmware version: FW25408 v2.11.2010

(Required) Fibre Channel storage array (SAN). Quantity: 1.
A Sun ZFS Storage 7120 appliance from Oracle is used in the example documented in this article.

(Required) Fibre Channel SAN switch. Quantity: 2.
Up-to-date firmware from the switch vendor; NPIV enabled.

(Optional) Sun Datacenter InfiniBand Switch 36 from Oracle. Quantity: 2.
For expansion beyond 18 hosts (two HCAs per host), you can deploy an InfiniBand (IB) spine and leaf topology using this switch; recommended minimum firmware version: 2.1.4.

(Optional) Oracle Switch ES1-24. Quantity: 2.
For Ethernet connectivity to the network, the example in this article uses two of these switches.

Ensure you have the required software shown in Table 2:

Table 2. Required software
Required Software Version
Oracle Fabric Manager Version 4.2.1 or higher
Oracle Enterprise Manager Ops Center 12c Release 2 Version 12.2.0.2663
Oracle Virtual Networking host driver Version 5.3.5 (or higher) for Oracle Solaris 11.1 on SPARC (64-bit); this will be downloaded and installed later

Referring to Figure 1 through Figure 3 and Table 1 and Table 2, ensure you have completed the following tasks:

  • The Oracle Fabric Interconnects have been installed and their initial configuration has been completed. For details on how to install Oracle Fabric Interconnects, refer to "How to Build a Better Data Center with Oracle Virtual Networking: Initial Deployment of Oracle Fabric Interconnect."

    This article assumes that the two SPARC T5-2 servers are already connected, as hosts, via HCAs to the Oracle Virtual Networking InfiniBand fabric. Please refer to the Oracle SDN 1.0.0 Quick Start Guide for more information.

  • Oracle Fabric Manager has been installed and its initial configuration has been completed. For details on how to install Oracle Fabric Manager, see "How to Build a Better Data Center with Oracle Virtual Networking: Initial Deployment and Configuration of Oracle Fabric Manager."

    Note: For the purposes of this article, Oracle Fabric Manager is installed on Oracle Linux. It is assumed that any Ethernet configuration has already been completed, including setting any Ethernet ports to Trunk mode or creating link aggregation groups (LAGs). This article assumes you have connected only one of the two Fibre Channel ports per Oracle Fabric Interconnect to two independent Fibre Channel switching fabrics.

  • Oracle VM Server for SPARC has been deployed on the control domain (CDom) of the SPARC T5-2 servers via Oracle Enterprise Manager Ops Center 12c Release 2, and the servers have been deployed over net0, also via Oracle Enterprise Manager Ops Center. Alternatively, management can be done over a link aggregation group (LAG), but that is beyond the scope of this article.

    This article assumes you are familiar with Oracle Enterprise Manager Ops Center 12c Release 2 and Oracle VM Server for SPARC. For more information, see the Oracle Enterprise Manager Ops Center documentation and these Oracle VM Server for SPARC technical white papers.

  • The Fibre Channel storage array (SAN) has been configured and connected via Fibre Channel switches to each Oracle Fabric Interconnect via the Fibre Channel I/O modules. A storage admin will need to zone any LUNs to the hosts. Fibre Channel zoning and storage LUN creation and presentation are beyond the scope of this document.
  • The other cabling shown in Figure 2 and Figure 3 has been performed.
  • If desired, an IB spine and leaf topology has been deployed using the Sun Datacenter InfiniBand Switch 36 switches.

Install the Oracle Virtual Networking Host Driver

In this section, we will download the Oracle Virtual Networking host driver and install it onto the SPARC T5-2 servers, which are connected to the Oracle Fabric Interconnects and onto which you deployed Oracle VM Server for SPARC. Before installing an Oracle Virtual Networking host driver, please review the relevant documentation. The release notes for host drivers are located on the Oracle Virtual Networking Documentation page.

To install a host driver on each server, perform the following steps.

  1. Download onto the server the latest Oracle Virtual Networking host driver for Oracle Solaris 11.1 on SPARC (64-bit), for example, version 5.3.5, from My Oracle Support. The driver can be found under the Patches and Updates tab by searching on "Product or Family": either set Product to "Oracle Fabric Interconnect" and Release to "Solaris Drivers 5.5.0" or higher, or set Product to "Oracle Virtual Networking" and Release to "Oracle Virtual Networking Drivers 5.3.X" or higher. In either case, select the Include all products in a family checkbox.

    Note: Rather than downloading the host driver to each host and then installing it, as described in this procedure, you can upload the host driver to Oracle Enterprise Manager Ops Center, add it to the Oracle Solaris 11 Software Update Library, and then install the driver on each host without downloading it to each host individually (this works when hosts are deployed via Oracle Enterprise Manager Ops Center). If you prefer that approach, use the instructions in the appendix instead of performing the remaining steps of this procedure.

  2. Log in to the server as root.
  3. Copy the downloaded driver file locally onto the server. (For the purposes of this procedure, it is assumed that the driver will be copied to /usr.)
  4. In the directory where you copied the file, untar the driver file by using the tar xvzf command, for example:

    tar xvzf ORCLovn-5.2.1-SL-sparcv.tgz
    
  5. Set up the publisher by using the pkg set-publisher command and specifying the path to the directory in which the host driver file resides, for example:

    pkg set-publisher -p /usr/ORCLovn
    
  6. Install the host driver by using the pkg install command and specifying the host driver package name, for example:

    pkg install ORCLovn-drv
    
  7. (Optional) Unset the publisher by using the pkg unset-publisher command and specifying the publisher name, for example:

    pkg unset-publisher ORCLovn
    
  8. After the host driver is installed, the xsadmd service is sometimes left in a disabled state. Before rebooting the server, issue the following commands to ensure the service is enabled:

    svccfg -s application/xsadmd:default setprop general/enabled = true
    svccfg -s application/xsadmd:default refresh
    
  9. Allow the previous commands to complete, and then reboot the server to load the driver into memory, for example:

    shutdown -y -g0 -i6
    
  10. After the reboot, verify that the host driver is installed by doing any of the following (a combined sketch follows this list):

    • Issue the pkg list command with grep to search for ORCLovn-drv (part of the driver file name).
    • Issue the svcs xsadmd command. If the xsadmd service is present and online, the Oracle Virtual Networking host driver is installed.
    • Issue the modinfo command with grep to search for xs to see the modules that were installed.
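
A minimal combined sketch of those three checks (the grep patterns are illustrative; exact package and module names can vary by driver release):

    # Confirm the driver package is installed
    pkg list | grep ORCLovn-drv

    # Confirm the xsadmd service is present and online
    svcs xsadmd

    # List the loaded Oracle Virtual Networking kernel modules
    modinfo | grep xs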

Create Clouds and an I/O Template in Oracle Fabric Manager

In this section, we will create network, Private Virtual Interconnect (PVI), and storage clouds, and then we will create an I/O Template using those clouds. We will present two vNICs per network cloud and two vHBAs for the storage cloud, one from each Oracle Fabric Interconnect. We will also present two PVI vNICs per PVI cloud, because the SPARC T5-2 servers each have at least two active HCA ports and each PVI vNIC will terminate onto a separate HCA port.

Log in to Oracle Fabric Manager

  1. Open a browser and go to https://<hostname>:8443/xms/Login.jsf, where <hostname> is the IP address or fully qualified domain name (FQDN) of the host on which you installed Oracle Fabric Manager.
  2. Log in as the root user.

    Figure 4. Logging in to Oracle Fabric Manager

Create the Network Clouds

Using Oracle Fabric Manager, we will create two network clouds: one called LDom_mgmt_10G for logical domain (LDom) management on vLAN 1, and one called LDom_net_10G, which is a vLAN trunk network for all other LDom network requirements.

  1. To create a network cloud, select Network Clouds from the Navigation panel.

    Figure 5. Selecting the Network Clouds item

  2. Click the Add a New Network Cloud icon in the Network Cloud Summary window.

    Figure 6. Selecting the Add a New Network Cloud icon

  3. Name the network cloud LDom_mgmt_10G, and then select the appropriate Ethernet ports or LAGs (taking into consideration any vLAN requirements, for example, Trunk or Access mode) from each of the Oracle Fabric Interconnects. Then click Submit. (For this configuration, vLAN 1 is the management vLAN.)
  4. Click Advanced Configuration, set Access VLAN ID to 1, and then click Submit.

    Figure 7. Specifying advanced configuration information for the LDom_mgmt_10G cloud

  5. Repeat Step 1 through Step 3 to create the LDom_net_10G cloud. (For the purposes of this article, both LDom_mgmt_10G and LDom_net_10G reside on the same underlying I/O ports; alternatively, you can enable additional 1/10G I/O ports and terminate these clouds on different I/O ports.)
  6. Click Advanced Configuration, select Trunk Mode, and then click Submit.

    Figure 8. Specifying advanced configuration information for the LDom_net_10G cloud

Create the PVI Clouds

Using Oracle Fabric Manager, we will create two PVI clouds: LDom_net_PVI and LDom_LiveMigration_PVI.

  1. To create the PVI clouds, select PVI Clouds from the Navigation panel.

    Figure 9. Selecting the PVI Clouds item

  2. Click the Add a New PVI Cloud icon in the PVI Cloud Summary window.

    Figure 10. Selecting the Add a New PVI Cloud icon

  3. Name the PVI cloud LDom_net_PVI, select the fabric, set MTU to 9000, and click Submit.

    Figure 11. Adding the LDom_net_PVI cloud

  4. Repeat Step 1 through Step 3 to create the second PVI cloud, except name it LDom_LiveMigration_PVI. As before, set MTU to 9000.

    Figure 12 shows a summary of the two PVI clouds after they are created.

    Figure 12. Summary of the two PVI clouds

Create the Storage Cloud

Using Oracle Fabric Manager, we will create one storage cloud. For the example shown in this article, the storage array used is a Sun ZFS Storage 7120 appliance, which is connected to two FC switches. One port from one Oracle Fabric Interconnect's FC module is connected to one FC switch, and one port from the other Oracle Fabric Interconnect's FC module is connected to the other FC switch.

As in a typical production environment, this example uses two independent FC fabrics. However, it does not use a redundant storage array; in a real production environment, you would typically have a dual-head redundant storage array.

  1. To create a storage cloud, select Storage Clouds from the Navigation panel.

    Figure 13. Selecting the Storage Clouds item

  2. Click the Add a Storage Cloud icon in the Storage Cloud Summary window.

    Figure 14. Selecting the Add a Storage Cloud icon

  3. Name the storage cloud LDom_FC, select the FC ports, and click Submit.

    Figure 15. Creating the storage cloud

Create an I/O Template for Oracle VM Server for SPARC

For Oracle VM Server for SPARC, we will create high availability (HA) vNICs and HA vHBAs within Oracle Fabric Manager. Because the operating system is Oracle VM, Oracle Fabric Manager will simply create two separate vNICs and vHBAs per HA pair. Then, within Oracle Enterprise Manager Ops Center, we will create DLMP link aggregations with the vNICs (see "Create DLMP Link Aggregates" later in this article), and MPxIO will manage the multiple paths to any storage LUN that is presented.

With Oracle Virtual Networking, we can name our vNICs and vHBAs as long as we follow the limitations specified in the "System Limitations and Restrictions" section of the Oracle Virtual Networking Host Drivers for Oracle Solaris 11.1 Release Notes, which states that on Oracle Solaris systems, the names of virtual resources are restricted to the following lengths:

  • vNICs: 10 characters
  • vHBAs: 15 characters
  • Server profiles: 31 characters

We will create the following vNICs and vHBAs:

  • Management, default VLAN 1, ldommgmt1 and ldommgmt1B
  • Live Migration, PVI, ldomlm1 and ldomlm1B
  • LDom PVI Network, PVI, ldompvi1 and ldompvi1B
  • LDom Network, untagged Trunk Ports, ldomnet1 and ldomnet1B
  • Storage, Fibre Channel, cdomhba1 and cdomhba1B
  1. In Oracle Fabric Manager, select I/O Templates in the Navigation panel.

    Figure 16. Selecting the I/O Templates item

  2. Click the Create an I/O Template icon in the I/O Template Summary window.

    Figure 17. Selecting the Create an I/O Template icon

  3. In the I/O Template Editor window, name the template LDOM_Template, click the dual purple Add an HA vNIC to the template icon four times, and then click the dual green Add an HA vHBA to the template icon once.

    Figure 18. I/O Template Editor

  4. Double-click each vNIC and vHBA to edit it, apply the naming convention described before this procedure, and terminate each onto the appropriate network, PVI, or storage cloud.

    You need to create only ldommgmt1, ldomlm1, ldompvi1, ldomnet1, and cdomhba1. Oracle Fabric Manager will create the corresponding <name>1B vNICs and vHBAs, which are the second vNIC or vHBA in each pair.

    Figure 19. Creating the vNICs and vHBAs

  5. Save the template by clicking Save.

    Figure 20. Saving the template

  6. Review your template:

    1. In the I/O Template Summary window, select the recently created template.
    2. Click Edit in the General tab, deselect the Apply Template Name option, and click Submit.

      Oracle Fabric Manager will create server profiles on the Oracle Fabric Interconnects using the host name rather than the template name plus a randomly generated number. The caveat is that the host name must already be set correctly on each host; when binding I/O templates to hosts, Oracle Fabric Manager then uses the host name to create the profile. (A hedged sketch for updating the host name on Oracle Solaris 11 follows this procedure.)

      Figure 21. Deselecting the Apply Template Name option

    3. Click the vNICs tab to verify that the template was created correctly with the appropriate vNICs.

      Figure 22. Verifying that the template was created correctly with the appropriate vNICs

    4. Click the vHBAs tab to verify that the template was created correctly with the appropriate vHBA.

      Figure 23. Verifying that the template was created with the appropriate vHBA
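
The article does not show the host name update itself. As a hedged sketch for Oracle Solaris 11 (the node name ovnt5-2a below is hypothetical; use your own), the host name can be set through the system/identity:node service:

    # Set a new node name (hypothetical value; replace with your own host name)
    svccfg -s system/identity:node setprop config/nodename = astring: ovnt5-2a
    svcadm refresh system/identity:node
    svcadm restart system/identity:node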

Apply the I/O Template to the SPARC Servers

  1. In Oracle Fabric Manager, select Physical Servers in the Navigation panel.

    Figure 24. Selecting the Physical Servers item

    You will see any InfiniBand-attached servers with Oracle Virtual Networking drivers installed.

    Figure 25. Summary of physical servers

  2. Select the servers and click the Assign an I/O Template icon.

    Figure 26. Selecting the "Assign an I/O template to the selected server" icon

  3. Choose the appropriate template in the "Choose a template to assign" window and click Submit.

    Figure 27. Choosing the appropriate template

  4. Click Yes in the confirmation dialog box.

    Figure 28. Confirming that you want to apply the template

  5. In the Recent Jobs Summary window, check the state of the job that binds the I/O profile to the physical server. It might take a short while, but when the job is complete, the State column will show Complete.

    Figure 29. Checking that the I/O profile is bound to the server

    Note: If you are on the control domain (CDom) console while applying the I/O Template, you will see "NOTICE" messages regarding the Oracle Virtual Networking devices.

  6. When the ApplyIOTemplate job shows Complete, select a server and confirm that the virtual I/O has been provisioned as expected. Check both the vNICs and the vHBA.

    Figure 30. Checking the vNICs

    Figure 31. Checking the vHBA

  7. Verify that the rest of the servers have been deployed as expected.

Verify the Oracle Virtual Networking vNICs Deployment

Log on to each SPARC T5-2 server and verify that the Oracle Virtual Networking vNICs are deployed.

Then, using the commands shown in Figure 32 through Figure 35 on the CDom console, map the Oracle Virtual Networking vNIC names to Oracle Solaris network names. Refer to Table 3.

The command shown in Figure 32 and Figure 33 displays a list of all Oracle Virtual Networking vNICs and associates each with an instance number for the xsvnic driver ("xsvnic" is the name of the Oracle Virtual Networking driver as seen by Oracle Solaris).

The command shown in Figure 34 and Figure 35 displays all the Oracle Virtual Networking vNICs by their xsvnic instance numbers; you can then associate these with Oracle Solaris network names.
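
The commands themselves appear only in the screenshots. As a hedged approximation (assuming the xsvnic driver registers its instances in /etc/path_to_inst and that the DEVICE column of dladm show-phys reports the backing xsvnic instance), the two lookups amount to:

    # Associate each Oracle Virtual Networking vNIC with an xsvnic instance number
    grep xsvnic /etc/path_to_inst

    # Associate each xsvnic instance with an Oracle Solaris network name
    # (the DEVICE column shows the xsvnic instance backing each netN link)
    dladm show-phys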

Figure 32. Associating the vNICs on the first server with an xsvnic instance number

Figure 33. Associating the vNICs on the second server with an xsvnic instance number

Figure 34. Associating the xsvnic numbers on the first server to Oracle Solaris network names

Figure 35. Associating the xsvnic numbers on the second server to Oracle Solaris network names

Table 3. Mapping the Oracle Virtual Networking vNIC names to Oracle Solaris network names
System OVNT5-2A                                      System OVNT5-2B
vNIC Name    xsvnic Name   Solaris Network Name      vNIC Name    xsvnic Name   Solaris Network Name
ldommgmt1    xsvnic6       net26                     ldommgmt1    xsvnic6       net26
ldommgmt1B   xsvnic2       net22                     ldommgmt1B   xsvnic2       net30
ldomnet1     xsvnic7       net16                     ldomnet1     xsvnic7       net16
ldomnet1B    xsvnic3       net24                     ldomnet1B    xsvnic3       net29
ldomlm1      xsvnic4       net28                     ldomlm1      xsvnic4       net28
ldomlm1B     xsvnic0       net20                     ldomlm1B     xsvnic0       net21
ldompvi1     xsvnic5       net27                     ldompvi1     xsvnic5       net27
ldompvi1B    xsvnic1       net21                     ldompvi1B    xsvnic1       net31

In Table 3, notice that the Oracle Solaris network names for ldommgmt1B, ldomnet1B, ldomlm1B, and ldompvi1B on system OVNT5-2B are inconsistent with the names of their counterparts on system OVNT5-2A. The next section will explain how to rectify this.

Optional Step to Make Oracle Solaris Network Names Consistent

If you perform this step, do so before any other commands are executed or any IP addresses are assigned. Refer to "The dladm Command" for more details.

Use the dladm rename-link old-linkname new-linkname command shown in Figure 36 to make the Oracle Solaris network names consistent on system OVNT5-2B. Table 4 shows the names after they have been made consistent.
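
Based on Table 3 and Table 4, the four renames on system OVNT5-2B would look like the following sketch; note that net21 must be renamed to net20 before net31 can take the name net21:

    dladm rename-link net30 net22    # ldommgmt1B
    dladm rename-link net29 net24    # ldomnet1B
    dladm rename-link net21 net20    # ldomlm1B (frees the name net21)
    dladm rename-link net31 net21    # ldompvi1B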

Figure 36. Renaming four Oracle Solaris network names

Table 4. Updated Oracle Solaris network names
System OVNT5-2A                                      System OVNT5-2B
vNIC Name    xsvnic Name   Solaris Network Name      vNIC Name    xsvnic Name   Solaris Network Name
ldommgmt1    xsvnic6       net26                     ldommgmt1    xsvnic6       net26
ldommgmt1B   xsvnic2       net22                     ldommgmt1B   xsvnic2       net22
ldomnet1     xsvnic7       net16                     ldomnet1     xsvnic7       net16
ldomnet1B    xsvnic3       net24                     ldomnet1B    xsvnic3       net24
ldomlm1      xsvnic4       net28                     ldomlm1      xsvnic4       net28
ldomlm1B     xsvnic0       net20                     ldomlm1B     xsvnic0       net20
ldompvi1     xsvnic5       net27                     ldompvi1     xsvnic5       net27
ldompvi1B    xsvnic1       net21                     ldompvi1B    xsvnic1       net21

Verify the Oracle Virtual Networking vHBA and Storage Deployment

Using the vHBA information gathered in the "Apply the I/O Template to the SPARC Servers" section, your SAN administrator will now be able to zone in storage to each host. In the following example, we zone in two shared LUNs: one is 30 GB and one is 100 GB.

Note: Zoning and LUN mapping are beyond the scope of this document. Please refer to the documentation from your Fibre Channel and storage vendor for information about zoning or LUN presentation.

Log on to each SPARC T5-2 server and verify the Oracle Virtual Networking storage using the following procedure.

  1. Use the commands shown in Figure 37 and Figure 38 on the CDom to verify that the Oracle Virtual Networking vHBA has been presented.

    Figure 37. Verifying the vHBA on the OVNT5-2A system

    Figure 38. Verifying the vHBA on the OVNT5-2B system

  2. Once the SAN administrator has zoned in storage, reboot the hosts.
  3. Use the format and mpathadm list lu commands on the CDom to verify the disk size and the number of paths per LUN, thus confirming that MPxIO is operational. (A command sketch follows the figures below.)

    Figure 39. Confirming that MPxIO is operational on the OVNT5-2A system

    Figure 40. Confirming that MPxIO is operational on the OVNT5-2B system
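
A minimal sketch of those two checks (output omitted; disk counts and path counts will differ per environment):

    # List disks and their sizes without entering the interactive format menu
    format < /dev/null

    # Show each multipathed LUN with its total and operational path counts
    mpathadm list lu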

Create a Server Pool and Deploy a Guest LDom

In the following subsections, we will use Oracle Enterprise Manager Ops Center to complete all the necessary steps to create a server pool and deploy a guest logical domain (LDom).

Note: This article assumes you have a working installation of Oracle Enterprise Manager Ops Center 12c and that the SPARC T5-2 servers were installed and are managed by Oracle Enterprise Manager Ops Center. Refer to the Oracle Enterprise Manager Ops Center documentation for deployment how-to information.

Configure the CDom

In this section, we will complete the host (CDom) configuration. Using the Oracle Virtual Networking vNICs created and presented in an earlier section, we will create DLMP link aggregates that will be utilized by any guest LDoms.

Create DLMP Link Aggregates

We will create the following DLMP aggregates:

  • ldommgmt1, using net26 and net22
  • ldomnet1, using net16 and net24
  • ldomlm1, using net28 and net20
  • ldompvi1, using net27 and net21
  1. From within Oracle Enterprise Manager Ops Center, select one of the SPARC T5-2 servers. Select the Networks tab and then select the Link Aggregations subtab.
  2. Select the Create Link Aggregation icon.

    Figure 41. Preparing to create a link aggregation

  3. Referring back to Table 4 for the Oracle Solaris network names, create a link aggregation for ldommgmt1, and then click Next.

    Figure 42. Creating a link aggregation for ldommgmt1

  4. On the next screen, set LACP Mode to Off, and then click Next.

    Figure 43. Setting the LACP mode

  5. Click Finish.

    Figure 44. Summary of the link aggregation

  6. Repeat Step 1 through Step 5 for the remaining DLMP link aggregates.
  7. Log on to the CDom and use the dladm command to verify that the aggregates have been created.

    Figure 45. Verifying the aggregates

  8. Display the aggregates.

    Figure 46. Displaying the aggregates

  9. Modify all the aggregates to set them to DLMP mode, using the dladm modify-aggr -m dlmp <aggregate_name> command (see the sketch after this procedure).

    Note: You might need to elevate your access permissions.

  10. Display the aggregates after modifying them.

    Figure 47. Displaying the aggregates after modifying them

  11. Confirm that Oracle Enterprise Manager Ops Center has been updated to reflect the change to DLMP mode. It can sometimes take a while for this to be updated.

    Figure 48. Verifying that Oracle Enterprise Manager Ops Center has been updated

  12. Repeat the process on the remaining CDom.
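
As a hedged sketch, assuming the four aggregate names listed at the beginning of this section, Steps 7 through 10 amount to the following on each CDom:

    # Verify and display the aggregates (Steps 7 and 8)
    dladm show-aggr

    # Switch each aggregate from the default trunk mode to DLMP mode (Step 9)
    for aggr in ldommgmt1 ldomnet1 ldomlm1 ldompvi1; do
        dladm modify-aggr -m dlmp $aggr
    done

    # Display the aggregates again; the MODE column should now show dlmp (Step 10)
    dladm show-aggr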

Confirm the Storage

  1. From within Oracle Enterprise Manager Ops Center, select one of the CDoms. Then select the Storage tab.
  2. Verify that the LUNs are shown.

    Figure 49. Verifying that Oracle Enterprise Manager Ops Center displays the LUNs

  3. Repeat the previous steps on the other CDom and confirm that the LUN GUIDs are the same as those shown for the first CDom.

Create and Modify Networks for Use by the Server Pool

In this section, we will create networks using the Oracle Virtual Networking vNICs that were presented to the CDoms earlier.

Create Additional Networks and Fabric

Before we can create a server pool within Oracle Enterprise Manager Ops Center, we need to define networks.

  1. From within Oracle Enterprise Manager Ops Center, under the Navigation menu select Networks.

    In the following example, because a configured Sun ZFS Storage 7120 appliance had already been imported, Oracle Enterprise Manager Ops Center created some networks automatically.

    Figure 50. Selecting the Network item

  2. From the Actions/Operations menu on the right side, click Define Network.

    Figure 51. Selecting the Define Network item

  3. The first network we will create is the LDom Live Migration network. Enter appropriate values, and select the Create New Untagged Fabric checkbox.

    Figure 52. Creating the LDom Live Migration network

  4. In the Specify Managed Address Ranges section, leave the defaults.
  5. In the Specify Static Routes section, leave the defaults.
  6. In the Specify Network Services section, leave the defaults.
  7. In the Assign Network section, select the appropriate hosts.

    Figure 53. Selecting the hosts

  8. In the Configure Interfaces section, specify the appropriate "NIC" (in this case, it will be the DLMP aggregate that we created earlier). At this stage, do not specify an IP address for this interface.

    Figure 54. Specifying the DLMP aggregate

  9. Associate this new network with an Oracle Enterprise Manager Ops Center Proxy Controller, if required.

    Figure 55. Associating the network with an Oracle Enterprise Manager Ops Center Proxy Controller

  10. Review the information in the Summary section, and then click Finish.

    Figure 56. Summary of the new network

  11. Select the newly created network.

    Figure 57. Selecting the new network

  12. From the Actions panel, click Assign Network.

    Figure 58. Selecting the Assign Network item

  13. In the Select Server Pools and/or Assets section, select the recently configured hosts. Select the Use for Migration checkbox if this network is to be used for live migration.

    Figure 59. Selecting the recently configured hosts

  14. Configure the IP addresses in the Configure Interfaces section.

    Figure 60. Configuring IP addresses

  15. Review the information in the Summary section, and then click Finish.

    Figure 61. Summary of the IP address information

  16. Once the Oracle Enterprise Manager Ops Center job has completed, confirm that the hosts have been added to the recently created network and that the hosts' IP addresses are configured.

    Figure 62. Confirming the hosts have been added

    Figure 63. Confirming the IP addresses have been configured

  17. Repeat the steps in this procedure for any remaining networks.

Create Static Block Storage

Before we can create a server pool within Oracle Enterprise Manager Ops Center, we need to define storage. In the example documented here, we will be using Fibre Channel block storage. You can just as easily use iSCSI or NFS.

  1. From the Navigation panel, select Libraries, and then select Static Block Storage.

    Figure 64. Selecting the Static Block Storage item

  2. From the Actions panel, select New SAN Storage Library.

    Figure 65. Selecting New SAN Storage Library

  3. Give the library a name.

    Figure 66. Specifying a name for the library

  4. In the Identify Library Associations section, select the host(s) where you presented storage earlier.

    Figure 67. Selecting the hosts

  5. In the Identify LUNs section, select the appropriate LUNs.

    Figure 68. Selecting the appropriate LUNs

  6. Review the information in the Summary section, and then click Finish.

    Figure 69. Reviewing the summary information

  7. Repeat the steps in this procedure for any remaining SAN storage.

Create a Server Pool

In this section, we will create a server pool using all the previously configured hosts, networks, and libraries.

  1. From the Navigation panel, select Assets, and then from the drop-down menu, select Server Pools.

    Figure 70. Selecting the Server Pools item

  2. From the Actions panel, select Create Server Pool.

    Figure 71. Selecting Create Server Pool

  3. Give the server pool a name, and select the appropriate virtualization technology.

    Figure 72. Specifying a name for the server pool

  4. Select the server pool members.

    Figure 73. Selecting the server pool members

  5. Select the network domain.

    Figure 74. Selecting the network domain

  6. Associate the network with the servers in the server pool, and select the appropriate migration network.

    Figure 75. Associating the network with the servers in the pool

  7. In the Configure Interfaces section, specify any network connection configuration changes that are required for your setup.

    Figure 76. Specifying configuration settings for the network connection

  8. Associate the libraries with the server pool.

    Figure 77. Associating the libraries with the server pool

  9. Make any policy changes that are appropriate for your setup.

    Figure 78. Specifying policies

  10. Review the information in the Summary section, and then click Finish.

    Figure 79. Reviewing the summary information

Verify the Guest LDoms

Your Oracle VM Server for SPARC setup on Oracle Virtual Networking is now ready for use.

Note: LDom provisioning is beyond the scope of this article. Please refer to "Configuring and Installing Guest Domains" in the Oracle Enterprise Manager Ops Center documentation for more information.

In this section, we will run a simple connectivity (ping) test from the guest LDom to another guest LDom over an Oracle Virtual Networking PVI vNIC, and from a guest LDom to an NFS share that is over the Oracle Virtual Networking 10G vNIC.

In the next few figures, you can see that ovn-ldom01 through ovn-ldom05 have Oracle Solaris vnets that depend on the DLMP aggregates we created earlier, which in turn run over Oracle Virtual Networking vNICs.
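
The screens themselves are not reproduced here; as a hedged sketch, the same correlation can be traced from the CDom command line (the guest name is one of those above):

    # Show the virtual network devices and virtual switches for a guest
    ldm list -o network ovn-ldom01

    # Show the DLMP aggregates that back the virtual switches
    dladm show-aggr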

Figure 80. Screen showing the correlation between LDom and CDom Aggregate #1

Figure 81. Screen showing the correlation between LDom and CDom Aggregate #2

Figure 82. Screen showing the correlation between LDom and CDom Aggregate #3

Figure 83. Screen showing the correlation between LDom and CDom Aggregate #4

  1. Log on to two of the guest LDoms on different hosts. The following example shows logging on to ovn-ldom01 and ovn-ldom02.
  2. Confirm that the IP addresses and MAC addresses are correct. Run ifconfig -a4 and confirm the network configuration.

    Figure 84. Confirming the network configuration on ovn-ldom01

    Figure 85. Confirming the network configuration on ovn-ldom02

  3. From ovn-ldom01, ping ovn-ldom02 over the ldompvi1 network.

    Figure 86. Pinging ovn-ldom02

  4. From ovn-ldom01, ping an external host over the ldomnet1 network. The following example pings the external NFS server and also uses showmount to display its available shares. (A hedged sketch of both tests follows the figures below.)

    Figure 87. Pinging an external host and displaying available NFS shares
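
A hedged sketch of the two tests, run from ovn-ldom01, with hypothetical IP addresses standing in for the PVI peer and the external NFS server:

    # Ping ovn-ldom02 over the ldompvi1 PVI network (hypothetical address)
    ping 192.168.100.12

    # Ping the external NFS server and list its exported shares (hypothetical address)
    ping 10.0.1.50
    showmount -e 10.0.1.50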

Appendix: Uploading the Oracle Virtual Networking Host Driver to Oracle Enterprise Manager Ops Center 12c

The following procedure explains how to upload an Oracle Virtual Networking host driver to Oracle Enterprise Manager Ops Center and then add it to the Oracle Solaris 11 Software Update Library.

It is assumed that you have already downloaded the Oracle Virtual Networking host driver, as described in Step 1 of the "Install the Oracle Virtual Networking Host Driver" section.

  1. Copy the host driver locally to the Oracle Enterprise Manager Ops Center server or to an HTTP location accessible from the server. In this example, it will be copied locally (/root) to the server.
  2. Unzip and untar the downloaded driver.

    You will now have an ORCLovn folder.

  3. Log on to the Oracle Enterprise Manager Ops Center management page.
  4. Go to Libraries.
  5. Expand Software Libraries, select Oracle Solaris 11 Software Update Library, then select Content, and then select Add Content from the Actions menu.

    Figure 88. Oracle Solaris 11 Software Update Library

  6. Specify the location of the unzipped and untarred files. In this example, the path is /root/ORCLovn. Click Next.

    Figure 89. Specifying the location of the files

  7. Verify the information shown in the Summary window and then click Finish.

    Figure 90. Verifying the information shown in the Summary window

Once Oracle Enterprise Manager Ops Center has refreshed its library, you can complete the following steps.

Note: If you are familiar with Oracle Enterprise Manager Ops Center, you could complete the following steps with an OS Update Profile and post-installation script.

  1. Log on to each host and confirm you have an ORCLovn publisher by using the pkg publisher command. You should see results similar to this:

    admin@sca05-ovnt5-2a:~$ pkg publisher
    PUBLISHER                 TYPE     STATUS  P  LOCATION
    solaris                   origin   online  F  https://oracle-oem-oc-mgmt-ovn-sloc01:8002/IPS/
    opscenter                 origin   online  F  https://oracle-oem-oc-mgmt-ovn-sloc01:8002/IPS/
    cacao                     origin   online  F  https://oracle-oem-oc-mgmt-ovn-sloc01:8002/IPS/
    mp-re       (non-sticky)  origin   online  F  https://oracle-oem-oc-mgmt-ovn-sloc01:8002/IPS/
    ORCLovn                   origin   online  F  https://oracle-oem-oc-mgmt-ovn-sloc01:8002/IPS/
    admin@sca05-ovnt5-2a:~$
    
  2. As root, install the host drivers by using the pkg install command and specifying the host driver package name, for example:

    pkg install ORCLovn-drv
    
  3. After the host drivers are installed, the xsadmd service is sometimes left in a disabled state. Before rebooting the server, issue the following commands to ensure the service is enabled:

    svccfg -s application/xsadmd:default setprop general/enabled = true
    svccfg -s application/xsadmd:default refresh
    
  4. Allow the previous commands to complete, and then reboot the server to load the drivers into memory, for example:

    shutdown -y -g0 -i6
    
  5. After the reboot, verify that the host driver is installed by performing any of the following:

    • Issue the pkg list command with grep to search for ORCLovn-drv (part of the driver file name).
    • Issue the svcs xsadmd command. If the xsadmd service is present and online, the Oracle Virtual Networking host driver is installed.
    • Issue the modinfo command with grep to search for xs to see the modules that were installed.

See Also

About the Author

Satinder Nijjar is a principal product manager for Oracle Virtual Networking products and has 18 years of IT experience in industries ranging from financial services, retail, and healthcare to education. Satinder joined Oracle in 2012 as part of the Xsigo Systems acquisition.

Revision 1.0, 09/08/2014
