Configuring Oracle Solaris Cluster with a Sun ZFS Storage Appliance NAS

by Venugopal Navilugon Shreedhar, November 2011


How to deploy Oracle Solaris Cluster with an active/active clustered Sun ZFS Storage Appliance as a NAS device. This configuration can survive the failure of a server node and preserve data integrity through NFS fencing, and it can also survive the failure of one appliance head by migrating resources to the other head.




Introduction

This article describes how to configure Oracle Solaris Cluster 3.3 5/11 with an active/active clustered Sun ZFS Storage Appliance from Oracle used as a network-attached storage (NAS) device with support for NFS fencing. The following storage configuration is used throughout this article: two pools, one assigned to each appliance head, with the two heads physically connected to two different subnets. Each Oracle Solaris Cluster node is connected to both subnets through different network adapters. With this configuration, during normal operations, each head is active and provides access through its IP address to the shares on its associated pool. If one head fails, its resources (that is, its network interfaces and pool) migrate to the other head, preserving access to its shares.


The Configuring the Sun ZFS Storage Appliance section describes the configuration required on the active/active clustered Sun ZFS Storage Appliance so that it works with Oracle Solaris Cluster. This includes configuring the workflow and creating the projects and file systems that will be exported to the cluster nodes.

Then the Configuring Oracle Solaris Cluster for the Sun ZFS Storage Appliance section describes how to configure Oracle Solaris Cluster to work with the Sun ZFS Storage Appliance. This includes installing the necessary client package on the cluster to work with the Sun ZFS Storage Appliance, mounting the file system, and bringing the file system under cluster control by creating storage resource groups and resources.

Note: This article assumes an active/active clustered Sun ZFS Storage Appliance has been set up and is working as desired. For details on the Sun ZFS Storage Appliance, refer to the Oracle Unified Storage Systems product documentation.

The Sun ZFS Storage Appliance can be configured with a command line interface (CLI) or a browser user interface (BUI). Throughout this article, the CLI is used to configure both the Sun ZFS Storage Appliance and Oracle Solaris Cluster. However, in the final section, Checking the Status of the Sun ZFS Storage Appliance, the configuration of the Sun ZFS Storage Appliance is shown in the BUI for reference.
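
The appliance CLI is reached by connecting with ssh to the management interface of one of the heads and logging in as an administrative user; you then land at the prompt shown in the listings that follow (for example, qualtoro1:>). A minimal example, run from an administrative host:

    # ssh root@qualtoro1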

Figure 1 shows an overview of the setup of Oracle Solaris Cluster with the active/active clustered Sun ZFS Storage Appliance. To focus on the configuration of the Sun ZFS Storage Appliance and Oracle Solaris Cluster for enabling the fencing/lock-release support, the network connectivity is shown in a simplified manner, that is, without the redundancy of interfaces on either the Oracle Solaris Cluster nodes' side or the Sun ZFS Storage Appliance heads' side. Additional network interfaces should be used on the cluster nodes and on the Sun ZFS Storage Appliance heads with an IPMP group to provide networking redundancy.
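
For example, on cluster nodes running Oracle Solaris 10, a minimal link-based IPMP configuration for a pair of adapters on node ptop1's 10.134.112.0 subnet could look like the following sketch. The interface names (e1000g0 and e1000g1) and the group name (sc_ipmp0) are hypothetical, so adapt them to your hardware and naming conventions.

    # cat /etc/hostname.e1000g0
    10.134.112.69 netmask + broadcast + group sc_ipmp0 up
    # cat /etc/hostname.e1000g1
    group sc_ipmp0 standby up

With this in place, the data address 10.134.112.69 stays available if one adapter or its link fails; a similar group would be configured for the second subnet and on the other node.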

Note: Figure 1 shows an example configuration. The steps described in this document can be applied to other types of servers (x86 or SPARC), as well as to other clustered Sun ZFS Storage Appliance models.


Figure 1. Example Configuration

Configuring the Sun ZFS Storage Appliance

Here are the procedures for configuring the Sun ZFS Storage Appliance.

Checking and Updating the Software Version

Do the following to determine what version of software is installed on the Sun ZFS Storage Appliance and update the software if it is not the latest supported version.

  1. On qualtoro1, run the following command:

    qualtoro1:maintenance system updates> ls
    Updates:
    
    UPDATE                             DATE                   STATUS
    ak-nas@2010.08.17.3.0,1-0.17       2011-4-5 22:17:38      previous
    ak-nas@2010.08.17.4.0,1-1.31       2011-5-25 15:04:11     current
    
  2. On qualtoro2, run the following command:

    qualtoro2:maintenance system updates> ls
    Updates:
    
    UPDATE                             DATE                   STATUS
    ak-nas@2010.08.17.3.0,1-0.17       2011-4-5 22:17:38      previous
    ak-nas@2010.08.17.4.0,1-1.31       2011-5-25 15:04:11     current
    
  3. Refer to the relevant wiki page for information about the latest supported software.
  4. If the Sun ZFS Storage Appliance does not have the latest supported version of the software, do the following to update the software:
    1. Sign in to My Oracle Support.
    2. Select the Patches & Updates tab.
    3. Search by Sun ZFS Storage Appliance product family or by Patch ID.
    4. Download the zip file to your local system and unzip the file.

Configuring the Workflow for Oracle Solaris Cluster Using the CLI

Use the following steps to enable full support of the appliance as a NAS device for Oracle Solaris Cluster.

  1. To execute the workflow on qualtoro1, run the commands shown in Listing 1.

    Listing 1: Executing a Workflow on qualtoro1
    qualtoro1:maintenance workflows> ls
    Properties:
                      showhidden = false
    
    Workflows:
    
    WORKFLOW                       NAME                                OWNER  SETID   ORIGIN
    workflow-000   Configure for Oracle Solaris Cluster NFS            root   false   Oracle Corporation
    workflow-001   Unconfigure Oracle Solaris Cluster NFS              root   false   Oracle Corporation
    workflow-002   Configure for Oracle Enterprise Manager Monitoring  root   false   Sun Microsystems, Inc.
    workflow-003  Unconfigure Oracle Enterprise Manager Monitoring     root   false   Sun Microsystems, Inc.
    
    qualtoro1:maintenance workflows> select workflow-000
    qualtoro1:maintenance workflow-000> execute
    qualtoro1:maintenance workflow-000 execute (uncommitted)> set password=password
                    password = ******
    qualtoro1:maintenance workflow-000 execute (uncommitted)> set changePassword=false
                    changePassword = false
    qualtoro1:maintenance workflow-000 execute (uncommitted)> commit
    OSC configuration successfully completed.
    qualtoro1:maintenance workflow-000>
    
  2. To execute the workflow on qualtoro2, run the commands shown in Listing 2.

    Listing 2: Executing a Workflow on qualtoro2
    qualtoro2:maintenance workflows> ls
    Properties:
                            showhidden = false
    
    Workflows:
    
    WORKFLOW                       NAME                                OWNER  SETID  ORIGIN
    workflow-000   Configure for Oracle Solaris Cluster NFS            root   false  Oracle Corporation
    workflow-001   Unconfigure Oracle Solaris Cluster NFS              root   false  Oracle Corporation
    workflow-002   Configure for Oracle Enterprise Manager Monitoring  root   false  Sun Microsystems, Inc.
    workflow-003  Unconfigure Oracle Enterprise Manager Monitoring     root   false  Sun Microsystems, Inc.
    
    qualtoro2:maintenance workflows> select workflow-000
    qualtoro2:maintenance workflow-000> execute
    qualtoro2:maintenance workflow-000 execute (uncommitted)> set password=password
                     password = ******
    qualtoro2:maintenance workflow-000 execute (uncommitted)> set changePassword=false
                     changePassword = false
    qualtoro2:maintenance workflow-000 execute (uncommitted)> commit
    OSC configuration successfully completed.
    qualtoro2:maintenance workflow-000>
    
  3. On qualtoro1, verify that the workflow has been executed by checking that the osc_agent user was added:

    qualtoro1:configuration users> show
    Users:
    
    NAME                                   USERNAME          UID            TYPE
    Oracle Agent                           oracle_agent      2000000001     Loc
    Oracle Solaris Cluster Agent           osc_agent         2000000000     Loc   <===
    Super-User                             root              0              Loc
    
  4. On qualtoro2, verify that the workflow has been executed by checking that the osc_agent user was added:

    qualtoro2:configuration users> show
    Users:
    
    NAME                                   USERNAME          UID            TYPE
    Oracle Agent                           oracle_agent      2000000001     Loc
    Oracle Solaris Cluster Agent           osc_agent         2000000000     Loc   <===
    Super-User                             root              0              Loc
    

Creating a Project and Shares (File Systems) Using the CLI

  1. Create a project and define NFS exceptions for qualtoro1.

    In the following example, a new project is created (test-project-q1) on qualtoro1, and some network exceptions are set for NFS. The exceptions are set to allow access to the shares in that project only to specific clients. Here, the clients are the Oracle Solaris Cluster nodes, referenced by their IP addresses on the same subnet as the head qualtoro1. (See ptop1 and ptop2 in Figure 1.) The clients are referenced by their IP addresses and a prefix of 32 in CIDR notation (10.134.112.69/32 and 10.134.112.70/32, in this example).

    qualtoro1:shares> project test-project-q1
    qualtoro1:shares test-project-q1 (uncommitted)> set  \
    sharenfs="sec=sys,root=@10.134.112.69/32:@10.134.112.70/32,rw=@10.134.112.69/32:@10.134.112.70/32"
    sharenfs = sec=sys,root=@10.134.112.69/32:@10.134.112.70/32,rw=@10.134.112.69/32:@10.134.112.70/32 (uncommitted)
    qualtoro1:shares test-project-q1 (uncommitted)> commit
    
  2. Create a project and define NFS exceptions for qualtoro2.

    In the following example, a new project is created (test-project) on qualtoro2, and some network exceptions are set for NFS. The exceptions are set to allow access to the shares in that project only to specific clients. Here, the clients are the Oracle Solaris Cluster nodes, referenced by their IP addresses on the same subnet as the head qualtoro2. (See ptop1b and ptop2b in Figure 1.) The clients are referenced by their IP addresses and a prefix of 32 in CIDR notation (10.134.113.69/32 and 10.134.113.70/32, in this example).

    qualtoro2:shares> project test-project
    qualtoro2:shares test-project (uncommitted)> set  \
    sharenfs="sec=sys,root=@10.134.113.69/32:@10.134.113.70/32,rw=@10.134.113.69/32:@10.134.113.70/32"
    sharenfs = sec=sys,root=@10.134.113.69/32:@10.134.113.70/32,rw=@10.134.113.69/32:@10.134.113.70/32 (uncommitted)
    qualtoro2:shares test-project (uncommitted)> commit
    
  3. Create a file system in the project for qualtoro1.

    In the example shown in Listing 3, an NFS file system called nfs-for-test-q1 is created. The NFS exceptions are inherited from the previously created project.

    Listing 3: Creating a File System for qualtoro1
    qualtoro1:shares> select test-project-q1
    qualtoro1:shares test-project-q1> filesystem nfs-for-test-q1
    qualtoro1:shares test-project-q1/nfs-for-test-q1 (uncommitted)> commit
    qualtoro1:shares test-project-q1> ls
    Properties:
      aclinherit              =  restricted
      atime                   =  true
      checksum                =  fletcher4
      compression             =  off
      dedup                   =  false
      compressratio           =  100
      copies                  =  1
      creation                =  Thu Jul 28 2011 23:33:53 GMT+0000 (UTC)
      logbias                 =  latency
      mountpoint              =  /export
      quota                   =  0
      readonly                =  false
      recordsize              =  128K
      reservation             =  0
      secondarycache          =  all
      nbmand                  =  false
      sharesmb                =  off
      sharenfs                =  sec=sys,root=@10.134.112.69/32:@10.134.112.70/32,rw=@10.134.112.69/32:@10.134.112.70/32
      snapdir                 =  hidden
      vscan                   =  false
      sharedav                =  off
      shareftp                =  off
      sharesftp               =  off
      sharetftp               =  off
      pool                    =  pool-0
    
      canonical_name          =  pool-0/local/test-project-q1
      default_group           =  other
      default_permissions     =  700
      default_sparse          =  false
      default_user            =  nobody
      default_volblocksize    =  8K
      default_volsize         =  0
      exported                =  true
      nodestroy               =  false
      space_data              =  62K
      space_unused_res        =  0
      space_unused_res_shares =  0
      space_snapshots         =  0
      space_available         =  3.28T
      space_total             =  62K
      origin                  =
    
    Shares:
    
    
    
    Filesystems:
    
    NAME                    SIZE           MOUNTPOINT
    nfs-for-test-q1         31K            /export/nfs-for-test-q1
    
    Children:
                       groups => View per-group usage and manage group quotas
                       replication => Manage remote replication
                       snapshots => Manage snapshots
                       users => View per-user usage and manage user quotas
    
    qualtoro1:shares test-project-q1>
    
  4. Create a file system in the project for qualtoro2.

    In the example in Listing 4, an NFS file system called nfs-for-test is created. The NFS exceptions are inherited from the previously created project.

    Listing 4: Creating a File System for qualtoro2
    qualtoro2:shares> select test-project
    qualtoro2:shares test-project> filesystem nfs-for-test
    qualtoro2:shares test-project/nfs-for-test (uncommitted)> commit
    qualtoro2:shares test-project> ls
    
    Properties:
      aclinherit              =  restricted
      atime                   =  true
      checksum                =  fletcher4
      compression             =  off
      dedup                   =  false
      compressratio           =  100
      copies                  =  1
      creation                =  Thu Jul 28 2011 23:33:53 GMT+0000 (UTC)
      logbias                 =  latency
      mountpoint              =  /export
      quota                   =  0
      readonly                =  false
      recordsize              =  128K
      reservation             =  0
      secondarycache          =  all
      nbmand                  =  false
      sharesmb                =  off
      sharenfs                =  sec=sys,root=@10.134.113.69/32:@10.134.113.70/32,rw=@10.134.113.69/32:@10.134.113.70/32
      snapdir                 =  hidden
      vscan                   =  false
      sharedav                =  off
      shareftp                =  off
      sharesftp               =  off
      sharetftp               =  off
      pool                    =  pool-1
    
      canonical_name          =  pool-1/local/test-project
      default_group           =  other
      default_permissions     =  700
      default_sparse          =  false
      default_user            =  nobody
      default_volblocksize    =  8K
      default_volsize         =  0
      exported                =  true
      nodestroy               =  false
      space_data              =  62K
      space_unused_res        =  0
      space_unused_res_shares =  0
      space_snapshots         =  0
      space_available         =  3.28T
      space_total             =  62K
      origin                  =
    
    
    Shares:
    
    
    
    Filesystems:
    
    NAME                    SIZE           MOUNTPOINT
    nfs-for-test            31K            /export/nfs-for-test
    
    Children:
                       groups => View per-group usage and manage group quotas
                       replication => Manage remote replication
                       snapshots => Manage snapshots
                       users => View per-user usage and manage user quotas
    
    qualtoro2:shares test-project>
    
  5. Repeat Steps 1 through 4 as needed. Different projects can be created on either of the heads, since the configuration is active/active. Make sure the IP addresses of the clients are on the same subnet as the head on which you create the projects.

    Note that in the previous examples, the projects and shares will physically reside on the pool pool-1 for qualtoro2 and on pool-0 for qualtoro1, since they were created on the heads qualtoro2 and qualtoro1, respectively.
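
Optionally, before moving on to the cluster configuration, you can confirm from one of the allowed cluster nodes (ptop1 or ptop2) that each head exports the expected file systems. This is only a sanity check that uses the showmount command shipped with Oracle Solaris:

    # /usr/sbin/showmount -e qualtoro1
    # /usr/sbin/showmount -e qualtoro2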

Configuring Oracle Solaris Cluster for the Sun ZFS Storage Appliance

The following sections show example commands to execute on the Oracle Solaris Cluster nodes so that the cluster recognizes and manages the NFS file systems created on the appliance.

Be aware of the following regarding the example commands:

  • The nodes' IP addresses on subnet 10.134.113.0 are 10.134.113.69 (ptop1b) and 10.134.113.70 (ptop2b), respectively.
  • The nfs-for-test file system that was added in the previous section was created on head qualtoro2 in project test-project on pool-1.
  • The nodes' IP addresses on subnet 10.134.112.0 are 10.134.112.69 (ptop1) and 10.134.112.70 (ptop2), respectively.
  • The nfs-for-test-q1 file system that was added in the previous section was created on head qualtoro1 in project test-project-q1 on pool-0.
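
The clnas and mount commands in the following sections reference the appliance heads by host name (qualtoro1 and qualtoro2), so both names must be resolvable from every cluster node. If you are not relying on a naming service, hypothetical /etc/hosts entries on each node might look like the following; the addresses shown are placeholders, so substitute the actual data IP address assigned to each head.

    # Hypothetical entries; replace with the data addresses of the appliance heads.
    10.134.112.10   qualtoro1
    10.134.113.10   qualtoro2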

Downloading and Installing the Client Package SUNWsczfsnfs on Each Node

Perform the following steps to download the p12736304_120_Generic.zip file and install the SUNWsczfsnfs client package. For reference, here is more information on the client package:

Patch 12736304: SUN ZFS STORAGE APPLIANCE NETWORK FILE SYSTEM PLUG IN FOR ORACLE SOLARIS CLUSTER VERSION 1.0

Also see the "How to Install a Sun ZFS Storage Appliance in a Cluster" section in Chapter 3, "Installing and Maintaining Oracle's Sun ZFS Storage Appliances as NAS Devices in an Oracle Solaris Cluster Environment" of the Network-Attached Storage Device manual for Oracle Solaris Cluster.

  1. Log in to My Oracle Support.
  2. Select the Patches & Updates tab.
  3. In the Patch Search pane, select Product or Family (Advanced Search), and select the Include All Products in a Family checkbox.
  4. In the Product field, type Sun Hardware - Unified Storage.
  5. In the Release field, select the appropriate Sun ZFS Storage Appliance software release (for example, Sun ZFS Storage 7000 Software 2010.Q3).
  6. Click Search. All available patches appear in the Search Results screen.
  7. In the Patch Name list, select the number that corresponds to the release you want to download.
  8. Click Download to open the File Download dialog box.
  9. Click the link to download the zip package.
  10. Unzip the package.
  11. To install the package, do the following on each node:

    1. Navigate to the directory containing the downloaded package. The package must be installed within the global zone.
    2. Run the following commands; a quick verification check follows these steps:

      # cd <location of the downloaded package>
      # pkgadd -d . SUNWsczfsnfs
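
    To confirm on each node that the package is now registered, you can use the standard Oracle Solaris packaging tools, for example:

      # pkginfo -l SUNWsczfsnfs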
      

Adding the Sun ZFS Storage Appliance to the Oracle Solaris Cluster Configuration

  1. Execute the following command on one of the cluster nodes, for example, on ptop1 or ptop2.

    In the following example, the head called qualtoro2 is used.

    # /usr/cluster/bin/clnas add -t sun_uss -p userid=osc_agent qualtoro2
    Enter password: <password you set when you executed the workflow>
    
  2. Execute the same command on one of the cluster nodes for the other head.

    In the following example, the head called qualtoro1 is used.

    # /usr/cluster/bin/clnas add -t sun_uss -p userid=osc_agent qualtoro1
    Enter password:  <password you set when you executed the workflow>
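
Optionally, confirm that both heads have been registered as NAS devices in the cluster configuration by using the clnas list subcommand:

    # /usr/cluster/bin/clnas list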
    

Adding the Device Properties

  1. Execute the following commands on one of the cluster nodes, for example, on ptop1 or ptop2.

    In the following example, the head called qualtoro2 is used.

    # /usr/cluster/bin/clnas set -p "nodeIPs{ptop1}"=10.134.113.69 qualtoro2
    # /usr/cluster/bin/clnas set -p "nodeIPs{ptop2}"=10.134.113.70 qualtoro2
    
  2. Execute the same commands on one of the cluster nodes for the other head.

    In the following example, the head called qualtoro1 is used.

    # /usr/cluster/bin/clnas set -p "nodeIPs{ptop1}"=10.134.112.69 qualtoro1
    # /usr/cluster/bin/clnas set -p "nodeIPs{ptop2}"=10.134.112.70 qualtoro1
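
To verify the properties right away, you can run the same show subcommand that is used later in Listing 5 against each device, for example:

    # /usr/cluster/bin/clnas show -v qualtoro2
    # /usr/cluster/bin/clnas show -v qualtoro1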
    

Adding the Project to the Configuration

  1. Execute the following commands on one of the cluster nodes, for example, on ptop1 or ptop2.

    In the following example, the head called qualtoro2 is used.

    # /usr/cluster/bin/clnas find-dir qualtoro2
    === NAS Devices ===
    
    Nas Device:                                   qualtoro2
      Type:                                            sun_uss
      Unconfigured Project:                 pool-1/local/test-project
    
    # /usr/cluster/bin/clnas add-dir -d pool-1/local/test-project qualtoro2
    
  2. Execute the same commands on one of the cluster nodes for the other head.

    In the following example, the head called qualtoro1 is used.

    # /usr/cluster/bin/clnas find-dir qualtoro1
    === NAS Devices ===
    
    Nas Device:                                    qualtoro1
      Type:                                             sun_uss
      Unconfigured Project:                 pool-0/local/test-project-q1
    
    # /usr/cluster/bin/clnas add-dir -d pool-0/local/test-project-q1 qualtoro1
    

Verifying the Configuration

Execute the command shown in Listing 5 on any of the cluster nodes, for example, on ptop1 or ptop2.

Listing 5: Verifying the Configuration
# /usr/cluster/bin/clnas show -v -d all

=== NAS Devices ===

Nas Device:                   qualtoro2
 Type:                        sun_uss
 userid:                      osc_agent
 nodeIPs{ptop1}:              10.134.113.69
 nodeIPs{ptop2}:              10.134.113.70
 Project:                     pool-1/local/test-project
 File System:                 /export/nfs-for-test

Nas Device:                   qualtoro1
  Type:                       sun_uss
  userid:                     osc_agent
  nodeIPs{ptop1}:             10.134.112.69
  nodeIPs{ptop2}:             10.134.112.70
  Project:                    pool-0/local/test-project-q1
  File System:                /export/nfs-for-test-q1

Using the File System

Now the file systems can be used. Perform the following steps.

  1. Create mount points for the file systems on all the cluster nodes:

    # mkdir -p /qualtoro2/export/nfs-for-test
    # mkdir -p /qualtoro1/export/nfs-for-test-q1
    
  2. Add an entry to the /etc/vfstab file on all the cluster nodes, as shown in Listing 6.

    Listing 6: Adding an Entry to the /etc/vfstab File
    # echo "qualtoro2:/export/nfs-for-test - /qualtoro2/export/nfs-for-test nfs - no \
    rw,suid,bg,forcedirectio,rsize=32768,wsize=32768,hard,noac,proto=tcp,vers=3,nointr" >> /etc/vfstab
    # tail /etc/vfstab
    ...
    qualtoro2:/export/nfs-for-test - /qualtoro2/export/nfs-for-test nfs - no
    rw,suid,bg,forcedirectio,rsize=32768,wsize=32768,hard,noac,proto=tcp,vers=3,nointr
    # echo "qualtoro1:/export/nfs-for-test-q1 - /qualtoro1/export/nfs-for-test-q1 nfs - no \
    rw,suid,bg,rsize=32768,wsize=32768,hard,noac,proto=tcp,vers=3,nointr" >> /etc/vfstab
    # tail /etc/vfstab
    ...
    qualtoro1:/export/nfs-for-test-q1 - /qualtoro1/export/nfs-for-test-q1 nfs - no
    rw,suid,bg,rsize=32768,wsize=32768,hard,noac,proto=tcp,vers=3,nointr
    

    In this example, the file system /qualtoro1/export/nfs-for-test-q1 is for the Oracle binaries and the file system /qualtoro2/export/nfs-for-test is for Oracle Database files.

    Note that /qualtoro1/export/nfs-for-test-q1 doesn't have the mount option forcedirectio because it is going to be used for the Oracle Database binaries installation.

    The mount options are application-specific, so consult your application vendors for their recommended options.

    For information about the mount point options for Oracle Database, see the relevant article on My Oracle Support (login required).

  3. Mount the added file systems on all the cluster nodes:

    # mount /qualtoro2/export/nfs-for-test
    # mount /qualtoro1/export/nfs-for-test-q1
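
    To confirm that the file systems are mounted with the intended options on each node, the standard df and nfsstat commands can be used as a quick check, for example:

    # df -h /qualtoro2/export/nfs-for-test /qualtoro1/export/nfs-for-test-q1
    # nfsstat -m /qualtoro2/export/nfs-for-test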
    

Checking the Fencing Functionality

One way to quickly check that fencing is working is to proceed as follows:

  1. Boot one of the Oracle Solaris Cluster nodes in non-cluster mode. (An example is shown after these steps.)
  2. Verify that the file systems mounted from the NAS are read-only on that node:

    # cd /qualtoro2/export/nfs-for-test
    # touch testthis
    touch: cannot create testthis: Read-only file system  <----
    #
    # cd /qualtoro1/export/nfs-for-test-q1
    # touch testthis
    touch: cannot create testthis: Read-only file system  <---
    #
    
  3. Reboot that same node back into cluster mode.
  4. Verify that the file systems are now writable on that node:

    # cd /qualtoro2/export/nfs-for-test
    # touch testthis
    # ls -l testthis                           <------------- It is writable
    -rw-r--r-- 1 root root 0 May 5 23:08 testthis
    # rm testthis
    # cd /qualtoro1/export/nfs-for-test-q1
    # touch testthis
    # ls -l testthis                            <------------ It is writable
    -rw-r--r-- 1 root root 0 May 5 23:15 testthis
    # rm testthis
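
Regarding step 1, how you boot a node in non-cluster mode depends on the platform: on SPARC systems, boot with the -x option from the OpenBoot PROM prompt; on x86 systems, add -x to the kernel boot arguments in the GRUB menu; or, from a running node, pass the option through the reboot command. A sketch of two of these options:

    ok boot -x                            <--- on SPARC, from the OpenBoot PROM prompt
    # reboot -- -x                        <--- from a running node, reboots it in non-cluster mode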
    

Adding the Storage Resource Group and Resources

After the NAS device is installed and configured, you can use the ScalMountPoint resource to configure failover and scalable applications.

An instance of the ScalMountPoint resource type represents the mount point of one of the following types of file systems:

  • QFS shared file systems
  • File systems on a NAS device

The NAS device and the file systems must already be configured for use with Oracle Solaris Cluster.

The ScalMountPoint resource type is a scalable resource type. An instance of this resource type is online on each node in the node list of the resource group that contains the resource.

  1. To configure the ScalMountPoint resources, execute the following commands:

    # /usr/cluster/bin/clrg create -S  scal_mnt_rg
    # /usr/cluster/bin/clrt register SUNW.ScalMountPoint
    # /usr/cluster/bin/clrs create -d -g scal_mnt_rg -t SUNW.ScalMountPoint -x \
    MountPointDir=/qualtoro1/export/nfs-for-test-q1 -x FileSystemType=nas -x \
    TargetFileSystem=qualtoro1:/export/nfs-for-test-q1 nfs-for-test-q1-rs
    # /usr/cluster/bin/clrs create -d -g scal_mnt_rg -t SUNW.ScalMountPoint -x \
    MountPointDir=/qualtoro2/export/nfs-for-test -x FileSystemType=nas -x \
    TargetFileSystem=qualtoro2:/export/nfs-for-test nfs-for-test-rs
    # /usr/cluster/bin/clrg online -eM scal_mnt_rg
    
  2. To configure specific applications to work with ScalMountPoint resources, run commands such as those shown in Listing 7.

    The example in Listing 7 shows how to configure Oracle server and listener resources with ScalMountPoint resources. As mentioned earlier, Oracle binaries and Oracle Database files have been deployed on the Sun ZFS Storage Appliance. The Oracle binaries are on a file system that belongs to the first head, qualtoro1 (that is, /qualtoro1/export/nfs-for-test-q1), and the database files are on a file system that belongs to the second head, qualtoro2 (that is, /qualtoro2/export/nfs-for-test).

    Listing 7: Configuring Applications to Work with ScalMountPoint Resources
    # /usr/cluster/bin/clrg create -p rg_affinities=++scal_mnt_rg  oracle-rg
    # /usr/cluster/bin/clrt register SUNW.oracle_server
    # /usr/cluster/bin/clrt register SUNW.oracle_listener
    # /usr/cluster/bin/clreslogicalhostname create -g oracle-rg -h top-1 top-1
    # /usr/cluster/bin/clrs create -g oracle-rg -t SUNW.oracle_server -x Oracle_sid=DB2  \
    -x Oracle_home=/qualtoro1/export/nfs-for-test-q1/11g/product/11.2.0/dbhome_1 \
    -x Alert_log_file=/qualtoro1/export/nfs-for-test-q1/11g/diag/rdbms/db2/DB2/trace/alert_DB2.log \
    -x Parameter_file=/qualtoro2/export/nfs-for-test/app/oracle/admin/DB2/pfile/init.ora.7302010122833 \
    -x Connect_string=homer/simpson -y Resource_dependencies_offline_restart=nfs-for-test-rs,nfs-for-test-q1-rs  \
    test-orcl-server
    # /usr/cluster/bin/clrs create  -g oracle-rg -t SUNW.oracle_listener  \
    -x  Oracle_home=/qualtoro1/export/nfs-for-test-q1/11g/product/11.2.0/dbhome_1   \
    -x Listener_name=LISTENER_DB2 \
    -y Resource_dependencies_offline_restart=nfs-for-test-q1-rs  test-orcl-lsnr
    # /usr/cluster/bin/clrg online -eM oracle-rg
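
Once both resource groups are online, a quick status check with the standard status subcommands shows the state of the resource groups and their resources on each node:

    # /usr/cluster/bin/clrg status scal_mnt_rg oracle-rg
    # /usr/cluster/bin/clrs status -g scal_mnt_rg,oracle-rg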
    

Refer to the section "Configuring Failover and Scalable Applications" in Chapter 2, "Administering Data Service Resources," of the Oracle Solaris Cluster Data Services Planning and Administration Guide for further details.

Checking the Status of the Sun ZFS Storage Appliance

This section shows how to check the status of the Oracle Sun ZFS Storage Appliance from the BUI after configuring projects and file systems.

  1. Open a browser and connect to the IP address of one of the heads, or its resolved fully qualified host name, on port 215 (for example, https://qualtoro1:215).
  2. Log in as user root, as shown in Figure 2.

Figure 2. Logging In to the BUI

Figure 3 shows how the project test-project-q1 on qualtoro1 looks.


Figure 3. Project test-project-q1 on qualtoro1

Figure 4 shows how the file system /export/nfs-for-test-q1 configured with the project test-project-q1 looks on qualtoro1.


Figure 4. File system /export/nfs-for-test-q1 on qualtoro1

Figure 5 shows how the networking and storage pools look on the Sun ZFS Storage Appliance after the active/active clustered storage is configured.


Figure 5. Networking and Storage Pools on the Sun ZFS Storage Appliance


Revision 1.0, 11/16/2011
