Creating the Zone Cluster

Part IV of How to Upgrade to Oracle Solaris Cluster 4.0

by Tim Read

Published May 2012

Part I - Overview of the Example Configuration
Part II - Configuring the Oracle Database for Clustering
Part III - Installing the Target Cluster
Part IV - Creating the Zone Cluster
Part V - Installing the New Application Software Stack
Part VI - Creating the Standby Database
Part VII - Creating the Oracle Solaris Cluster Geographic Edition Configuration
Part VIII - How Oracle Solaris Cluster Geographic Edition Simplifies the Upgrade Process

The next step is to create the zone cluster in which the new database is contained. The zpool and the logical host name resources to which the zone cluster has access are added in a later step.

Caution: If you choose to use this article as a guide for performing a similar process, you need to pay close attention to the nodes on which the individual commands are run. For that reason, the system prompts shown in the example steps include both the node name and the user name to indicate both where, and as whom, a command must be run.

The zonepath for the root file system of the zone cluster must not be placed in the root (/) file system of the global-cluster nodes, so first create a separate ZFS file system for it before creating the zone cluster itself, as shown below. Repeat this command on all of the new cluster nodes.

ppyrus1 (root) # zfs create -o mountpoint=/zones rpool/zones

ppyrus2 (root) # zfs create -o mountpoint=/zones rpool/zones
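
Optionally, confirm on each node that the new dataset is mounted at /zones before proceeding; the output shown here is illustrative:

ppyrus1 (root) # zfs list -o name,mountpoint rpool/zones
NAME         MOUNTPOINT
rpool/zones  /zones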


After creating all the zonepath file systems, create and install the zone cluster, as shown in Listing 1. The commands in Listing 1 need to be run only once from one cluster node. The information about the new zone cluster is automatically propagated to all other cluster nodes via the cluster configuration repository (CCR).

ppyrus1 (root) # export PATH=$PATH:/usr/cluster/bin
ppyrus1 (root) # clzc configure oracle-zc
oracle-zc: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:oracle-zc> create 
clzc:oracle-zc> set zonepath=/zones/oracle-zc
clzc:oracle-zc> set autoboot=true
clzc:oracle-zc> add node
clzc:oracle-zc:node> set physical-host=ppyrus1
clzc:oracle-zc:node> set hostname=vzpyrus3a
clzc:oracle-zc:node> add net
clzc:oracle-zc:node:net> set address=vzpyrus3a/24
clzc:oracle-zc:node:net> set physical=net0
clzc:oracle-zc:node:net> end
clzc:oracle-zc:node> end
clzc:oracle-zc> add node
clzc:oracle-zc:node> set physical-host=ppyrus2
clzc:oracle-zc:node> set hostname=vzpyrus3b
clzc:oracle-zc:node> add net
clzc:oracle-zc:node:net> set address=vzpyrus3b/24
clzc:oracle-zc:node:net> set physical=net0
clzc:oracle-zc:node:net> end
clzc:oracle-zc:node> end
clzc:oracle-zc> set limitpriv=default,proc_priocntl
clzc:oracle-zc> set max-shm-memory=4294967296
clzc:oracle-zc> verify
clzc:oracle-zc> commit
clzc:oracle-zc> exit
ppyrus1 (root) # 
ppyrus1 (root) # clzc status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name        Node Name   Zone HostName   Status    Zone Status
----        ---------   -------------   ------    -----------
oracle-zc   ppyrus1     vzpyrus3a       Offline   Configured
            ppyrus2     vzpyrus3b       Offline   Configured

ppyrus1 (root) # clzc install oracle-zc
Waiting for zone install commands to complete on all the nodes of the zone cluster "oracle-zc"...
ppyrus1 (root) #

Listing 1. Installing the Zone Cluster
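
Because the zone cluster configuration is held in the CCR, it can be reviewed from any global-cluster node. As an optional check, the show subcommand displays what was just committed (output omitted here):

ppyrus1 (root) # clzc show -v oracle-zc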

Because the installation command was run from a pseudo terminal rather than the console, the output (see Listing 2) is directed to the console window. If any errors occur during zone cluster creation, the console output is a good place to start the diagnostic process.

A ZFS file system has been created for this zone.
Progress being logged to /var/log/zones/zoneadm.20120117T104437Z.oracle-zc.install
       Image: Preparing at /zones/oracle-zc/root.

 Install Log: /system/volatile/install.4100/install_log
 AI Manifest: /tmp/manifest.xml.xSa4.h
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: oracle-zc
Installation: Starting ...

              Creating IPS image
              Installing packages from:
                  solaris
                      origin:  http://pkg.oracle.com/solaris/release/
                  ha-cluster
                      origin:  http://pkg.oracle.com/ha-cluster/release/
DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                              249/249 42292/42292  305.0/305.0

PHASE                                        ACTIONS
Install Phase                            57897/57897 

PHASE                                          ITEMS
Package State Update Phase                   249/249 
Image State Update Phase                         2/2 
Installation: Succeeded

        Note: Man pages can be obtained by installing pkg:/system/manual

 done.

        Done: Installation completed in 728.576 seconds.


  Next Steps: Boot the zone, then log into the zone console (zlogin -C)

              to complete the configuration process.

Log saved in non-global zone as /zones/oracle-zc/root/var/log/zones/zoneadm.20120117T104437Z.oracle-zc.install

Listing 2. Console Output
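
If you would rather follow the installation from the pseudo terminal than from the console, tailing the install log named in the progress line of Listing 2 is an optional alternative (the timestamp in the file name will differ on your system):

ppyrus1 (root) # tail -f /var/log/zones/zoneadm.20120117T104437Z.oracle-zc.install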

After the package installation phase is complete, boot the zone cluster and then finish the installation by logging in to the zone console from each global-cluster node (remember to use zlogin -C zone-cluster-name) and providing the details prompted for by the menu system. The output for these steps is omitted from Listing 3; it simply repeats what normally appears during a standard Oracle Solaris 11 installation.

ppyrus1 (root) # clzc status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name        Node Name   Zone HostName   Status    Zone Status
----        ---------   -------------   ------    -----------
oracle-zc   ppyrus1     vzpyrus3a       Offline   Installed
            ppyrus2     vzpyrus3b       Offline   Installed

ppyrus1 (root) # clzc boot oracle-zc
Waiting for zone boot commands to complete on all the nodes of the zone cluster "oracle-zc"...
ppyrus1 (root) # zlogin -C oracle-zc
...

Listing 3. Checking the Status of the Zone Cluster
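
As an optional cross-check, the standard Oracle Solaris zones tooling on each global-cluster node also reports the state of the underlying zone; the zonepath and status it shows should match the values configured earlier:

ppyrus1 (root) # zoneadm list -cv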

Now that the zone cluster is running, you need to prepare for the next stage of the process in which you install the database software and restore a copy of the production database. To do that, you must perform the steps that add the zpool and the logical host names to the zone cluster configuration.

Once again, create the zpool from the available storage and set its mount point to be /oradata:

ppyrus1 (root) # zpool create -f -m /oradata orapool \
/dev/did/dsk/d1s0 /dev/did/dsk/d2s0 \
/dev/did/dsk/d3s0 /dev/did/dsk/d4s0
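
Optionally, confirm that the pool is healthy and mounted at /oradata before handing it to the zone cluster (output omitted):

ppyrus1 (root) # zpool status orapool
ppyrus1 (root) # df -h /oradata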

Add the zpool resource, together with the two logical host name resources (one for the Oracle Database service and one for Oracle Solaris Cluster Geographic Edition), to the zone cluster configuration, as shown in Listing 4.

ppyrus1 (root) # clzc configure oracle-zc
clzc:oracle-zc> add dataset 
clzc:oracle-zc:dataset> set name=orapool
clzc:oracle-zc:dataset> end
clzc:oracle-zc> add net
clzc:oracle-zc:net> set address=vzpyrus1a
clzc:oracle-zc:net> end
clzc:oracle-zc> add net
clzc:oracle-zc:net> set address=vzpyrus1b
clzc:oracle-zc:net> end
clzc:oracle-zc> verify
clzc:oracle-zc> commit
clzc:oracle-zc> exit

Listing 4. Adding Resources to the Zone Cluster
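
To confirm that the dataset and the two network addresses were committed, you can optionally export the zone cluster configuration from the global zone; export prints the configuration in command-file form, and the dataset and net entries should appear in its output:

ppyrus1 (root) # clzc export oracle-zc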

Before creating the logical host name resource, update the local /etc/hosts file on each zone cluster node to ensure that the nodes are not reliant on external name services. Then, log in to the zone cluster and create the new resource group that will control the Oracle database, as shown in Listing 5.

vzpyrus3a (root) # cat /etc/hosts
::1 localhost
127.0.0.1 localhost loghost
10.134.108.122	vzpyrus3a # Cluster Node
10.134.108.123  vzpyrus3b # Cluster Node

# Virtual IP for Oracle DB on pyrus
10.134.108.111  vzpyrus1a oracle-pyrus-lh

# Virtual IP for Geo Edition 
10.134.108.109  vzgyruss2a gyruss

# Virtual IP for Geo Edition in zone cluster
10.134.108.112  vzpyrus1b oracle-zc 
vzpyrus3a (root) #
vzpyrus3a (root) # export PATH=$PATH:/usr/cluster/bin
vzpyrus3a (root) # clrg create -n vzpyrus3a,vzpyrus3b oracle-rg
vzpyrus3a (root) # clrslh create -g oracle-rg -h vzpyrus1a oracle-lh-rs
vzpyrus3a (root) # clrt register SUNW.HAStoragePlus
vzpyrus3a (root) # clrs create -g oracle-rg -t SUNW.HAStoragePlus \
-p zpools=orapool oracle-hasp-rs
vzpyrus3a (root) # clrg online -emM oracle-rg
(C348385) WARNING: Cannot enable monitoring on resource oracle-lh-rs because it already has monitoring enabled. To force the monitor to restart, disable monitoring using 'clresource unmonitor oracle-lh-rs' and re-enable monitoring using 'clresource monitor oracle-lh-rs'.
(C348385) WARNING: Cannot enable monitoring on resource oracle-hasp-rs because it already has monitoring enabled. To force the monitor to restart, disable monitoring using 'clresource unmonitor oracle-hasp-rs' and re-enable monitoring using 'clresource monitor oracle-hasp-rs'.
vzpyrus3a (root) #
vzpyrus3a (root) # ping vzpyrus1a
vzpyrus1a is alive
vzpyrus3a (root) #
vzpyrus3a (root) # df -h /oradata
Filesystem             Size   Used  Available  Capacity  Mounted on
orapool                 73G    27K        73G        1%  /oradata

Listing 5. Checking /etc/hosts and Creating the Resource Group
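
At this point, the resource group, its logical host name, and the HAStoragePlus resource should all be online on one node of the zone cluster. An optional status check from within the zone cluster looks like the following (output omitted):

vzpyrus3a (root) # clrg status oracle-rg
vzpyrus3a (root) # clrs status -g oracle-rg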

Revision 1.0, 05/01/2012
