
Creating the Oracle Solaris Cluster Geographic Edition Configuration

Part VII of How to Upgrade to Oracle Solaris Cluster 4.0

by Tim Read

Published May 2012

Part I - Overview of the Example Configuration
Part II - Configuring the Oracle Database for Clustering
Part III - Installing the Target Cluster
Part IV - Creating the Zone Cluster
Part V - Installing the New Application Software Stack
Part VI - Creating the Standby Database
Part VII - Creating the Oracle Solaris Cluster Geographic Edition Configuration
Part VIII - How Oracle Solaris Cluster Geographic Edition Simplifies the Upgrade Process

To create the Oracle Solaris Cluster Geographic Edition configuration, first ensure that the correct packages are installed on both clusters. The packages are already present on your Oracle Solaris 11 cluster because you installed the ha-cluster-full group package, which, in turn, contains the ha-cluster-geo-full group package, as shown in Listing 1.

Caution: If you choose to use this article as a guide for performing a similar process, you need to pay close attention to the nodes on which the individual commands are run. For that reason, the system prompts shown in the example steps include both the node name and the user name to indicate both where, and as whom, a command must be run.

vzpyrus3a (root) # pkg info ha-cluster-geo-full
          Name: ha-cluster/group-package/ha-cluster-geo-full
       Summary: Oracle Solaris Cluster Geographic Edition full group package
   Description: Oracle Solaris Cluster Geographic Edition full group package
      Category: Meta Packages/Group Packages
         State: Installed
     Publisher: ha-cluster
       Version: 4.0.0
 Build Release: 5.11
        Branch: 0.22
Packaging Date: Sat Oct 22 07:28:36 2011
          Size: 5.53 kB
          FMRI: pkg://ha-cluster/ha-cluster/group-package/ha-cluster-geo-full@4.0.0,
5.11-0.22:20111022T072836Z

Listing 1. Checking Packages on the Oracle Solaris 11 Cluster

Check your Oracle Solaris 10 cluster for the Oracle Solaris Cluster Geographic Edition packages, as shown in Listing 2.

pgyruss1 (root) # pkginfo | grep  Geographic
application SUNWscgctl        Oracle Solaris Cluster Geographic Edition Control Module (Usr)
application SUNWscgctlr       Oracle Solaris Cluster Geographic Edition Control Module (Root)
application SUNWscghb         Oracle Solaris Cluster Geographic Edition Heartbeats (Usr)
application SUNWscghbr        Oracle Solaris Cluster Geographic Edition Heartbeats (Root)
application SUNWscgman        Oracle Solaris Cluster Geographic Edition Manual Pages
application SUNWscgrepavs     Oracle Solaris Cluster Geographic Edition Availability Suite Data 
                              Replication (Opt)
application SUNWscgrepavsu    Oracle Solaris Cluster Geographic Edition Availability Suite Data 
                              Replication (Usr)
application SUNWscgrepodg     Oracle Solaris Cluster Geographic Edition Oracle Data Guard Data 
                              Replication (Opt)
application SUNWscgrepodgu    Oracle Solaris Cluster Geographic Edition Oracle Data Guard Data  
                              Replication (Usr)
application SUNWscgrepsbpu    Oracle Solaris Cluster Geographic Edition Script Based Plug-in 
                              Replication (Usr)
application SUNWscgrepsrdf    Oracle Solaris Cluster Geographic Edition SRDF Data Replication (Opt)
application SUNWscgrepsrdfu   Oracle Solaris Cluster Geographic Edition SRDF Data Replication (Usr)
application SUNWscgreptc      Oracle Solaris Cluster Geographic Edition for Hitachi TrueCopy Data  
                              Replication (Opt)
application SUNWscgreptcu     Oracle Solaris Cluster Geographic Edition for Hitachi TrueCopy Data 
                              Replication (Usr)
application SUNWscgspm        Oracle Solaris Cluster Geographic Edition - Oracle Solaris Cluster 
                              Manager Extensions

Listing 2. Checking Packages on the Oracle Solaris 10 Cluster

On both Oracle Solaris 10 cluster nodes, also check that the Common Agent Container is running and that it is listening on all network addresses:

pgyruss1 (root) # svcs -a | grep -i common
online         Jan_12   svc:/application/management/common-agent-container-2:default
uninitialized  Jan_12   svc:/application/management/common-agent-container-1:default
pgyruss1 (root) # cacaoadm list-params | grep network-bind-address
network-bind-address=0.0.0.0

Because we chose to use a zone cluster as the target system, you must also manually synchronize the Common Agent Container keys across the zone cluster nodes and ensure that you have ssh enabled on the remote site. Perform these tasks on both of the Oracle Solaris 11 zone cluster nodes, as shown in Listing 3.

vzpyrus3a (root) # svcs -a | grep common
disabled        2:49:12 svc:/application/management/common-agent-container-1:default
vzpyrus3a (root) # svcadm enable common-agent-container-1
vzpyrus3a (root) # svcs -a | grep common
online          6:29:01 svc:/application/management/common-agent-container-1:default
vzpyrus3a (root) # cacaoadm list-params | grep network-bind-address
network-bind-address=127.0.0.1
vzpyrus3a (root) # cacaoadm stop
vzpyrus3a (root) # cacaoadm set-param network-bind-address=0.0.0.0
vzpyrus3a (root) # cacaoadm create-keys --force
vzpyrus3a (root) # tar cf /tmp/SECURITY.tar /etc/cacao/instances/default/security


vzpyrus3a (root) # scp /tmp/SECURITY.tar root@vzpyrus3b:/tmp
Password: 
SECURITY.tar         100% |********************************************|   983 KB    00:00  
vzpyrus3a (root) # cacaoadm start
vzpyrus3a (root) # cacaoadm status
default instance is ENABLED at system startup. 
Smf monitoring process: 
16853
16854
Uptime: 0 day(s), 0:0

vzpyrus3b (root) # cacaoadm stop 
vzpyrus3b (root) # cacaoadm set-param network-bind-address=0.0.0.0
vzpyrus3b (root) # tar xf /tmp/SECURITY.tar > /dev/null 2>&1
vzpyrus3b (root) # rm /tmp/SECURITY.tar 
vzpyrus3b (root) # cacaoadm start

Listing 3. Synchronizing the Common Agent Container Keys and Checking ssh
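
The scp transfer in Listing 3 implicitly exercises ssh, but you can also verify the service explicitly. As a minimal sketch (assuming the standard Solaris SMF service name for ssh), run the following on each zone cluster node and confirm that the service is reported as online:

vzpyrus3a (root) # svcs ssh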

Confirm that you have entries in your /etc/hosts file that match each of your cluster names, and then start Oracle Solaris Cluster Geographic Edition on both clusters, as shown in Listing 4.

Note: The entries can also be in the name service, but you cannot rely on the external name service always being accessible.

pgyruss1 (root) # egrep " gyruss| oracle-zc" /etc/hosts
10.134.108.109  vzgyruss2a gyruss
10.134.108.112  vzpyrus1b oracle-zc 
pgyruss1 (root) # geoadm start
... checking for management agent ...
... management agent check done ....
... starting product infrastructure ... please wait ...
Registering resource type <SUNW.HBmonitor>...done.
Registering resource type <SUNW.SCGeoInitSvc>...done.
Resource type <SUNW.scmasa> has been registered already
Resource type <SUNW.SCGeoZC> has been registered already
Creating scalable resource group <geo-clusterstate>...done.
Creating service tag management resource <geo-servicetag>...
Service tag management resource created successfully ....
Creating failover resource group <geo-infrastructure>...done.
Creating logical host resource <geo-clustername>...
Logical host resource created successfully ....
Creating resource <geo-hbmonitor> ...done.
Creating resource <geo-failovercontrol> ...done.
Bringing RG <geo-clusterstate> to managed state ...done.
Bringing resource group <geo-infrastructure> to managed state ...done.
Enabling resource <geo-clustername> ...done.
Enabling resource <geo-hbmonitor> ...done.
Enabling resource <geo-failovercontrol> ...done.
Node pgyruss1: Bringing resource group <geo-infrastructure> online ...done.

Oracle Solaris Cluster Geographic Edition infrastructure started successfully. 

pgyruss1 (root) # clrs status

=== Cluster Resources ===

Resource Name         Node Name   State                  Status Message
-------------         ---------   -----                  --------------
oracle-svr-rs         pgyruss1    Offline                Offline
                      pgyruss2    Online                 Online

oracle-lsnr-rs        pgyruss1    Offline                Offline
                      pgyruss2    Online                 Online

oracle-hasp-rs        pgyruss1    Offline                Offline
                      pgyruss2    Online                 Online

oracle-lh-rs          pgyruss1    Offline                Offline - LogicalHostname offline.
                      pgyruss2    Online                 Online - LogicalHostname online.

geo-servicetag        pgyruss1    Online_not_monitored   Online_not_monitored
                      pgyruss2    Online_not_monitored   Online_not_monitored

geo-failovercontrol   pgyruss1    Online                 Online
                      pgyruss2    Offline                Offline

geo-hbmonitor         pgyruss1    Online                 Online - Daemon OK
                      pgyruss2    Offline                Offline

geo-clustername       pgyruss1    Online                 Online - LogicalHostname online.
                      pgyruss2    Offline                Offline

vzpyrus3a (root) # egrep " gyruss| oracle-zc" /etc/hosts
10.134.108.109  vzgyruss2a gyruss
10.134.108.112  vzpyrus1b oracle-zc 
vzpyrus3a (root) # geoadm start
... checking for management agent ...
... management agent check done ....
... starting product infrastructure ... please wait ...
Registering resource type <SUNW.HBmonitor>...done.
Registering resource type <SUNW.SCGeoInitSvc>...done.
Registering resource type <SUNW.scmasa>...done.
Registering resource type <SUNW.SCGeoZC>...done.
Creating scalable resource group <geo-clusterstate>...done.
Creating service tag management resource <geo-servicetag>...
Service tag management resource created successfully ....
Creating failover resource group <geo-infrastructure>...done.
Creating logical host resource <geo-clustername>...
Logical host resource created successfully ....
Creating resource <geo-hbmonitor> ...done.
Creating resource <geo-failovercontrol> ...done.
Bringing RG <geo-clusterstate> to managed state ...done.
Bringing resource group <geo-infrastructure> to managed state ...done.
Enabling resource <geo-clustername> ...done.
Enabling resource <geo-hbmonitor> ...done.
Enabling resource <geo-failovercontrol> ...done.
Node vzpyrus3a: Bringing resource group <geo-infrastructure> online ...done.

Oracle Solaris Cluster Geographic Edition infrastructure started successfully. 

vzpyrus3a (root) # clrs status

=== Cluster Resources ===

Resource Name         Node Name   State                  Status Message
-------------         ---------   -----                  --------------
oracle-svr-rs         vzpyrus3a   Online                 Online
                      vzpyrus3b   Offline                Offline

oracle-lsnr-rs        vzpyrus3a   Online                 Online
                      vzpyrus3b   Offline                Offline

oracle-hasp-rs        vzpyrus3a   Online                 Online
                      vzpyrus3b   Offline                Offline

oracle-lh-rs          vzpyrus3a   Online                 Online - LogicalHostname online.
                      vzpyrus3b   Offline                Offline - LogicalHostname offline.

geo-zc-sysevent       vzpyrus3a   Online_not_monitored   Online_not_monitored
                      vzpyrus3b   Online_not_monitored   Online_not_monitored

geo-servicetag        vzpyrus3a   Online_not_monitored   Online_not_monitored
                      vzpyrus3b   Online_not_monitored   Online_not_monitored

geo-failovercontrol   vzpyrus3a   Online                 Online
                      vzpyrus3b   Offline                Offline

geo-hbmonitor         vzpyrus3a   Online                 Online - Daemon OK
                      vzpyrus3b   Offline                Offline

geo-clustername       vzpyrus3a   Online                 Online - LogicalHostname online.
                      vzpyrus3b   Offline                Offline

Listing 4. Checking /etc/hosts File and Starting Oracle Solaris Cluster Geographic Edition

The next step in building the Oracle Solaris Cluster Geographic Edition configuration is creating what is termed a partnership between the clusters. To be able to do this, both clusters must "trust" each other, because the communication between the clusters takes place over a secure link.

Perform the commands shown in Listing 5 on only one node of each of your clusters.

Note: The clusters in a partnership can be, and often are, in different DNS domains. If this is the case in your setup, check the Oracle Solaris Cluster Geographic Edition documentation for the correct command syntax to use.

pgyruss1 (root) # geoadm status

   Cluster:  gyruss
*** No partnership defined on local cluster "gyruss" *** 

pgyruss1 (root) # geops add-trust --cluster oracle-zc

Local cluster : gyruss
Local node : pgyruss1

Cleaning up certificate files in /etc/cacao/instances/default/security/jsse on pgyruss1

Retrieving certificates from oracle-zc ... Done

New Certificate:

Owner: CN=vzpyrus3a_agent
Issuer: CN=vzpyrus3a_ca
Serial number: 25d4a814
Valid from: Thu Jun 19 08:43:00 PDT 1969 until: Mon Jan 19 07:43:00 PST 2032
Certificate fingerprints:
         MD5:  C9:71:18:6C:FE:AA:40:F7:FE:08:F7:99:DB:01:1F:F2
         SHA1: 09:96:91:1E:8C:7C:C7:91:57:3D:9D:BB:57:89:5D:E7:A5:5C:B8:C3
         Signature algorithm name: SHA1withRSA
         Version: 3

Extensions: 

#1: ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
  CA:false
  PathLen: undefined
]

Do you trust this certificate? [y/n] y

Adding certificate to truststore on pgyruss1 ... Done

Adding certificate to truststore on pgyruss2 ... Done

New Certificate:

Owner: CN=vzpyrus3a_ca
Issuer: CN=vzpyrus3a_ca
Serial number: 6f783dfd
Valid from: Thu Jun 19 08:42:58 PDT 1969 until: Mon Jan 19 07:42:58 PST 2032
Certificate fingerprints:
         MD5:  D4:57:E8:5B:67:C9:25:9D:07:18:2E:4E:C3:30:D5:11
         SHA1: 6B:BC:84:11:DD:F2:3B:59:4D:B6:9C:20:2E:D9:15:1F:06:B5:D6:D1
         Signature algorithm name: SHA1withRSA
         Version: 3

Extensions: 

#1: ObjectId: 2.5.29.19 Criticality=true
BasicConstraints:[
  CA:true
  PathLen:1
]

Do you trust this certificate? [y/n] y

Adding certificate to truststore on pgyruss1 ... Done

Adding certificate to truststore on pgyruss2 ... Done

Operation completed successfully. All certificates are added to truststore on nodes of cluster gyruss

vzpyrus3a (root) # geops add-trust --cluster gyruss

Local cluster : oracle-zc
Local node : vzpyrus3a

Cleaning up certificate files in /etc/cacao/instances/default/security/jsse on vzpyrus3a

Retrieving certificates from gyruss ... Done

New Certificate:

Owner: CN=pgyruss1_agent
Issuer: CN=pgyruss1_ca
Serial number: 71fd3e5d
Valid from: Tue May 27 08:44:19 GMT-08:00 1969 until: Sun Jul 27 08:44:19 GMT-08:00 2031
Certificate fingerprints:
	 MD5:  31:46:B2:9A:04:53:E8:58:22:A5:09:46:FB:05:7F:9A
	 SHA1: 78:93:9F:D6:A1:53:74:47:BD:F9:84:7D:4A:60:1B:65:13:07:5B:B0
	 Signature algorithm name: SHA1withRSA
	 Version: 3

Do you trust this certificate? [y/n] y

Adding certificate to truststore on vzpyrus3a ... Done

Adding certificate to truststore on vzpyrus3b ... Done

New Certificate:

Owner: CN=pgyruss1_ca
Issuer: CN=pgyruss1_ca
Serial number: 56602b49
Valid from: Tue May 27 08:44:17 GMT-08:00 1969 until: Sun Jul 27 08:44:17 GMT-08:00 2031
Certificate fingerprints:
	 MD5:  A1:FE:EB:7B:BC:F0:28:B2:20:AD:DB:41:96:0B:04:06
	 SHA1: 7D:F1:5A:D2:19:DB:89:92:D8:E9:5A:D0:90:38:FC:C5:E2:8F:EF:F3
	 Signature algorithm name: SHA1withRSA
	 Version: 3

Do you trust this certificate? [y/n] y

Adding certificate to truststore on vzpyrus3a ... Done

Adding certificate to truststore on vzpyrus3b ... Done

Operation completed successfully. All certificates are added to truststore on
nodes of cluster oracle-zc

Listing 5. Creating Trust Between the Clusters

Create the partnership on one cluster and then join the partnership from the other cluster, as follows. It does not matter which cluster defines the initial partnership.

pgyruss1 (root) # geops create -c oracle-zc migration-ps
Partnership between local cluster "gyruss" and remote cluster "oracle-zc" successfully created.

vzpyrus3a (root) # geops join-partnership gyruss migration-ps
Local cluster "oracle-zc" is now partner of cluster "gyruss".

Wait for the clusters to synchronize and then check their status, as shown in Listing 6.

vzpyrus3a (root) # geoadm status

   Cluster:  oracle-zc

   Partnership "migration-ps"	: OK
	Partner clusters	: gyruss
	Synchronization		: OK
	ICRM Connection		: OK

	Heartbeat "hb_oracle-zc~gyruss" monitoring "gyruss": OK
	     Plug-in "ping_plugin"  	: Inactive
	     Plug-in "tcp_udp_plugin"	: OK

Listing 6. Checking the Cluster Status

If you want to use the Oracle Wallet Manager to allow the database administrator account (sys) to control switchovers, add this account to the wallet at both sites, as shown in Listing 7. This process uses the _geoadm suffixed services in tnsnames.ora that you created in an earlier step. Then repeat the process on the zone cluster node where the oracle-rg is online.
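
For reference, the sales_geoadm and salesdr_geoadm services mentioned here were added to tnsnames.ora in an earlier part of this series. A purely hypothetical entry is sketched below; the host, port, and service name are placeholders rather than values taken from the example configuration, so substitute the ones used at your sites:

# Hypothetical sketch of a tnsnames.ora entry used with the wallet credentials.
# Replace the host, port, and service name with your own values.
sales_geoadm =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <primary-logical-host>)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = <primary-db-service>))
  )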

pgyruss2 (root) # clrs status

=== Cluster Resources ===

Resource Name         Node Name   State                  Status Message
-------------         ---------   -----                  --------------
oracle-svr-rs         pgyruss1    Offline                Offline
                      pgyruss2    Online                 Online

oracle-lsnr-rs        pgyruss1    Offline                Offline
                      pgyruss2    Online                 Online

oracle-hasp-rs        pgyruss1    Offline                Offline
                      pgyruss2    Online                 Online

oracle-lh-rs          pgyruss1    Offline                Offline - LogicalHostname offline.
                      pgyruss2    Online                 Online - LogicalHostname online.

geo-servicetag        pgyruss1    Online_not_monitored   Online_not_monitored
                      pgyruss2    Online_not_monitored   Online_not_monitored

geo-failovercontrol   pgyruss1    Online                 Online - Service is online.
                      pgyruss2    Offline                Offline

geo-hbmonitor         pgyruss1    Online                 Online - Daemon OK
                      pgyruss2    Offline                Offline

geo-clustername       pgyruss1    Online                 Online - LogicalHostname online.
                      pgyruss2    Offline                Offline

pgyruss2 (root) # su - oracle
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005

pgyruss2 (oracle) $ mkstore -wrl /oradata/wallet -createCredential sales_geoadm sys 
Oracle Secret Store Tool : Version 11.2.0.3.0 - Production
Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line 
Enter your secret/Password:       
Re-enter your secret/Password:      
   
Enter wallet password:         
   
Create credential oracle.security.client.connect_string2
pgyruss2 (oracle) $ mkstore -wrl /oradata/wallet -createCredential salesdr_geoadm sys 
Oracle Secret Store Tool : Version 11.2.0.3.0 - Production
Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line 
Enter your secret/Password:       
   
Re-enter your secret/Password:        
Enter wallet password:         
   
Create credential oracle.security.client.connect_string3
pgyruss2 (oracle) $ mkstore -wrl /oradata/wallet -createEntry \ 
oracle.security.client.default_username sys
Oracle Secret Store Tool : Version 11.2.0.3.0 - Production
Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:

pgyruss2 (oracle) $ mkstore -wrl /oradata/wallet -createEntry oracle.security.client.default_password
Oracle Secret Store Tool : Version 11.2.0.3.0 - Production
Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:

Enter wallet password:

Listing 7. Enabling the Wallet to Control Switchovers

Be sure to test that the Data Guard switchover process works when you connect using the /@sales_geoadm and /@salesdr_geoadm identifiers. Those checks are omitted here for brevity.
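
The omitted checks amount to confirming that a wallet-based connection as sys succeeds using each identifier. A minimal sketch, assuming SQL*Plus and the Data Guard broker command-line interface are in the oracle user's PATH and that the wallet location is configured in sqlnet.ora, might look like this:

pgyruss2 (oracle) $ sqlplus /@sales_geoadm as sysdba
pgyruss2 (oracle) $ dgmgrl /@sales_geoadm "show configuration"

Repeat the same connections with the /@salesdr_geoadm identifier before relying on the wallet for switchover control.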

Oracle Solaris Cluster Geographic Edition uses a construct called a protection group to control the combination of one or more application resource groups and their associated replication technology. Therefore, you must create a protection group to control your Oracle database and its associated Data Guard replication, as shown in Listing 8. Call your protection group oracle-pg.

The geoadm and geopg commands can be run from any node within each cluster. For more information about installing and configuring the Oracle Solaris Cluster Geographic Edition software, see the Oracle Solaris Cluster Geographic Edition Installation Guide.

pgyruss1 (root) # geoadm status

   Cluster:  gyruss

   Partnership "migration-ps"   : OK
        Partner clusters        : oracle-zc
        Synchronization         : OK
        ICRM Connection         : OK

        Heartbeat "hb_gyruss~oracle-zc" monitoring "oracle-zc": OK
             Plug-in "ping_plugin"      : Inactive
             Plug-in "tcp_udp_plugin"   : OK
pgyruss1 (root) # geopg create --partnership migration-ps --role  primary --datarep-type odg oracle-pg
Protection group "oracle-pg" successfully created.
pgyruss1 (root) # geopg add-replication-component  \
-p local_database_name=sales \
-p remote_database_name=salesdr \
-p local_db_service_name=sales_geoadm \
-p remote_db_service_name=salesdr_geoadm \
-p standby_type=physical \
-p replication_mode=MaxPerformance \
-p sysdba_username= \
-p sysdba_password= \
-p local_oracle_svr_rg_name=oracle-rg \
-p remote_oracle_svr_rg_name=oracle-rg sales oracle-pg
Password for property sysdba_password :
Oracle Data Guard configuration "sales" successfully added to the protection group "oracle-pg".
pgyruss1 (root) # clrs status

=== Cluster Resources ===

Resource Name                Node Name   State                  Status Message
-------------                ---------   -----                  --------------
oracle-svr-rs                pgyruss1    Offline                Offline
                             pgyruss2    Online                 Online

oracle-lsnr-rs               pgyruss1    Offline                Offline
                             pgyruss2    Online                 Online

oracle-hasp-rs               pgyruss1    Offline                Offline
                             pgyruss2    Online                 Online

oracle-lh-rs                 pgyruss1    Offline                Offline - LogicalHostname offline.
                             pgyruss2    Online                 Online - LogicalHostname online.

geo-servicetag               pgyruss1    Online_not_monitored   Online_not_monitored
                             pgyruss2    Online_not_monitored   Online_not_monitored

geo-failovercontrol          pgyruss1    Online                 Online - Service is online.
                             pgyruss2    Offline                Offline

geo-hbmonitor                pgyruss1    Online                 Online - Daemon OK
                             pgyruss2    Offline                Offline

geo-clustername              pgyruss1    Online                 Online - LogicalHostname online.
                             pgyruss2    Offline                Offline

sales-odg-rep-rs             pgyruss1    Offline                Offline
                             pgyruss2    Offline                Offline

sales-oracle-svr-shadow-rs   pgyruss1    Offline                Offline
                             pgyruss2    Offline                Offline

pgyruss1 (root) # clrg status

=== Cluster Resource Groups ===

Group Name                      Node Name   Suspended   Status
----------                      ---------   ---------   ------
oracle-rg                       pgyruss1    No          Offline
                                pgyruss2    No          Online

geo-clusterstate                pgyruss1    No          Online
                                pgyruss2    No          Online

geo-infrastructure              pgyruss1    No          Online
                                pgyruss2    No          Offline

oracle-pg-odg-rep-rg            pgyruss1    No          Offline
                                pgyruss2    No          Online

sales-rac-proxy-svr-shadow-rg   pgyruss1    No          Unmanaged
                                pgyruss2    No          Unmanaged

Listing 8. Creating a Protection Group

Once you have created the protection group on one cluster, you must propagate the configuration to your other cluster, as shown in Listing 9. When this process is complete, the Data Guard replication is in a disabled state to match the state of your newly created Oracle Solaris Cluster Geographic Edition protection group. We will re-enable the replication once we have added our example application resource group.

vzpyrus3b (root) # geoadm status

   Cluster:  oracle-zc

   Partnership "migration-ps"   : OK
        Partner clusters        : gyruss
        Synchronization         : OK
        ICRM Connection         : OK

        Heartbeat "hb_oracle-zc~gyruss" monitoring "gyruss": OK
             Plug-in "ping_plugin"      : Inactive
             Plug-in "tcp_udp_plugin"   : OK
vzpyrus3b (root) # geopg get --partnership migration-ps
Protection group "oracle-pg" successfully created.
vzpyrus3b (root) # geoadm status

   Cluster:  oracle-zc

   Partnership "migration-ps"   : OK
        Partner clusters        : gyruss
        Synchronization         : OK
        ICRM Connection         : OK

        Heartbeat "hb_oracle-zc~gyruss" monitoring "gyruss": OK
             Plug-in "ping_plugin"      : Inactive
             Plug-in "tcp_udp_plugin"   : OK

   Protection group "oracle-pg" : Unknown
        Partnership             : migration-ps
        Synchronization         : OK

        Cluster oracle-zc       : Unknown
             Role               : Secondary
             Activation state   : Deactivated
             Configuration      : OK
             Data replication   : Unknown
             Resource groups    : None

        Cluster gyruss          : Unknown
             Role               : Primary
             Activation state   : Deactivated
             Configuration      : OK
             Data replication   : Unknown
             Resource groups    : None

Listing 9. Propagating the Configuration to the Other Cluster

If applications that depend on the database are contained within resource groups, you can place them under Oracle Solaris Cluster Geographic Edition control, too, as shown in Listing 10. In Listing 10, we create on each cluster an empty resource group called example-rg that must be colocated with the Oracle resource group oracle-rg. We achieve this colocation using the RG_affinities property, which we set after we add the resource group to the protection group. (The +++ prefix specifies a strong positive affinity with failover delegation.)

Rather than making this colocation depend on the real Oracle resource group (oracle-rg), which must always remain online for replication to occur, we make it dependent on the shadow resource group that Oracle Solaris Cluster Geographic Edition creates for just such purposes. The shadow resource group follows the standard Oracle Solaris Cluster Geographic Edition convention in which the resource group is online on the primary cluster and offline on the secondary cluster.

vzpyrus3b (root) # geopg list

Protection Group: oracle-pg

  Partnership name              : migration-ps

  Local Role                    : Primary
  Deployed on clusters          : oracle-zc, gyruss
  Data Replication type         : ODG
  Configuration status          : OK
  Synchronization status        : OK
  Creation signature            : gyruss Jan 20, 2012 6:20:36 AM PST
  Last update                   : Jan 24, 2012 2:39:48 AM GMT-08:00

  Local properties              : None

  Global properties             : 

        Description               : 
        Timeout                   : 3600 seconds
        RoleChange_ActionCmd      : 
        External_Dependency_Allowed : false
        RoleChange_ActionArgs     : 

  *** No protected resource groups in protection group "oracle-pg" ***

  ODG Oracle Data Guard configurations:

        sales

        sales_remote_database_name : sales
        sales_local_database_name : salesdr
        sales_local_rac_proxy_svr_rg_name : oracle-rg
        sales_remote_rac_proxy_svr_rg_name : oracle-rg
        sales_sysdba_password     : ********
        sales_replication_mode    : maxperformance
        sales_remote_db_service_name : sales_geoadm
        sales_local_db_service_name : salesdr_geoadm
        sales_standby_type        : physical
        sales_sysdba_username     : 


vzpyrus3b (root) # clrg create -p auto_start_on_new_cluster=false example-rg
pgyruss1 (root) # clrg create -p auto_start_on_new_cluster=false example-rg

vzpyrus3b (root) # geopg add-resource-group example-rg,sales-rac-proxy-svr-shadow-rg \
oracle-pg
Following resource groups successfully added: "example-rg,sales-rac-proxy-svr-shadow-rg".

pgyruss1 (root) # clrg set -p RG_affinities=+++sales-rac-proxy-svr-shadow-rg \
example-rg
vzpyrus3b (root) # clrg set -p RG_affinities=+++sales-rac-proxy-svr-shadow-rg \
example-rg

vzpyrus3b (root) # geopg list

Protection Group: oracle-pg

  Partnership name              : migration-ps

  Local Role                    : Primary
  Deployed on clusters          : oracle-zc, gyruss
  Data Replication type         : ODG
  Configuration status          : OK
  Synchronization status        : OK
  Creation signature            : gyruss Jan 20, 2012 6:20:36 AM PST
  Last update                   : Jan 24, 2012 2:44:29 AM GMT-08:00

  Local properties              : None

  Global properties             : 

        Description               : 
        Timeout                   : 3600 seconds
        RoleChange_ActionCmd      : 
        External_Dependency_Allowed : false
        RoleChange_ActionArgs     : 

  Protected resource groups:

        example-rg
        sales-rac-proxy-svr-shadow-rg

  ODG Oracle Data Guard configurations:

        sales

        sales_remote_database_name : sales
        sales_local_database_name : salesdr
        sales_local_rac_proxy_svr_rg_name : oracle-rg
        sales_remote_rac_proxy_svr_rg_name : oracle-rg
        sales_sysdba_password     : ********
        sales_replication_mode    : maxperformance
        sales_remote_db_service_name : sales_geoadm
        sales_local_db_service_name : salesdr_geoadm
        sales_standby_type        : physical
        sales_sysdba_username     : 

Listing 10. Using a Shadow Resource Group

Now, restart the replication. Upon completion, the replication status is reflected both in the geoadm output and in the status of the sales-odg-rep-rs resource, as shown in Listing 11.

vzpyrus3b (root) # geopg start --scope global oracle-pg
Processing operation... The timeout period for this operation on each cluster is 3600 seconds (3600000 milliseconds)...
Protection group "oracle-pg" successfully started.
vzpyrus3b (root) # geoadm status

   Cluster:  oracle-zc

   Partnership "migration-ps"	: OK
	Partner clusters	: gyruss
	Synchronization		: OK
	ICRM Connection		: OK

	Heartbeat "hb_oracle-zc~gyruss" monitoring "gyruss": OK
	     Plug-in "ping_plugin"  	: Inactive
	     Plug-in "tcp_udp_plugin"	: OK

   Protection group "oracle-pg"	: OK
	Partnership		: migration-ps
	Synchronization		: OK

	Cluster oracle-zc	: OK
	     Role		: Secondary
	     Activation state	: Activated
	     Configuration	: OK
	     Data replication	: OK
	     Resource groups	: OK

	Cluster gyruss		: OK
	     Role		: Primary
	     Activation state	: Activated
	     Configuration	: OK
	     Data replication	: OK
	     Resource groups	: OK

vzpyrus3b (root) # clrs status sales-odg-rep-rs 

=== Cluster Resources ===

Resource Name         Node Name     State       Status Message
-------------         ---------     -----       --------------
sales-odg-rep-rs      vzpyrus3a     Online      Online - Replicating in MaxPerformance mode
                      vzpyrus3b     Offline     Offline

Listing 11. Restarting the Replication

Note that the status of the application resource groups differs between the two clusters, depending on which cluster is currently the primary, as shown in Listing 12.

vzpyrus3b (root) # clrg status sales-rac-proxy-svr-shadow-rg example-rg

=== Cluster Resource Groups ===

Group Name                      Node Name   Suspended   Status
----------                      ---------   ---------   ------
sales-rac-proxy-svr-shadow-rg   vzpyrus3a   No          Unmanaged
                                vzpyrus3b   No          Unmanaged

example-rg                      vzpyrus3a   No          Unmanaged
                                vzpyrus3b   No          Unmanaged

pgyruss1 (root) # clrg status sales-rac-proxy-svr-shadow-rg example-rg

=== Cluster Resource Groups ===

Group Name                      Node Name   Suspended   Status
----------                      ---------   ---------   ------
sales-rac-proxy-svr-shadow-rg   pgyruss1    No          Offline
                                pgyruss2    No          Online

example-rg                      pgyruss1    No          Offline
                                pgyruss2    No          Online

Listing 12. Checking the Status of the Application Resource Groups

The final step in the upgrade process is to switch over to the Oracle Solaris 11 cluster, as shown in Listing 13. Choose an appropriate time to perform this operation at your site to minimize any potential outage.

vzpyrus3b (root) # geoadm status

   Cluster:  oracle-zc

   Partnership "migration-ps"	: OK
	Partner clusters	: gyruss
	Synchronization		: OK
	ICRM Connection		: OK

	Heartbeat "hb_oracle-zc~gyruss" monitoring "gyruss": OK
	     Plug-in "ping_plugin"  	: Inactive
	     Plug-in "tcp_udp_plugin"	: OK

   Protection group "oracle-pg"	: OK
	Partnership		: migration-ps
	Synchronization		: OK

	Cluster oracle-zc	: OK
	     Role		: Secondary
	     Activation state	: Activated
	     Configuration	: OK
	     Data replication	: OK
	     Resource groups	: OK

	Cluster gyruss		: OK
	     Role		: Primary
	     Activation state	: Activated
	     Configuration	: OK
	     Data replication	: OK
	     Resource groups	: OK
vzpyrus3b (root) # geopg switchover -m oracle-zc oracle-pg
Are you sure you want to switchover protection group 'oracle-pg' to primary 
cluster 'oracle-zc'? (yes|no) > yes
Processing operation... The timeout period for this operation on each cluster
is 3600 seconds (3600000 milliseconds)...

"Switchover" operation succeeded for the protection group "oracle-pg".
vzpyrus3b (root) # geoadm status

   Cluster:  oracle-zc

   Partnership "migration-ps"	: OK
	Partner clusters	: gyruss
	Synchronization		: OK
	ICRM Connection		: OK

	Heartbeat "hb_oracle-zc~gyruss" monitoring "gyruss": OK
	     Plug-in "ping_plugin"  	: Inactive
	     Plug-in "tcp_udp_plugin"	: OK

   Protection group "oracle-pg"	: OK
	Partnership		: migration-ps
	Synchronization		: OK

	Cluster oracle-zc	: OK
	     Role		: Primary
	     Activation state	: Activated
	     Configuration	: OK
	     Data replication	: OK
	     Resource groups	: OK

	Cluster gyruss		: OK
	     Role		: Secondary
	     Activation state	: Activated
	     Configuration	: OK
	     Data replication	: OK
	     Resource groups	: OK

vzpyrus3b (root) # clrs status sales-odg-rep-rs sales-oracle-svr-shadow-rs

=== Cluster Resources ===

Resource Name                Node Name   State     Status Message
-------------                ---------   -----     --------------
sales-odg-rep-rs             vzpyrus3a   Online    Online - Replicating in MaxPerformance mode
                             vzpyrus3b   Offline   Offline

sales-oracle-svr-shadow-rs   vzpyrus3a   Online    Online -  (Data Guard Primary)
                             vzpyrus3b   Offline   Offline

Listing 13. Switching Over to the Oracle Solaris 11 Cluster

As part of the switchover process, Oracle Solaris Cluster Geographic Edition changes the value of the Dataguard_role property of the oracle-svr-rs resource on each cluster. Check that this change has indeed happened, as shown in Listing 14.

vzpyrus3b (root) # clrs show -p dataguard_role oracle-svr-rs

=== Resources ===                              

Resource:                                       oracle-svr-rs

  --- Standard and extension properties ---    

  Dataguard_role:                               PRIMARY
    Class:                                         extension
    Description:                                   This indicates whether the
instance is dataguard or not. If the instance is a dataguard instance, then 
this property indicates whether the instance is a primary, standby, or 
whether a dataguard role switchover is in progress.
    Per-node:                                      False
    Type:                                          enum

pgyruss1 (root) # clrs show -p dataguard_role oracle-svr-rs

=== Resources ===                              

Resource:                                       oracle-svr-rs

  --- Standard and extension properties ---    

  Dataguard_role:                               STANDBY
    Class:                                         extension
    Description:                                   This indicates whether the
instance is dataguard or not. If the instance is a dataguard instance, then 
this property indicates whether the instance is a primary, standby, or 
whether a dataguard role switchover is in progress.
    Per-node:                                      False
    Type:                                          enum

Listing 14. Checking the Property Values

If, for any reason, you need to revert to the Oracle Solaris 10 cluster, you can simply issue another switchover command.
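
For example, mirroring the syntax shown in Listing 13, a switchback that makes gyruss the primary cluster again could be initiated as follows (a sketch using the protection group and cluster names from this example):

vzpyrus3b (root) # geopg switchover -m gyruss oracle-pg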

Revision 1.0, 05/11/2012
