
Configuring the Oracle Database for Clustering

Part II of How to Upgrade to Oracle Solaris Cluster 4.0

by Tim Read

Published May 2012

Part I - Overview of the Example Configuration
Part II - Configuring the Oracle Database for Clustering
Part III - Installing the Target Cluster
Part IV - Creating the Zone Cluster
Part V - Installing the New Application Software Stack
Part VI - Creating the Standby Database
Part VII - Creating the Oracle Solaris Cluster Geographic Edition Configuration
Part VIII - How Oracle Solaris Cluster Geographic Edition Simplifies the Upgrade Process

Note: This article shows how to configure Oracle Database in the source cluster prior to upgrading to Oracle Solaris 11 and the Oracle Solaris Cluster 4.0 software. If you already have Oracle Database configured in your source cluster, you can skip ahead to Part III of this series.

On our Oracle Solaris 10 cluster, we have installed the Oracle Database 11g Release 2 software on the local disk of each of our existing servers, with ORACLE_HOME set to /u02/app/oracle/product/11.2.0/dbhome_1. We will use the same ORACLE_HOME setting for the new Oracle Solaris 11 clusters.

Caution: If you choose to use this article as a guide for performing a similar process, you need to pay close attention to the nodes on which the individual commands are run. For that reason, the system prompts shown in the example steps include both the node name and the user name to indicate both where, and as whom, a command must be run.

The data is held in a zpool named orapool that is mounted on /oradata. On the gyruss cluster, the zpool consists of four disks from a single storage array, as shown in Listing 1. (Cluster gyruss consists of nodes pgyruss1 and pgyruss2, and cluster pyrus consists of nodes ppyrus1 and ppyrus2, as shown in Figure 1 in Part I of this series.)

pgyruss1 (root) # zpool history orapool 
History for 'orapool':
2011-07-29.01:19:56 zpool create -f orapool 
/dev/did/dsk/d1s0 /dev/did/dsk/d2s0 /dev/did/dsk/d3s0 
/dev/did/dsk/d4s0
2011-07-29.01:21:22 zfs set mountpoint=/oradata orapool
pgyruss1 (root) #
pgyruss1 (root) # zpool status orapool
  pool: orapool
 state: ONLINE
 scan: scrub repaired 0 in 0h5m with 0 errors on Thu Dec  1 07:16:25 2011
config:

        NAME                                       STATE     READ WRITE CKSUM
        orapool                                    ONLINE       0     0     0
          c4t600A0B80004823740000A57F4E2EA38Ad0s0  ONLINE       0     0     0
          c4t600A0B80004823740000A5804E2EA3B8d0s0  ONLINE       0     0     0
          c4t600A0B80004823820000D2214E2EA483d0s0  ONLINE       0     0     0
          c4t600A0B80004823740000A5814E2EA3E6d0s0  ONLINE       0     0     0

errors: No known data errors

Listing 1. Four Disks in the orapool Zpool

If you set up an equivalent production cluster, mirror the zpool across storage arrays to obtain the maximum data protection that zpools and ZFS can provide.
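
For maximum protection, each device in the pool could be mirrored with a device from a second storage array. The following layout is only a sketch; the DID devices d5s0 through d8s0 are placeholders for disks on the second array:

pgyruss1 (root) # zpool create -f orapool \
mirror /dev/did/dsk/d1s0 /dev/did/dsk/d5s0 \
mirror /dev/did/dsk/d2s0 /dev/did/dsk/d6s0 \
mirror /dev/did/dsk/d3s0 /dev/did/dsk/d7s0 \
mirror /dev/did/dsk/d4s0 /dev/did/dsk/d8s0
pgyruss1 (root) # zfs set mountpoint=/oradata orapool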

Configure the Oracle listener and statically register the database service (sales_DGMGRL) in preparation for the Oracle Data Guard feature's replication process. The resulting sqlnet.ora, listener.ora, and tnsnames.ora files appear as shown in Listing 2.

pgyruss1 (oracle) $ cat sqlnet.ora
NAMES.DIRECTORY_PATH= (TNSNAMES)

WALLET_LOCATION =
        (SOURCE =
                (METHOD = FILE)
                (METHOD_DATA =
                        (DIRECTORY = /oradata/wallet)
                )
        )

SQLNET.WALLET_OVERRIDE = TRUE

ADR_BASE = /u02/app/oracle
pgyruss1 (oracle) $
pgyruss1 (oracle) $ cat listener.ora
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = sales_DGMGRL)
      (ORACLE_HOME = /u02/app/oracle/product/11.2.0/dbhome_1)
      (SID_NAME = sales)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = vzgyruss1b)(PORT = 1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

ADR_BASE_LISTENER = /u02/app/oracle

pgyruss1 (oracle) $ cat tnsnames.ora
SALES =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vzgyruss1b)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = sales)
    )
  )

SALES_GEOADM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vzgyruss1b)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = sales)
    )
  )

LISTENER_SALES =
  (ADDRESS = (PROTOCOL = TCP)(HOST = vzgyruss1b)(PORT = 1521))

SALESDR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vzpyrus1a)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = salesdr)
    )
  )

SALESDR_GEOADM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vzpyrus1a)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = salesdr)
    )
  )

LISTENER_SALESDR =
  (ADDRESS = (PROTOCOL = TCP)(HOST = vzpyrus1a)(PORT = 1521))

Listing 2. Configuring the Oracle Listener

The reason for using the Transparent Network Substrate (TNS) names with the _GEOADM suffix will become apparent in Part VII of this series when you create the Oracle Solaris Cluster Geographic Edition configuration.

Copy the tnsnames.ora and sqlnet.ora files to the other Oracle Solaris 10 cluster node, as follows:

pgyruss1 (oracle) $ scp tnsnames.ora sqlnet.ora oracle@pgyruss2:`pwd`
Password: 
tnsnames.ora         100% |****************************************|  1044       00:00    
sqlnet.ora           100% |****************************************|   348       00:00

Configure Oracle Wallet Manager so that its files reside on shared storage.

Note: This configuration means that wallet-type access (that is, using a /@SID connect string) is available only from the node on which the /oradata file system is mounted. If you want to have wallet-type access available from all nodes, you must either maintain local copies of the wallet information or place the wallet files on a globally mounted file system instead.

The example in Listing 3 shows how to create wallet credentials for a user named hamon for an Oracle service called sales that will be used by the Oracle Solaris Cluster agent probe.

pgyruss1 (oracle) $ export ORACLE_HOME=/u02/app/oracle/product/11.2.0/dbhome_1
pgyruss1 (oracle) $ export PATH=$PATH:${ORACLE_HOME}/bin
pgyruss1 (oracle) $ mkstore -wrl /oradata/wallet -create
Oracle Secret Store Tool : Version 11.2.0.3.0 - Production
Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved.

Enter password:            
   
Enter password again:           
   
pgyruss1 (oracle) $ mkstore -wrl /oradata/wallet -createCredential sales hamon
Oracle Secret Store Tool : Version 11.2.0.3.0 - Production
Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line 
Enter your secret/Password:        
   
Re-enter your secret/Password:        
   
Enter wallet password:            
   
Create credential oracle.security.client.connect_string1

Listing 3. Creating Wallet Credentials

Build a simple Oracle database using the Oracle Database Configuration Assistant (DBCA). Select the Use Oracle-Managed Files option for your data files and set the base directory to /oradata.
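
If you prefer to script the database creation rather than click through DBCA, a silent-mode invocation along the following lines produces an equivalent database. Treat it as a sketch: verify the option names against dbca -help for your release, and substitute your own template and passwords.

pgyruss1 (oracle) $ dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbname sales -sid sales \
-sysPassword <sys_password> -systemPassword <system_password> \
-storageType FS -datafileDestination /oradata \
-characterSet AL32UTF8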

When the database is running, create an Oracle Database user account, as shown in Listing 4, so that the Oracle Solaris Cluster software's Oracle Database probe can check the health of the database using the credentials stored in Oracle Wallet Manager.

pgyruss1 (oracle) $ sqlplus /NOLOG

SQL*Plus: Release 11.2.0.3.0 Production on Tue Jan 17 09:34:57 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

SQL> connect sys/oracle as sysdba
Connected.

SQL> create user hamon identified by hamon;
User created.

SQL> alter user hamon default tablespace system quota 1m on system;
User altered.

SQL> grant select on v_$sysstat to hamon;
Grant succeeded.

SQL> grant select on v_$archive_dest to hamon;
Grant succeeded.

SQL> grant select on v_$database to hamon;
Grant succeeded.

SQL> grant create session to hamon;
Grant succeeded.

SQL> grant create table to hamon;
Grant succeeded.

Listing 4. Creating an Oracle Database User
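
Optionally, confirm that the wallet credential from Listing 3 and the new hamon account work together by connecting with the /@sales connect string from the node on which /oradata is mounted:

pgyruss1 (oracle) $ sqlplus /@sales

SQL> show user
USER is "HAMON"
SQL> quit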

To prepare for the data replication later, ensure that database logging and flashback are enabled and that a suitable number of standby redo logs are configured, as shown in Listing 5.

pgyruss1 (oracle) $ sqlplus /NOLOG

SQL*Plus: Release 11.2.0.3.0 Production on Tue Jan 17 09:34:57 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

SQL> connect sys/oracle as sysdba
Connected.
SQL> alter database force logging;

Database altered.

SQL> alter system set log_archive_dest_1='location=use_db_recovery_file_dest  \ 
valid_for=(all_logfiles,all_roles) db_unique_name=sales' scope=both;
System altered.

SQL> alter system set standby_file_management='AUTO' scope=both;

System altered.

SQL> alter database flashback on;

Database altered.

SQL> alter database add standby logfile size 50m;

Database altered.

SQL> /

Database altered.

SQL> /

Database altered.

SQL> quit

Listing 5. Preparation for Database Replication
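
Optionally, verify that logging, flashback, and the standby redo logs are in place by querying v$database and v$standby_log:

pgyruss1 (oracle) $ sqlplus /NOLOG

SQL> connect sys/oracle as sysdba
Connected.
SQL> select force_logging, flashback_on, log_mode from v$database;
SQL> select group#, bytes, status from v$standby_log;
SQL> quit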

Create an Oracle Solaris Cluster resource group (oracle-rg) to control the zpool (orapool) and the logical host name (vzgyruss1b) resource through which clients connect to the database, as shown in Listing 6.

pgyruss1 (root) # clrg create oracle-rg
pgyruss1 (root) # clrt register SUNW.HAStoragePlus
pgyruss1 (root) # clrslh create -g oracle-rg -h vzgyruss1b oracle-lh-rs
pgyruss1 (root) # clrs create -g oracle-rg -t SUNW.HAStoragePlus \
-p zpools=orapool oracle-hasp-rs
pgyruss1 (root) # clrt register SUNW.oracle_listener
pgyruss1 (root) # clrs create -g oracle-rg -t SUNW.oracle_listener \
-p resource_dependencies=oracle-lh-rs \
-p ORACLE_HOME=/u02/app/oracle/product/11.2.0/dbhome_1 oracle-lsnr-rs
pgyruss1 (root) # clrt register SUNW.oracle_server
pgyruss1 (root) # clrs create -g oracle-rg -t SUNW.oracle_server \
-p ORACLE_HOME=/u02/app/oracle/product/11.2.0/dbhome_1 \
-p ORACLE_SID=sales -p connect_string=/@sales \
-p alert_log_file=/u02/app/oracle/diag/rdbms/sales/sales/trace/alert_sales.log \
-p resource_dependencies_offline_restart=oracle-hasp-rs,oracle-lsnr-rs oracle-svr-rs
pgyruss1 (root) # clrg online -emM oracle-rg
pgyruss1 (root) # clrs status

=== Cluster Resources ===

Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
oracle-svr-rs       pgyruss1       Online       Online
                    pgyruss2       Offline      Offline

oracle-lsnr-rs      pgyruss1       Online       Online
                    pgyruss2       Offline      Offline

oracle-hasp-rs      pgyruss1       Online       Online
                    pgyruss2       Offline      Offline

oracle-lh-rs        pgyruss1       Online       Online - LogicalHostname online.
                    pgyruss2       Offline      Offline - LogicalHostname offline.

Listing 6. Creating a Resource Group

After creating the resource group and resources, propagate the local configuration files and directories, such as the following, to the second cluster node (an example copy sequence follows the list):

  • $ORACLE_HOME/dbs/orapwsales
  • $ORACLE_HOME/dbs/spfilesales.ora
  • $ORACLE_HOME/network/admin/listener.ora
  • $ORACLE_HOME/network/admin/tnsnames.ora
  • $ORACLE_HOME/network/admin/sqlnet.ora
  • /u02/app/oracle/admin/sales/adump
  • /u02/app/oracle/diag
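
For example, assuming the default locations listed above, the files can be copied and the directories recreated with commands such as the following (adjust the paths to match your installation):

pgyruss1 (oracle) $ cd ${ORACLE_HOME}/dbs
pgyruss1 (oracle) $ scp orapwsales spfilesales.ora oracle@pgyruss2:`pwd`
pgyruss1 (oracle) $ cd ${ORACLE_HOME}/network/admin
pgyruss1 (oracle) $ scp listener.ora tnsnames.ora sqlnet.ora oracle@pgyruss2:`pwd`
pgyruss1 (oracle) $ ssh oracle@pgyruss2 mkdir -p /u02/app/oracle/admin/sales/adump
pgyruss1 (oracle) $ scp -r /u02/app/oracle/diag oracle@pgyruss2:/u02/app/oracle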

Rather than keeping separate copies, you might decide that you want to have a single central location for these files and directories. If so, you must move the files to the appropriate location on shared disk and create the appropriate symbolic links instead.
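
For instance, the server parameter file could be moved to a directory on the shared zpool and replaced on each node by a symbolic link; the /oradata/dbs directory used here is an arbitrary illustrative choice:

pgyruss1 (oracle) $ mkdir /oradata/dbs
pgyruss1 (oracle) $ mv ${ORACLE_HOME}/dbs/spfilesales.ora /oradata/dbs/
pgyruss1 (oracle) $ ln -s /oradata/dbs/spfilesales.ora ${ORACLE_HOME}/dbs/spfilesales.ora
pgyruss2 (oracle) $ ln -s /oradata/dbs/spfilesales.ora ${ORACLE_HOME}/dbs/spfilesales.ora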

Note: Which approach you take is a matter of preference. A single copy will reduce the management overhead but, as a consequence, any error you make will be applicable to all nodes and, thus, might prevent your service from starting.

Then, as a final sanity check, test your configuration by switching the resource group between nodes to ensure that it comes online correctly on each node:

pgyruss1 (root) # clrg switch -n pgyruss1 oracle-rg
pgyruss1 (root) # clrg switch -n pgyruss2 oracle-rg
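
After each switch, you can confirm where the resource group is online with clrg status; the output shown here is illustrative:

pgyruss1 (root) # clrg status oracle-rg

=== Cluster Resource Groups ===

Group Name       Node Name       Suspended      Status
----------       ---------       ---------      ------
oracle-rg        pgyruss1        No             Offline
                 pgyruss2        No             Online
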
Revision 1.0, 05/01/2012
