Oracle9i Real Application Clusters Database Quick Start Install Guide for Sun ZFS Storage Appliance Using Oracle Linux

March 2011

By Sridhar Ranganathan and Jeffrey Wright

This quick start document provides step-by-step instructions on how to install and configure the Oracle Real Application Clusters (Oracle RAC) Release 9.2.0.8 database using the Sun ZFS Storage Appliance with NFSv3 protocol. Oracle provides cluster software for this version of Oracle RAC for systems running the Linux or Microsoft Windows operating systems. This document describes an implementation on a Linux system, with a focus on Oracle Linux 4U5 or Red Hat Enterprise Linux 4U5.

Contents:

Description of the Tested Configuration
Installing and Configuring Oracle RAC Nodes Running Oracle Linux
Configuring the Sun ZFS Storage Appliance
Installing the Oracle RAC 9.2.0.4 Database
Upgrading the Oracle RAC 9.2.0.4 Infrastructure to Oracle RAC 9.2.0.8
Creating the Database
Oracle Database Configuration Files
Additional Reference and Troubleshooting Information

Description of the Tested Configuration


Details for the tested configuration described in this paper are shown in the table below.

Item                        Description
Oracle RAC server nodes     Two Sun Fire 4200 servers from Oracle (with host names isv-4200x and isv-4200y)
NAS storage                 Sun Storage 7310 cluster from Oracle (with head node host names isv-7310a-h1 and isv-7310a-h2)
Network                     Gigabit Ethernet infrastructure; node interconnect (eth3) over a single GigE link
Server operating system     Oracle Linux 4U5 or Red Hat Enterprise Linux 4U5
Cluster software            Oracle 9.2.0.4, patched to Oracle 9.2.0.8
Oracle database software    Oracle 9.2.0.4, patched to Oracle 9.2.0.8


Note: For instructions on how to install Oracle9i RAC on Linux, refer to Note 184821.1, Step-By-Step Installation of 9.2.0.5 RAC on Linux, in the My Oracle Support knowledge base.

Installing and Configuring Oracle RAC Nodes Running Oracle Linux


Unless specified, complete the following steps on each Oracle RAC server node.

1. Download the ISO images listed below from the Oracle Linux site:

Enterprise-R4-U5-x86_64-disc1.iso
Enterprise-R4-U5-x86_64-disc2.iso
Enterprise-R4-U5-x86_64-disc3.iso
Enterprise-R4-U5-x86_64-disc4.iso

2. Install the operating system from the ISO images using the Oracle Integrated Lights Out Manager (Oracle ILOM) on each server, with the Java Console redirecting system output to the console window:

a. Mount the disc1 image.

b. During installation, provide appropriate information, such as the server host name, domain, and NIS information.

c. Choose the default installation.

3. After the installation is complete, configure yum:

a. Install yum and its prerequisite packages from the distribution CD, for example:

# rpm -ivh python-urlgrabber-2.9.8-0.3.el4.noarch.rpm

b. Create or update the yum repository configuration file.

# cd /etc/yum.repos.d
# vi public-yum-el4.repo
[el4_u6_base]
name=Enterprise Linux $releasever U6 - $basearch - base
baseurl=http://public-yum.oracle.com/repo/EnterpriseLinux/EL4/6/base/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-el4
gpgcheck=1
enabled=1

4. Install the RPM packages:

#yum install rsh-server
#yum install gcc
#yum install binutils
#yum install xauth
#yum install glibc-devel
#yum install compat-gcc-32
#yum install compat-gcc-32-c++

5. Update gcc to prevent failures during the Oracle link process:

#cd /usr/bin
#mv gcc gcc_old
#cp gcc32 gcc

6. Configure networking:

a. Configure the private interconnect (10.3.3.1 on node isv-4200x, 10.3.3.2 on node isv-4200y).

#ifconfig eth3 10.3.3.1 netmask 255.255.255.0 broadcast 10.3.3.255
#ifconfig eth3 up

b. Add entries in /etc/hosts.

[oracle@isv-4200x 9.2.0]$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1		localhost.localdomain localhost
172.20.67.65		isv-4200x.central.sun.com isv-4200x
#
# remote node
172.20.67.111		isv-4200y.central.sun.com	isv-4200y
# RAC information
10.3.3.1	isv-4200x-priv
10.3.3.2	isv-4200y-priv

7. Add the dba group and Oracle user:

# groupadd dba -g 501
# mkdir -p /export/home/oracle
# useradd -c "Oracle User" -G dba -u 101 -m \
     -d /export/home/oracle -s /bin/csh oracle

8. Set up the Oracle user profile:

[oracle@isv-4200x ~]$ cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
	. ~/.bashrc
fi
# User specific environment and startup programs
LD_ASSUME_KERNEL=2.4.19; export LD_ASSUME_KERNEL
ORACLE_SID=bench1
ORACLE_BASE=/oracle/products; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/9.2.0; export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32; \
    export LD_LIBRARY_PATH
PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin; export PATH
unset USERNAME

9. Set up rsh.

Note: Oracle9i RAC does not support ssh (Secure Shell); rsh must be used for remote operations.

a. Set up .rhosts (in the oracle user's home directory, /export/home/oracle) and /etc/hosts.equiv with the following entries:

isv-4200x
isv-4200y
isv-4200x-priv
isv-4200y-priv

b. On each node, under the /etc/xinetd.d directory, edit rsh, rlogin, rexec, klogin, eklogin, and so on, setting disable = no (or enable the services with chkconfig, as shown below).
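
Alternatively, the key services can be enabled with chkconfig, which sets disable = no in the corresponding files under /etc/xinetd.d:

# chkconfig rsh on
# chkconfig rlogin on
# chkconfig rexec on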

c. Restart the xinetd daemon.

# /etc/init.d/xinetd restart

d. Disable iptables on each node. Otherwise, rsh fails.

# service iptables save
# service iptables stop
# chkconfig iptables off

e. Test running a command on the second node from both the public and the private interface on the first node.
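
For example, as the oracle user on isv-4200x, the following commands should return the remote host name without prompting for a password (host names as defined in /etc/hosts above):

$ rsh isv-4200y hostname
$ rsh isv-4200y-priv hostname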

10. Configure hangcheck-timer.

Note: In Oracle RAC 9.2.0.4 and newer releases, the kernel-based hangcheck-timer is used instead of a watchdog timer.

a. Update /etc/modprobe.conf by adding this line (which is used for future reboots):

options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

b. Run hangcheck-timer manually the first time by entering the following:

# modprobe hangcheck-timer hangcheck_tick=1 hangcheck_margin=180 \
     hangcheck_reboot=1

c. Verify the module is loaded by entering the following:

# lsmod | grep hang
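
The module also reports its settings to the kernel log when it loads; checking there is a quick sanity test (the exact message text varies by kernel version):

# dmesg | grep -i hangcheck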

The infrastructure updates required to install and configure Oracle RAC are now complete.

Configuring the Sun ZFS Storage Appliance


To configure the Sun ZFS Storage Appliance, complete the following steps using either the command line interface (CLI) or the graphical user interface (GUI).

1. Configure the storage pool with mirrored protection. Configure the log profile as striped protection for performance-optimized systems and as mirrored protection for reliability-optimized systems.

2. Create an oracle9irac project.

3. On the General tab for the oracle9irac project, disable the Update access time on read property and set the default mount point to /export/oracle9irac.

4. On the Protocols tab for the oracle9irac project, add NFS exceptions granting root access to the Oracle RAC database server nodes (isv-4200x and isv-4200y).

5. Create the following shares for the oracle9irac project (an example CLI session follows the list):

quorum
   Record size: 8k
   Mount point on Sun ZFS Storage Appliance: /export/oracle9irac/quorum
   Data stored: Quorum and shared configuration file required by the Oracle cluster manager

oradata
   Record size: 8k
   Mount point on Sun ZFS Storage Appliance: /export/oracle9irac/oradata
   Data stored: Data files

oralog
   Record size: 128k
   Mount point on Sun ZFS Storage Appliance: /export/oracle9irac/oralog
   Data stored: Online logs and control files

oraarchive
   Record size: 128k
   Mount point on Sun ZFS Storage Appliance: /export/oracle9irac/oraarchive
   Data stored: Archived logs

oraclehome_4200x
   Record size: 128k
   Mount point on Sun ZFS Storage Appliance: /export/oracle9irac/oraclehome_4200x
   Data stored: $ORACLE_HOME for node isv-4200x (required only if the binaries are stored on the NAS)

oraclehome_4200y
   Record size: 128k
   Mount point on Sun ZFS Storage Appliance: /export/oracle9irac/oraclehome_4200y
   Data stored: $ORACLE_HOME for node isv-4200y (required only if the binaries are stored on the NAS)
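
If you prefer the appliance CLI, the following session shows the general pattern for creating the project and one of the shares. Treat it as a sketch: the prompts and property names are assumptions based on the shares context of the appliance CLI and may differ by software release, and the NFS root-access exceptions are easiest to set from the Protocols screen of the BUI.

isv-7310a-h1:> shares
isv-7310a-h1:shares> project oracle9irac
isv-7310a-h1:shares oracle9irac (uncommitted)> set mountpoint=/export/oracle9irac
isv-7310a-h1:shares oracle9irac (uncommitted)> set atime=false
isv-7310a-h1:shares oracle9irac (uncommitted)> commit
isv-7310a-h1:shares> select oracle9irac
isv-7310a-h1:shares oracle9irac> filesystem quorum
isv-7310a-h1:shares oracle9irac/quorum (uncommitted)> set recordsize=8K
isv-7310a-h1:shares oracle9irac/quorum (uncommitted)> commit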


Installing the Oracle RAC 9.2.0.4 Database


To install the Oracle RAC 9.2.0.4 database, complete these steps.

1. Add entries to /etc/fstab and mount the NFS shares using the mount options recommended for Oracle database files (each fstab entry is a single line; the backslashes below indicate wrapping for readability only):

# cat /etc/fstab | grep ora
isv-7310a-h1:/export/oracle9irac/oraclehome_4200x \
   /oracle/products/9.2.0     nfs     rw,bg,hard,nointr,tcp,\
   vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noacl,suid 0 0
isv-7310a-h1:/export/oracle9irac/quorum  /oradata/cluster     nfs \
   rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,\
   wsize=32768,actimeo=0,noacl  0 0
isv-7310a-h1:/export/oracle9irac/oradata  /oradata/data     nfs \
   rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,\
   wsize=32768,actimeo=0,noacl  0 0
isv-7310a-h1:/export/oracle9irac/oralog  /oradata/logs     nfs \
   rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,\
   wsize=32768,actimeo=0,noacl  0 0
isv-7310a-h1:/export/oracle9irac/oraarchive  /oradata/archive     nfs \
   rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,\
   wsize=32768,actimeo=0,noacl  0 0

2. Create the mount points and mount the shares:

# mkdir -p /oradata/data /oradata/logs /oradata/archive /oradata/cluster
# mkdir -p /oracle/products/9.2.0
# mount /oradata/data
# mount /oradata/logs
# mount /oradata/archive
# mount /oradata/cluster
# mount /oracle/products/9.2.0
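
To confirm that the shares are mounted with the intended NFS options, you can list the mounted file systems and filter on the storage appliance host name (a quick verification step):

# mount | grep isv-7310a-h1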

3. Create a quorum file and shared configuration file:

$ dd if=/dev/zero of=/oradata/cluster/quorum.dbf bs=4096 count=300
$ touch /oradata/cluster/SharedConfig.dbf

4. Change the ownership of the directories /oradata and /oracle to oracle:dba.
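
For example, as root (recursively, so that the directories under the mount points are owned by oracle as well):

# chown -R oracle:dba /oradata /oracle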

5. Install the Oracle RAC 9.2.0.4 infrastructure (the cluster manager and the Oracle database software); it is patched to version 9.2.0.8 in the next section:

a. As the user oracle, install the cluster manager from the 9.2.0.4 disk1.

$ ./runInstaller

b. Choose the Cluster Manager option:

$ORACLE_HOME will be populated as /oracle/products/9.2.0. The installation process installs the Oracle cluster manager files in the directory $ORACLE_HOME/oracm. It also populates $ORACLE_HOME/oracm/admin/cmcfg.ora on both nodes.

c. After installation, exit the GUI. To view the files that have been installed, do the following:

$ cd $ORACLE_HOME
$ ls
bin inventory lib oracm quorum root.sh
$ cd oracm
$ ls
admin bin log
$ cd admin
$ ls
cmcfg.ora.sbs cmcfg.ora.tmp cmcfg.sbs cmcfg.sh ocmargs.ora

d. From the first node, start the Oracle cluster manager as root:

$ su -
# export ORACLE_HOME=/oracle/products/9.2.0
# cd $ORACLE_HOME/oracm/bin
# ./ocmstart.sh

e. Monitor $ORACLE_HOME/oracm/log/cm.log. To check that the Oracle cluster manager (oracm) is running, enter the following:

# ps -ef | grep oracm

Note: Do not start the Oracle cluster manager from the second node. If you start ocmstart.sh from the second node, the oracm processes on both nodes may crash.

f. Restart the installation GUI from the Oracle RAC 9.2.0.4 runInstaller and choose the Real Application Clusters option.

The GUI should display the list of nodes. If the cluster is not installed properly, the node list will not appear. In that case, abort the installation and verify that the oracm processes are running. If they are not, repeat the ocmstart process (starting at step 5d above) and then perform the installation.

The Oracle RAC 9.2.0.4 cluster manager and RAC database binaries are now installed.

Upgrading the Oracle RAC 9.2.0.4 Infrastructure to Oracle RAC 9.2.0.8


To upgrade the Oracle RAC infrastructure from version 9.2.0.4 to 9.2.0.8, do the following.

1. Download patch set number 4547809, which is available at My Oracle Support.

2. Stop the oracm processes by entering the following:

# pkill oracm

3. Patch the cluster manager first.

Note: Use the runInstaller under the oracm directory of the patch set, not the runInstaller under the disk1 directory.

$ cd patch(9.2.0.8)/disk1/oracm
$ ./runInstaller

4. After patching the cluster manager, enter the following:

$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -updateNodeList -noClusterEnabled -local \
     ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=isv-4200x,isv-4200y

5. Upgrade Oracle RAC by entering the following:

$ $ORACLE_HOME/oui/bin/runInstaller

6. Update the ocmstart.sh script to avoid an oracm crash when oracm is started on the second node. For more details, refer to Note 556042.1: Oracm Crash On 1st Node When Try to Start it Up on 2nd Node in the My Oracle Support knowledge base.

# cd $ORACLE_HOME/oracm/bin

7. Edit ocmstart.sh on both nodes and update the redhat-release condition so that LD_ASSUME_KERNEL is set to 2.4.19:

if [ -e /etc/redhat-release ]
then
  version=`rpm -qf /etc/redhat-release --queryformat "%{VERSION}\n"`
  LD_ASSUME_KERNEL=2.4.19
  export LD_ASSUME_KERNEL
fi

8. Ensure KernelModuleName is set to hangcheck-timer in the cmcfg.ora file on both nodes:

# cat $ORACLE_HOME/oracm/admin/cmcfg.ora
ClusterName=Oracle Cluster Manager, version 9i
MissCount=300
PrivateNodeNames=isv-4200x-priv isv-4200y-priv
PublicNodeNames=isv-4200x isv-4200y
ServicePort=9998
CmDiskFile=/oradata/cluster/quorum.dbf
HostName=isv-4200x-priv
KernelModuleName=hangcheck-timer
PollInterval=1000

9. As the root user, start oracm on both nodes by running $ORACLE_HOME/oracm/bin/ocmstart.sh, and verify that the oracm processes are running:

$ ps -ef | grep oracm
root     24917     1  0 Jun09 ?        00:00:00 oracm
root     24919 24917  0 Jun09 ?        00:00:00 oracm
root     24920 24919  0 Jun09 ?        00:00:00 oracm
root     24921 24919  0 Jun09 ?        00:00:00 oracm
root     24922 24919  0 Jun09 ?        00:00:02 oracm
root     24923 24919  0 Jun09 ?        00:00:00 oracm
root     24924 24919  0 Jun09 ?        00:00:00 oracm
root     24925 24919  0 Jun09 ?        00:00:00 oracm
root     24926 24919  0 Jun09 ?        00:00:00 oracm

The database and cluster manager are now updated to version 9.2.0.8.

10. Enter the following to verify the current release version:

[oracle@isv-4200y ~]$ sqlplus

You should see output similar to the following:

SQL*Plus: Release 9.2.0.8.0 - Production on Wed Jun 10 04:38:23 2009
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Creating the Database


1. Prior to creating the database, start the Global Services Daemon (GSD) with gsdctl. Verify that /var/opt/oracle/srvConfig.loc points to the shared configuration file, make it accessible to the oracle user, and then start GSD:

$ cat /var/opt/oracle/srvConfig.loc
srvconfig_loc=/oradata/cluster/SharedConfig.dbf

$ chmod 777 /var/opt/oracle/srvConfig.loc
$ chown oracle:dba /var/opt/oracle/srvConfig.loc

$ gsdctl start
Successfully started GSD on local node
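
You can check the daemon on each node with the stat subcommand of gsdctl (gsdctl supports start, stop, and stat):

$ gsdctl stat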

Note: If gsdctl fails to start and the error PRKR-1064 is displayed, refer to Note 212631.1, Resolving PRKR-1064 in a RAC Environment, in the My Oracle Support knowledge base for a resolution.

2. If the Cluster Configuration Assistant is unable to configure and start GSD on the cluster, stop and cancel all remaining configuration tools, exit the Oracle Universal Installer (OUI), and use the following command to manually configure GSD from one node:

$ srvconfig -init

3. Create the database using the Database Configuration Assistant (dbca).

$ dbca
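
After the database is created, a quick way to confirm the software version and that both instances are up is to query v$version and gv$instance from SQL*Plus on one of the nodes (a hedged check, assuming the environment set in the oracle user profile above):

$ sqlplus "/ as sysdba"
SQL> SELECT banner FROM v$version;
SQL> SELECT instance_name, host_name, status FROM gv$instance;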

Oracle Database Configuration Files


The content of the initialization parameter file (init.ora) on isv-4200x is listed below:

control_files = ( "/oradata/logs/control0.ctl", \
   "/oradata/logs/control1.ctl" )

background_dump_dest = ?/admin/bench/bdump
core_dump_dest = ?/admin/bench/cdump
user_dump_dest = ?/admin/bench/udump
#diagnostic_dest=/oradata/alerts

log_checkpoint_interval = 10000
log_checkpoint_timeout = 1800
log_archive_start = true
log_archive_dest_1 = "location=/oradata/archive/bench1/"
log_archive_format = %t_%s_%r.arc

db_cache_size = 256M
db_recycle_cache_size = 64M
db_keep_cache_size = 500M

log_buffer = 104857600
shared_pool_size = 256M
large_pool_size = 8M
java_pool_size = 8M

undo_management = auto

db_name = bench
instance_name = bench1
service_names = bench1
db_files = 200
db_file_multiblock_read_count = 128

processes=400

nls_date_format = "MM/DD/YYYY HH24:MI:SS"
db_block_size = 8192

filesystemio_options = SETALL

# RAC related
cluster_database=true
cluster_database_instances=2


bench1.instance_number=1
bench2.instance_number=2
bench2.thread=2
bench1.thread=1
bench1.undo_tablespace=UNDOTBS1
bench2.undo_tablespace=UNDOTBS2

compatible=9.2.0.0.0
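
Once an instance is started with this parameter file, the RAC-related settings can be confirmed from SQL*Plus, for example:

SQL> show parameter cluster_database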

The content of the listener configuration file listener.ora on isv-4200x is shown below:

$ cat $ORACLE_HOME/network/admin/listener.ora
# $Id: listener.ora,v 1.3 2003/05/29 15:36:18 oracle Exp $
# $Header: /home/jw189658/cvsroot/orabench/env/listener.ora,\
   v 1.3 2003/05/29 15:36:18 oracle Exp $
# $Date: 2003/05/29 15:36:18 $

LISTENER_ISV-4200X =
 (DESCRIPTION_LIST =
   (DESCRIPTION =
   (ADDRESS_LIST =
   (ADDRESS = (PROTOCOL = TCP)(HOST = isv-4200x)(PORT = 1521)\
      (IP = FIRST))
   )
     (ADDRESS_LIST =
   (ADDRESS = (PROTOCOL = TCP)(HOST = 172.20.67.65)(PORT = 1521)\
      (IP = FIRST))
   )
     (ADDRESS_LIST =
   (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
   )
   )
 )

SID_LIST_LISTENER =
 (SID_LIST =
   (SID_DESC =
   (SID_NAME = PLSExtProc)
   (ORACLE_HOME = /oracle/products/9.2.0)
   (PROGRAM = extproc)
   )
   (SID_DESC =
   (GLOBAL_DBNAME = bench)
   (ORACLE_HOME = /oracle/products/9.2.0)
   (SID_NAME = bench1)
   )
 )


TRACE_FILE_LISTENER = listener.trc
STARTUP_WAIT_TIME_LISTENER = 0
LOG_FILE_LISTENER = listener.log
TRACE_LEVEL_LISTENER = OFF
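
After editing listener.ora, the listener can be started and checked with the lsnrctl utility, using the listener name defined above:

$ lsnrctl start LISTENER_ISV-4200X
$ lsnrctl status LISTENER_ISV-4200X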

The content of the net service configuration file tnsnames.ora is shown below:

$ cat tnsnames.ora
# $Id: tnsnames.ora,v 1.4 2003/05/29 15:36:18 oracle Exp $
# $Header: /home/jw189658/cvsroot/orabench/env/tnsnames.ora,\
   v 1.4 2003/05/29 15:36:18 oracle Exp $
# $Date: 2003/05/29 15:36:18 $

BENCH =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = isv-4200x)(PORT = 1521))
     (ADDRESS = (PROTOCOL = TCP)(HOST = isv-4200y)(PORT = 1521))
     (LOAD_BALANCE = on)
     (FAILOVER = on)
   )
   (CONNECT_DATA =
     (SERVICE_NAME = bench)
     (FAILOVER_MODE =
       (TYPE = session)
       (METHOD = basic)
     )
   )
 )

BENCH1 =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = isv-4200x)(PORT = 1521))
   )
   (CONNECT_DATA =
     (SERVICE_NAME = BENCH)
   )
 )

BENCH2 =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = isv-4200y)(PORT = 1521))
   )
   (CONNECT_DATA =
     (SERVICE_NAME = BENCH)
   )
 )
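
The net service names can be validated from either node with the tnsping utility, for example:

$ tnsping bench
$ tnsping bench1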

Additional Reference and Troubleshooting Information


The ocmstart.sh command must be executed by the root user, and ORACLE_HOME must be set before it is run. The startcm.sh wrapper script below sets ORACLE_HOME, clears stale timestamp files from the log directory, starts the cluster manager, and then checks that the oracm processes are running:

# cat startcm.sh
export ORACLE_HOME=/oracle/products/9.2.0
/bin/rm $ORACLE_HOME/oracm/log/*.ts
$ORACLE_HOME/oracm/bin/ocmstart.sh
echo "Please wait ..."
sleep 5
ps -ef | grep oracm

The following df -k output shows the NFS shares from the Sun ZFS Storage Appliance mounted on node isv-4200x:

[root@isv-4200x ~]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      68378168  14836576  50068184  23% /
/dev/sda1               101086     15025     80842  16% /boot
none                   1027668         0   1027668   0% /dev/shm
isv-7310a-h1:/export/oracle9irac/oraclehome_4200x
                    4780554816   4516128 4776038688   1% /oracle/products/9.2.0
isv-7310a-h1:/export/oracle9irac/quorum
                    4776041984      3296 4776038688   1% /oradata/cluster
isv-7310a-h1:/export/oracle9irac/oradata
                    4828547264  52508576 4776038688   2% /oradata/data
isv-7310a-h1:/export/oracle9irac/oralog
                    4778163808   2125120 4776038688   1% /oradata/logs
isv-7310a-h1:/export/oracle9irac/oraarchive
                    4776038720        32 4776038688   1% /oradata/archive
Revision 1.0, 03/09/2011