<table>
  <tr> 
    <td>
                        
                           
Installing Oracle RAC 10g Release 1 on Linux x86
by John Smiley
Learn the basics of installing Oracle RAC 10g Release 1 on Red Hat Enterprise Linux or Novell SUSE Enterprise Linux, from the bare metal up (for evaluation purposes only)
Contents
Overview
Background
Part I: Install Linux
Part II: Configure Linux for Oracle
Part III: Prepare the Shared Disks
Part IV: Install Oracle RAC Software
Conclusion
<hr> Overview
This is the second in a series of guides that provide all the steps for installing the major components of Oracle Database 10g software on Linux. All five of the certified English-language distributions of Linux are covered in detail (Asianux is not covered), and the guides assume that inexpensive Intel x86 hardware is being used. The guides walk you through the process of installation and configuration on commodity hardware for the purpose of evaluating the major Oracle 10g products. The ultimate goal of this series is to help you install and configure all the components of an Oracle 10g Grid.
This guide will take the approach of offering the easiest paths, with the fewest number of steps, for accomplishing a task. This approach often means making configuration choices that would be inappropriate for anything other than an evaluation. For that reason, this guide is not appropriate for building production-quality environments, nor does it reflect best practices.
The five Linux distributions certified for Oracle Database 10g that are covered here are: <ul> <li>Red Hat Enterprise Linux 4 (RHEL4)</li> <li>Red Hat Enterprise Linux 3 (RHEL3)</li> <li>Red Hat Enterprise Linux 2.1 (RHEL2.1)</li> <li>Novell SUSE Linux Enterprise Server 9</li> <li>Novell SUSE Linux Enterprise Server 8</li> </ul> If you are new to Linux and/or Oracle, this guide is for you. It starts with the basics and walks you through an installation of Oracle Database 10g from the bare metal up.
This guide is divided into four parts: Part I covers the installation of the Linux operating system, Part II covers configuring Linux for Oracle, Part III discusses the essentials of partitioning shared disk, and Part IV covers installation of the Oracle software.

<hr> Background
An Oracle RAC 10g configuration consists of the major components described below. Nodes in the cluster are typically separate servers (hosts).

Hardware
At the hardware level, each node in a RAC cluster shares three things: <ol> <li>Access to shared disk storage</li> <li>Connection to a private network</li> <li>Access to a public network</li> </ol>
Shared Disk Storage
Oracle RAC relies on a shared disk architecture. The database files, online redo logs, and control files for the database must be accessible to each node in the cluster. The shared disks also store the Oracle Cluster Registry and Voting Disk (discussed later). There are a variety of ways to configure shared storage including direct attached disks (typically SCSI over copper or fiber), Storage Area Networks (SAN), and Network Attached Storage (NAS).
Private Network
Each cluster node is connected to all other nodes via a private high-speed network, also known as the cluster interconnect or high-speed interconnect (HSI). This network is used by Oracle's Cache Fusion technology to effectively combine the physical memory (RAM) in each host into a single cache. Oracle Cache Fusion allows data stored in the cache of one Oracle instance to be accessed by any other instance by transferring it across the private network. It also preserves data integrity and cache coherency by transmitting locking and other synchronization information across cluster nodes.
The private network is typically built with Gigabit Ethernet, but for high-volume environments, many vendors offer proprietary low-latency, high-bandwidth solutions specifically designed for Oracle RAC. Linux also offers a means of bonding multiple physical NICs into a single virtual NIC (not covered here) to provide increased bandwidth and availability.
Public Network
To maintain high availability, each cluster node is assigned a virtual IP address (VIP). In the event of host failure, the failed node's IP address can be reassigned to a surviving node to allow applications to continue accessing the database through the same IP address.
Configuring the Cluster Hardware
There are many, many different ways to configure the hardware for an Oracle RAC cluster. Our configuration here uses two servers with two CPUs, 1GB RAM, two Gigabit Ethernet NICs, a dual channel SCSI host bus adapter (HBA), and eight SCSI disks connected via copper to each host (four disks per channel). The disks were configured as Just a Bunch Of Disks (JBOD)—that is, with no hardware RAID controller.
Software
At the software level, each node in a RAC cluster needs: <ol> <li>An operating system</li> <li>Oracle Cluster Ready Services</li> <li>Oracle RAC software</li> <li>Optionally, an Oracle Automatic Storage Management instance</li> </ol>
Operating System
Oracle RAC is supported on many different operating systems. This guide focuses on Linux. The operating system must be properly configured for Oracle, including installing the necessary software packages, setting kernel parameters, configuring the network, establishing an account with the proper security, configuring disk devices, and creating directory structures. All of these tasks are described in this guide.
Oracle Cluster Ready Services
Oracle RAC 10g introduced Oracle Cluster Ready Services (CRS), a platform-independent set of system services for cluster environments. In prior releases of RAC and Oracle Parallel Server, Oracle relied on vendor-supplied cluster management software to provide these services. Although CRS works in concert with vendor-supplied clusterware, the only required component for Oracle RAC 10g is CRS. Indeed, CRS must be installed prior to installing RAC.
CRS maintains two files: the Oracle Cluster Registry (OCR) and the Voting Disk. The OCR and the Voting Disk must reside on shared disks as either raw partitions or files in a cluster filesystem. This guide describes creating the OCR and Voting Disks using both methods and walks through the CRS installation.
Oracle RAC Software
Oracle RAC 10g software is the heart of the RAC database and must be installed on each cluster node. Fortunately, the Oracle Universal Installer (OUI) does most of the work of installing the RAC software on each node. You only have to install RAC on one node—OUI does the rest.
Oracle Automatic Storage Management (ASM)
ASM is a new feature in Oracle Database 10g that provides the services of a filesystem, logical volume manager, and software RAID in a platform-independent manner. Oracle ASM can stripe and mirror your disks, allow disks to be added or removed while the database is under load, and automatically balance I/O to remove "hot spots." It also supports direct and asynchronous I/O and implements the Oracle Data Manager API (simplified I/O system call interface) introduced in Oracle9i.
Oracle ASM is not a general-purpose filesystem and can be used only for Oracle data files, redo logs, control files, and the RMAN Flash Recovery Area. Files in ASM can be created and named automatically by the database (by use of the Oracle Managed Files feature) or manually by the DBA. Because the files stored in ASM are not accessible to the operating system, the only way to perform backup and recovery operations on databases that use ASM files is through Recovery Manager (RMAN).
ASM is implemented as a separate Oracle instance that must be up if other databases are to be able to access it. Memory requirements for ASM are light: only 64MB for most systems. In Oracle RAC environments, an ASM instance must be running on each cluster node.
<hr>
Part I: Installing Linux
Install and configure Linux as described in the first guide in this series. You will need three IP addresses for each server: one for the private network, one for the public network, and one for the virtual IP address. Use the operating system's network configuration tools to assign the private and public network addresses. Do not assign the virtual IP address using the operating system's network configuration tools; this will be done by the Oracle Virtual IP Configuration Assistant (VIPCA) during Oracle RAC software installation.
(A note about orarun.rpm for Novell SUSE environments: Novell provides a package called orarun.rpm that is designed to simplify the installation and administration of Oracle products on SLES. While it is an excellent tool, its use requires a different set of installation steps. This guide forgoes the use of orarun.rpm in favor of a uniform set of installation instructions that apply to both SUSE and Red Hat.)
Red Hat Enterprise Linux 4 (RHEL4)
Required Kernel:
2.6.9-5.EL or higher
Verify kernel version:

# uname -r
2.6.9-5.ELsmp
Verify installed packages:

# rpm -q make gcc compat-db
make-3.80-5
gcc-3.4.3-9.EL4
compat-db-4.1.25-9
Red Hat Enterprise Linux 3 (RHEL3)

Required Kernel:
2.4.21-4.EL or higher

Verify kernel version:

# uname -r
2.4.21-4.0.1.ELsmp

Verify installed packages:

# rpm -q make binutils gcc compat-db compat-gcc compat-gcc-c++ compat-libstdc++ \
  compat-libstdc++-devel openmotif setarch
make-3.79.1-17
binutils-2.14.90.0.4-26
gcc-3.2.3-20
compat-db-4.0.14-5
compat-gcc-7.3-2.96.122
compat-gcc-c++-7.3-2.96.122
compat-libstdc++-7.3-2.96.122
compat-libstdc++-devel-7.3-2.96.122
openmotif-2.2.2-16
setarch-1.3-1
Red Hat Enterprise Linux 2.1
Required Kernel:
2.4.9-e.25 or higher

Verify kernel version:

# uname -r
2.4.9-e.27smp

Verify installed packages:

# rpm -q gcc make binutils openmotif glibc
gcc-2.96-118.7.2
make-3.79.1-8
binutils-2.11.90.0.8-12
openmotif-2.1.30-11
glibc-2.2.4-32.8
SUSE Linux Enterprise Server 9 (SLES9)

Required Package Sets:
Basis Runtime System
YaST
Graphical Base System
Linux Tools
KDE Desktop Environment
C/C++ Compiler and Tools (not selected by default)

Do not install:
Authentication Server (NIS, LDAP, Kerberos)
 

Required Kernel:
2.6.5-7.5 or higher

Verify kernel version:

# uname -r
2.6.5-7.97-smp

Verify installed packages:

# rpm -q make gcc gcc-c++ libaio libaio-devel openmotif-libs
make-3.80-184.1
gcc-3.3.3-43.24
gcc-c++-3.3.3-43.24
libaio-0.3.98-18.3
libaio-devel-0.3.98-18.3
openmotif-libs-2.2.2-519.1
SUSE Linux Enterprise Server 8 (SLES8)

The minimum required kernel version depends upon which shared storage option you choose:

Storage Option         Kernel Version(s)
Raw                    2.4.21-138 or higher
ASM with Raw Devices   2.4.21-138 or higher
ASM with ASMLib        2.4.21-251 or higher
OCFS v1 (1.0.14-1)     2.4.21-266 or higher

Verify kernel version:

# uname -r
k_smp-2.4.21-215

In order to install Cluster Ready Services, you will also need the ncompress package (ncompress-4.2.4-24.i386.rpm or later), available from rpmfind.net.

The ncompress package conflicts with the gzip package, so use the --force command line option to rpm.

Example:

# rpm -ivh --force ncompress-4.2.4-36.i386.rpm
# rpm -q gcc make binutils openmotif ncompress
gcc-3.2.2-38
make-3.79.1-407
binutils-2.12.90.0.15-50
openmotif-2.2.2-124
ncompress-4.2.4-36


Part II: Configure Linux for Oracle

Create the Oracle Groups and User Account

Next we'll create the Linux groups and user account that will be used to install and maintain the Oracle 10g software. The user account will be called 'oracle' and the groups will be 'oinstall' and 'dba'. Execute the following commands as root on the first cluster host:

# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd dba
# /usr/sbin/useradd -m -g oinstall -G dba oracle
# id oracle
uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba)
On the remaining cluster nodes, create the same groups and user account, specifying the user and group IDs explicitly so that they match the first node:
# /usr/sbin/groupadd -g 501 oinstall
# /usr/sbin/groupadd -g 502 dba
# /usr/sbin/useradd -m -u 501 -g oinstall -G dba oracle
# id oracle
uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba)
# passwd oracle
Changing password for user oracle.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Create Mount Points

These mount points follow the conventions described in the Oracle Database 10g Installation Guide.

Issue the following commands as root:

# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle
Configure Kernel Parameters

As root, append the following kernel parameter settings to /etc/sysctl.conf on each cluster node:

cat >> /etc/sysctl.conf <<EOF
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=262144
net.core.wmem_max=262144
EOF
/sbin/sysctl -p

On SLES, also enable the boot.sysctl service so the settings are applied at boot:

/sbin/chkconfig boot.sysctl on
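You can confirm that the new settings are active by reading them back from /proc (a quick sanity check; these paths are standard on 2.4 and 2.6 kernels):

```shell
# Read the running values back from the kernel; they should match
# the values written to /etc/sysctl.conf above.
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/sem
cat /proc/sys/fs/file-max
cat /proc/sys/net/ipv4/ip_local_port_range
```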
Setting Shell Limits for the oracle User

cat >> /etc/security/limits.conf <<EOF
oracle          soft    nproc   2047
oracle          hard    nproc   16384
oracle          soft    nofile  1024
oracle          hard    nofile  65536
EOF

cat >> /etc/pam.d/login <<EOF
session    required     /lib/security/pam_limits.so
EOF
On RHEL, add the following to /etc/profile:

cat >> /etc/profile <<EOF
if [ \$USER = "oracle" ]; then  
   if [ \$SHELL = "/bin/ksh" ]; then
       ulimit -p 16384
       ulimit -n 65536
   else
       ulimit -u 16384 -n 65536
   fi
   umask 022
fi
EOF

cat >> /etc/csh.login <<EOF
if ( \$USER == "oracle" ) then
   limit maxproc 16384
   limit descriptors 65536
   umask 022
endif
EOF
On SLES, use /etc/profile.local instead:

cat >> /etc/profile.local <<EOF
if [ \$USER = "oracle" ]; then  
   if [ \$SHELL = "/bin/ksh" ]; then
       ulimit -p 16384
       ulimit -n 65536
   else
       ulimit -u 16384 -n 65536
   fi
   umask 022
fi
EOF

cat >> /etc/csh.login.local <<EOF
if ( \$USER == "oracle" ) then
   limit maxproc 16384
   limit descriptors 65536
   umask 022
endif
EOF
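After the limit scripts are in place, a new login session as oracle should show the raised limits. A quick check (the targets are the values set above):

```shell
# Run as the oracle user in a fresh login shell.
ulimit -u    # max user processes; the target set above is 16384
ulimit -n    # max open file descriptors; the target set above is 65536
```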
SLES8 and SLES9: Avoid the Bug!

A bug in the installation of Oracle Enterprise Manager 10g on SLES8 and SLES9 will cause it to fail due to unavailable network ports. The OEM DBConsole needs port 1830 and in SLES environments this port is already reserved in /etc/services. This bug is documented on MetaLink as bug# 3513603.
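One possible workaround, sketched below, is to comment out whatever entry reserves port 1830 so the DBConsole can bind to it. Inspect your /etc/services first, since the entry name varies between releases; the sed expressions only assume the standard `name port/proto` format:

```shell
# Show which entry currently claims port 1830
grep -n '1830/' /etc/services

# Comment the entry out in place (a backup is kept as /etc/services.orig)
sed -i.orig -e 's|^\([^#].*1830/tcp\)|# \1|' \
            -e 's|^\([^#].*1830/udp\)|# \1|' /etc/services
```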

Configure the Hangcheck Timer

All RHEL releases:

modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
cat >> /etc/rc.d/rc.local <<EOF
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
EOF
All SLES releases:

modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
cat >> /etc/init.d/boot.local <<EOF
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
EOF
Configure /etc/hosts

Each node's /etc/hosts file should contain entries for the public, private, and virtual addresses of every node; for example:

127.0.0.1         localhost.localdomain localhost

192.168.100.51    ds1-priv.orademo.org    ds1-priv        # ds1 private
192.168.100.52    ds2-priv.orademo.org    ds2-priv        # ds2 private
192.168.200.51    ds1.orademo.org         ds1             # ds1 public
192.168.200.52    ds2.orademo.org         ds2             # ds2 public
192.168.200.61    ds1-vip.orademo.org     ds1-vip         # ds1 virtual
192.168.200.62    ds2-vip.orademo.org     ds2-vip         # ds2 virtual
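After updating /etc/hosts on both nodes, a quick lookup of each name helps catch typos before they surface during the CRS install (the host names are the examples above):

```shell
# Each name should resolve to the address listed in /etc/hosts
for name in ds1 ds1-priv ds1-vip ds2 ds2-priv ds2-vip; do
    getent hosts "$name"
done
```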
Configure SSH for User Equivalence

During installation, OUI must be able to copy files to the other cluster nodes as the oracle user without being prompted for a password. Generate SSH keys as the oracle user on each node:

$ mkdir ~/.ssh
$ chmod 755 ~/.ssh
$ /usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
4b:df:76:77:72:ba:31:cd:c4:e2:0c:e6:ef:30:fc:37 oracle@ds1.orademo.org

$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
af:37:ca:69:3c:a0:08:97:cb:9c:0b:b0:20:70:e3:4a oracle@ds1.orademo.org

Still on the first node (ds1), gather the public keys from both nodes into the authorized_keys file:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ ssh oracle@ds2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'ds2 (192.168.200.52)' can't be established.
RSA key fingerprint is d1:23:a7:df:c5:fc:4e:10:d2:83:60:49:25:e8:eb:11.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ds2,192.168.200.52' (RSA) to the list of known hosts.
oracle@ds2's password: 
$ ssh oracle@ds2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@ds2's password:
$ chmod 644 ~/.ssh/authorized_keys
Now repeat the process on the second node (ds2), gathering the keys from ds1:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ ssh oracle@ds1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'ds1 (192.168.200.51)' can't be established.
RSA key fingerprint is bd:0e:39:2a:23:2d:ca:f9:ea:71:f5:3d:d3:dd:3b:65.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ds1,192.168.200.51' (RSA) to the list of known hosts.
Enter passphrase for key '/home/oracle/.ssh/id_rsa':
$ ssh oracle@ds1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Enter passphrase for key '/home/oracle/.ssh/id_rsa':
$ chmod 644 ~/.ssh/authorized_keys
Establish User Equivalence

On the node from which you will run the installer, start ssh-agent and load the keys so that you are not prompted for passphrases:
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa:
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)

Test Connectivity

Verify that user equivalence works by running a remote command; the date should be returned without a password or passphrase prompt:

$ ssh ds2 date
Sun Jun 27 19:07:19 CDT 2004

If you are instead asked to confirm a host key, answer yes and repeat the command until no prompts appear:

The authenticity of host 'ds2 (192.168.200.52)' can't be established.
RSA key fingerprint is 8f:a3:19:76:ca:4f:71:85:42:c2:7a:da:eb:53:76:85.
Are you sure you want to continue connecting (yes/no)? yes
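It is worth testing every host name in both directions before launching the installer, since OUI will connect over each of them. A sketch, run from each node in turn (host names from the examples above):

```shell
# Each command should print the remote date without a password or
# passphrase prompt; answer "yes" to any one-time host key question.
for host in ds1 ds1-priv ds2 ds2-priv; do
    ssh "$host" date
done
```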

Part III: Prepare the Shared Disks

This section describes three methods of preparing the shared disks for use with RAC:

1. Oracle Cluster File System (OCFS)
2. Automatic Storage Management (ASM)
3. Raw Devices

Oracle Cluster File System (OCFS) version 1

OCFS version 1 can store the following file types:

  • Oracle data files
  • Online redo logs
  • Archived redo logs
  • Control files
  • Spfiles
  • CRS shared files (Oracle Cluster Registry and CRS voting disk).

Obtain OCFS

First determine your kernel version; then download the matching OCFS packages from Oracle (oss.oracle.com): the support and tools packages plus the kernel-specific module.
# uname -r
2.4.21-15.0.2.ELsmp

ocfs-support-1.0.10-1.i386.rpm
ocfs-tools-1.0.10-1.i386.rpm
ocfs-2.4.21-EL-smp-1.0.12-1.i686.rpm

Install OCFS

# rpm -Uvh ocfs-support-1.0.10-1.i386.rpm \
ocfs-tools-1.0.10-1.i386.rpm \
ocfs-2.4.21-EL-smp-1.0.12-1.i686.rpm
Preparing...                ########################################### [100%]
   1:ocfs-support           ########################################### [ 33%]
   2:ocfs-tools             ########################################### [ 67%]
   3:ocfs-2.4.21-EL-smp     ########################################### [100%]
Linking OCFS module into the module path [  OK  ]
Configure OCFS

# ocfstool

Run ocfstool on each node to generate the OCFS configuration file, /etc/ocfs.conf.
Load OCFS on each node:

# /sbin/load_ocfs
/sbin/insmod ocfs node_name=ds1.orademo.org ip_address=192.168.100.51 
   cs=1795 guid=2FB60EDD8B872FC4216C00010324C023 comm_voting=1 ip_port=7000
Using /lib/modules/2.4.21-EL-smp-ABI/ocfs/ocfs.o
Warning: kernel-module version mismatch
        /lib/modules/2.4.21-EL-smp-ABI/ocfs/ocfs.o was compiled for kernel version 2.4.21-4.ELsmp
        while this kernel is version 2.4.21-15.0.2.ELsmp
Warning: loading /lib/modules/2.4.21-EL-smp-ABI/ocfs/ocfs.o will taint the kernel: forced load
  See http://www.tux.org/lkml/#export-tainted for information about tainted modules
Module ocfs loaded, with warnings

Create the mount point and format the volume (from one node only):

mkdir /u02
mkfs.ocfs -b 128 -L /u02 -m /u02 -p 0775 \
-u root -g root /dev/sdb1 -F
The Magic Mount

Mount the filesystem on each node:

mount -t ocfs -L /u02 /u02

mount -t ocfs
/dev/sdb1 on /u02 type ocfs (rw)

df /u02
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sdb1             35557856     36064  35521792   1% /u02
To mount the filesystem automatically at boot, add the following line to /etc/fstab on each node:

LABEL=/u02         /u02           ocfs    _netdev         0 0

CRS files
mkdir /u02/oracrs
chown oracle:oinstall /u02/oracrs
chmod 775 /u02/oracrs

Database files
mkdir /u02/oradata
chown oracle:oinstall /u02/oradata
chmod 775 /u02/oradata

Oracle Automatic Storage Management (ASM)

Install ASMLib

ASMLib is available for the following distributions and kernels:

Linux Distro   Supported Kernels
RHES 2.1       2.4.9-e.25 or higher
RHES 3         2.4.21-15 or higher
RHES 4         N/A
SLES 8         2.4.21-138, 2.4.21-190, 2.4.21-198, 2.4.21-215
SLES 9         N/A
# uname -rm
2.4.9-e.27smp i686

  1. Point your Web browser to http://otn.oracle.com/tech/linux/asmlib/index.html.
  2. Select the link for your version of Linux.
  3. Download the oracleasmlib and oracleasm-support packages for your version of Linux.
  4. Download the oracleasm package corresponding to your kernel.

rpm -Uvh oracleasm-kernel_version-asmlib_version.cpu_type.rpm \
oracleasmlib-asmlib_version.cpu_type.rpm \
oracleasm-support-asmlib_version.cpu_type.rpm
# rpm -Uvh \
> oracleasm-2.4.9-e-smp-1.0.0-1.i686.rpm \
> oracleasmlib-1.0.0-1.i386.rpm \
> oracleasm-support-1.0.0-1.i386.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.4.9-e-smp  ########################################### [ 66%]
Linking module oracleasm.o into the module path [  OK  ]
   3:oracleasmlib           ########################################### [100%]
Configuring ASMLib

# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration            [  OK  ]
Creating /dev/oracleasm mount point                        [  OK  ]
Loading module "oracleasm"                                 [  OK  ]
Mounting ASMlib driver filesystem                          [  OK  ]
Scanning system for ASM disks                              [  OK  ]
# /etc/init.d/oracleasm enable
Writing Oracle ASM library driver configuration            [  OK  ]
Scanning system for ASM disks                              [  OK  ]
Configure Disks for ASM

/etc/init.d/oracleasm createdisk DISK_NAME device_name
# /etc/init.d/oracleasm createdisk VOL1 /dev/sdb
Marking disk "/dev/sdb" as an ASM disk                     [  OK  ]
# /etc/init.d/oracleasm createdisk VOL2 /dev/sdc
Marking disk "/dev/sdc" as an ASM disk                     [  OK  ]
.
.
.
# /etc/init.d/oracleasm listdisks
VOL1
VOL2
.
.
.
On the remaining cluster nodes, run scandisks to pick up the disks that were marked on the first node:

# /etc/init.d/oracleasm scandisks
Raw Partitions

Partition  Type      Size (MB)
1          Primary   50
2          Primary   50
3          Primary   200
4          Extended  -
5          Logical   200
6          Logical   200
7          Logical   200
8          Logical   200
9          Logical   200
10         Logical   600
11         Logical   600
12         Logical   600
13         Logical   1200
14         Logical   1200
15         Logical   Free
16         Logical   Free

# fdisk /dev/sdb

The number of cylinders for this disk is set to 4427.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdb: 36.4 GB, 36420075008 bytes
255 heads, 63 sectors/track, 4427 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-4427, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-4427, default 4427): +50m

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (8-4427, default 8):
Using default value 8
Last cylinder or +size or +sizeM or +sizeK (8-4427, default 4427): +50m

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (15-4427, default 15):
Using default value 15
Last cylinder or +size or +sizeM or +sizeK (15-4427, default 4427): +200m

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Selected partition 4
First cylinder (40-4427, default 40):
Using default value 40
Last cylinder or +size or +sizeM or +sizeK (40-4427, default 4427):
Using default value 4427

Command (m for help): n
First cylinder (40-4427, default 40):
Using default value 40
Last cylinder or +size or +sizeM or +sizeK (40-4427, default 4427): +200m

.
.
.

Command (m for help): p

Disk /dev/sdb: 36.4 GB, 36420075008 bytes
255 heads, 63 sectors/track, 4427 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdb1             1         7     56196   83  Linux
/dev/sdb2             8        14     56227+  83  Linux
/dev/sdb3            15        39    200812+  83  Linux
/dev/sdb4            40      4427  35246610    5  Extended
/dev/sdb5            40        64    200781   83  Linux
/dev/sdb6            65        89    200781   83  Linux
/dev/sdb7            90       114    200781   83  Linux
/dev/sdb8           115       139    200781   83  Linux
/dev/sdb9           140       164    200781   83  Linux
/dev/sdb10          165       238    594373+  83  Linux
/dev/sdb11          239       312    594373+  83  Linux
/dev/sdb12          313       386    594373+  83  Linux
/dev/sdb13          387       533   1180746   83  Linux
/dev/sdb14          534       680   1180746   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Inform the kernel of the partition table changes by running partprobe on each node:

# partprobe

Purpose                  Min Size (MB)  Std Size (MB)  Disk Device  Raw Device
Oracle Cluster Registry  100            200            /dev/sdb3    /dev/raw/raw1
Oracle CRS Voting        20             50             /dev/sdb1    /dev/raw/raw2
SYSTEM Tablespace        500            600            /dev/sdb11   /dev/raw/raw3
SYSAUX Tablespace        800            1200           /dev/sdc13   /dev/raw/raw4
UNDOTBS1 Tablespace      500            600            /dev/sdb10   /dev/raw/raw5
UNDOTBS2 Tablespace      500            600            /dev/sdc10   /dev/raw/raw6
EXAMPLE Tablespace       160            200            /dev/sdb5    /dev/raw/raw7
USERS Tablespace         120            200            /dev/sdb6    /dev/raw/raw8
TEMP Tablespace          250            600            /dev/sdc11   /dev/raw/raw9
SPFILE                   5              50             /dev/sdb2    /dev/raw/raw10
Password File            5              50             /dev/sdc1    /dev/raw/raw11
Control File 1           110            200            /dev/sdb7    /dev/raw/raw12
Control File 2           110            200            /dev/sdc7    /dev/raw/raw13
Redo Log 1_1             120            200            /dev/sdb8    /dev/raw/raw14
Redo Log 1_2             120            200            /dev/sdb9    /dev/raw/raw15
Redo Log 2_1             120            200            /dev/sdc8    /dev/raw/raw16
Redo Log 2_2             120            200            /dev/sdc9    /dev/raw/raw17
Total Disk Space:        3,660          5,550

RHEL Releases

On RHEL, bind the partitions to raw devices by adding the following entries to /etc/sysconfig/rawdevices:

/dev/raw/raw1   /dev/sdb3
/dev/raw/raw2   /dev/sdb1
/dev/raw/raw3   /dev/sdb11
/dev/raw/raw4   /dev/sdc13
/dev/raw/raw5   /dev/sdb10
/dev/raw/raw6   /dev/sdc10
/dev/raw/raw7   /dev/sdb5
/dev/raw/raw8   /dev/sdb6
/dev/raw/raw9   /dev/sdc11
/dev/raw/raw10  /dev/sdb2
/dev/raw/raw11  /dev/sdc1
/dev/raw/raw12  /dev/sdb7
/dev/raw/raw13  /dev/sdc7
/dev/raw/raw14  /dev/sdb8
/dev/raw/raw15  /dev/sdb9
/dev/raw/raw16  /dev/sdc8
/dev/raw/raw17  /dev/sdc9

SLES 8 and SLES9

On SLES, add the following bindings to /etc/raw:

raw1:sdb3
raw2:sdb1
raw3:sdb11
raw4:sdc13
raw5:sdb10
raw6:sdc10
raw7:sdb5
raw8:sdb6
raw9:sdc11
raw10:sdb2
raw11:sdc1
raw12:sdb7
raw13:sdc7
raw14:sdb8
raw15:sdb9
raw16:sdc8
raw17:sdc9
Set ownership and permissions on the raw devices as root on each node:

chown root:oinstall /dev/raw/raw[12]
chmod 660 /dev/raw/raw[12]
chown oracle:oinstall /dev/raw/raw[3-9]
chown oracle:oinstall /dev/raw/raw1[0-7]
chmod 660 /dev/raw/raw[3-9]
chmod 660 /dev/raw/raw1[0-7]

Add the oracle user to the disk group so that it can access the underlying block devices:

/usr/sbin/usermod -G dba,disk oracle

Restart the raw device service.

RHEL2/3
/sbin/service rawdevices restart

SLES8/9
/etc/init.d/raw start
chkconfig raw on

Purpose                  Raw Device      Filename
Oracle Cluster Registry  /dev/raw/raw1   /u02/oracrs/ocr.crs
Oracle CRS Voting        /dev/raw/raw2   /u02/oracrs/vote.crs
SYSTEM Tablespace        /dev/raw/raw3   /u02/oradata/gemni/system_01.dbf
SYSAUX Tablespace        /dev/raw/raw4   /u02/oradata/gemni/sysaux_01.dbf
UNDOTBS1 Tablespace      /dev/raw/raw5   /u02/oradata/gemni/undo1_01.dbf
UNDOTBS2 Tablespace      /dev/raw/raw6   /u02/oradata/gemni/undo2_01.dbf
EXAMPLE Tablespace       /dev/raw/raw7   /u02/oradata/gemni/example_01.dbf
USERS Tablespace         /dev/raw/raw8   /u02/oradata/gemni/users_01.dbf
TEMP Tablespace          /dev/raw/raw9   /u02/oradata/gemni/temp_01.dbf
SPFILE                   /dev/raw/raw10  /u01/oradata/gemni/spfilegemni.ora
Password File            /dev/raw/raw11  /u01/oradata/gemni/orapwgemni
Control File 1           /dev/raw/raw12  /u01/oradata/gemni/control.ctl
Control File 2           /dev/raw/raw13  /u02/oradata/gemni/control.ctl
Redo Log 1_1             /dev/raw/raw14  /u01/oradata/gemni/redo1_1.log
Redo Log 1_2             /dev/raw/raw15  /u01/oradata/gemni/redo1_2.log
Redo Log 2_1             /dev/raw/raw16  /u02/oradata/gemni/redo2_1.log
Redo Log 2_2             /dev/raw/raw17  /u02/oradata/gemni/redo2_2.log

CRS files
mkdir -p /u02/oracrs
chown -R oracle:oinstall /u02/oracrs
chmod -R 775 /u02/oracrs

Database files
mkdir -p /u01/oradata/gemni /u02/oradata/gemni
chown -R oracle:oinstall /u0[12]/oradata
chmod -R 775 /u0[12]/oradata
ln -s /dev/raw/raw1 /u02/oracrs/ocr.crs
ln -s /dev/raw/raw2 /u02/oracrs/vote.crs
ln -s /dev/raw/raw3 /u02/oradata/gemni/system_01.dbf
ln -s /dev/raw/raw4 /u02/oradata/gemni/sysaux_01.dbf
ln -s /dev/raw/raw5 /u02/oradata/gemni/undo1_01.dbf
ln -s /dev/raw/raw6 /u02/oradata/gemni/undo2_01.dbf
ln -s /dev/raw/raw7 /u02/oradata/gemni/example_01.dbf
ln -s /dev/raw/raw8 /u02/oradata/gemni/users_01.dbf
ln -s /dev/raw/raw9 /u02/oradata/gemni/temp_01.dbf
ln -s /dev/raw/raw10 /u01/oradata/gemni/spfilegemni.ora 
ln -s /dev/raw/raw11 /u01/oradata/gemni/orapwgemni 
ln -s /dev/raw/raw12 /u01/oradata/gemni/control.ctl
ln -s /dev/raw/raw13 /u02/oradata/gemni/control.ctl
ln -s /dev/raw/raw14 /u01/oradata/gemni/redo1_1.log
ln -s /dev/raw/raw15 /u01/oradata/gemni/redo1_2.log
ln -s /dev/raw/raw16 /u02/oradata/gemni/redo2_1.log
ln -s /dev/raw/raw17 /u02/oradata/gemni/redo2_2.log
Create a raw device mapping file and point DBCA to it via the DBCA_RAW_CONFIG environment variable so that it can locate the raw devices:

cat > $HOME/gemni_raw.conf << EOF
system=/u02/oradata/gemni/system_01.dbf
sysaux=/u02/oradata/gemni/sysaux_01.dbf
example=/u02/oradata/gemni/example_01.dbf
users=/u02/oradata/gemni/users_01.dbf
temp=/u02/oradata/gemni/temp_01.dbf
undotbs1=/u02/oradata/gemni/undo1_01.dbf
undotbs2=/u02/oradata/gemni/undo2_01.dbf
redo1_1=/u01/oradata/gemni/redo1_1.log
redo1_2=/u01/oradata/gemni/redo1_2.log
redo2_1=/u02/oradata/gemni/redo2_1.log
redo2_2=/u02/oradata/gemni/redo2_2.log
control1=/u01/oradata/gemni/control.ctl
control2=/u02/oradata/gemni/control.ctl
spfile=/u01/oradata/gemni/spfilegemni.ora
pwdfile=/u01/oradata/gemni/orapwgemni
EOF
export DBCA_RAW_CONFIG=$HOME/gemni_raw.conf
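Because DBCA and the database reach the raw devices through these symbolic links, it is worth confirming that every link resolves before starting the install (paths from the examples above):

```shell
# List each link with its target; a dangling or missing link stands out
ls -l /u02/oracrs /u0[12]/oradata/gemni
```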

Now you are ready to install Oracle CRS, install the Oracle Database software, and create the Oracle RAC database.


Part IV: Install Oracle Software

Important Note (for RHEL4 only): The Oracle 10g OUI will check the operating system release to verify that it is a supported release. As of Oracle Database 10.1.0.3, the installer does not recognize RHEL4 as a supported release. As a workaround, follow the steps below prior to running runInstaller.

cp /etc/redhat-release /etc/redhat-release.orig
cat > /etc/redhat-release << EOF
Red Hat Enterprise Linux AS release 3 (Taroon)
EOF
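Since /etc/redhat-release is changed only to satisfy the installer's version check, remember to put the original back once all the Oracle installations are complete:

```shell
# Restore the saved release string after the installs finish
cp /etc/redhat-release.orig /etc/redhat-release
```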
Establish User Equivalence

As in Part II, start ssh-agent and load the oracle user's keys so that OUI can copy files between nodes:
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa:
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
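With the keys loaded, confirm that each node can be reached without a password before launching the installer. A minimal sketch (check_equivalence is a hypothetical helper; ds1 and ds2 are the node names used in this guide):

```shell
# check_equivalence NODE...: ssh to each node in batch mode (which fails
# rather than prompting for a password) and report any node for which
# user equivalence is not working.
# (Hypothetical helper for illustration.)
check_equivalence() {
    rc=0
    for node in "$@"; do
        if ssh -o BatchMode=yes "$node" true 2>/dev/null; then
            echo "equivalence OK: $node"
        else
            echo "equivalence FAILED: $node"
            rc=1
        fi
    done
    return $rc
}

# Example:
# check_equivalence ds1 ds2
```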
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.1.0/crs_1

RHEL4 and SLES9 Only
export LD_ASSUME_KERNEL=2.4.21
Install Oracle CRS

For the steps to create the Oracle CRS files on raw devices or on a cluster filesystem, see Part III: Prepare the Shared Disks.

Installing Oracle CRS

While logged in as oracle, run runInstaller from the Oracle Cluster Ready Services (CRS) CD, then step through the installer screens as follows:

  1. Welcome - Click on Next
  2. Specify Inventory Directory and Credentials - The defaults should be correct; make sure that the inventory directory is located under your ORACLE_BASE directory (ex: /u01/app/oracle/oraInventory) and that the operating system group is "oinstall". When prompted, run orainstRoot.sh as root on the installation node (ds1). (Make sure that your host is configured in /etc/hosts, not just in DNS.)
  3. Specify File Locations - Verify the defaults and continue
  4. Language Selection - Verify the default and continue
  5. Cluster Configuration - Enter the cluster name (or accept the default of "crs"); enter the public and private node names for each node in the cluster
  6. Private Interconnect Enforcement - Specify the Interface Type (public, private, or "do not use") for each interface
  7. Oracle Cluster Registry - Specify the OCR Location (ex: /u02/oracrs/ocr.crs)
  8. Voting Disk - Enter voting disk name (ex: /u02/oracrs/vote.crs)
  9. Run orainstRoot.sh as root on the remaining nodes in the cluster
  10. Summary - Click on Install. When prompted, run root.sh in the Oracle CRS home directory (ex: /u01/app/oracle/product/10.1.0/crs_1/root.sh) on each node, one at a time, starting with the installation node. Do not run the scripts simultaneously; wait for one to finish before starting the next.
End of CRS Installation

Verify that the installation succeeded by running olsnodes from the $ORACLE_BASE/product/10.1.0/crs_1/bin directory; for example:

$ cd $ORACLE_BASE/product/10.1.0/crs_1/bin
$ olsnodes
ds1
ds2
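If you script your installations, the same check can be automated. A sketch (verify_cluster_nodes is a hypothetical helper; it assumes olsnodes is on the PATH):

```shell
# verify_cluster_nodes NODE...: succeed only if every expected node name
# appears in the output of olsnodes.
# (Hypothetical helper; assumes olsnodes is on the PATH.)
verify_cluster_nodes() {
    found=$(olsnodes)
    for node in "$@"; do
        if ! printf '%s\n' "$found" | grep -qx "$node"; then
            echo "node $node is missing from the cluster"
            return 1
        fi
    done
    echo "all expected nodes registered: $*"
}

# Example:
# verify_cluster_nodes ds1 ds2
```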

Install Oracle Database Software

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.1.0/db_1

RHEL4 and SLES9 Only
export LD_ASSUME_KERNEL=2.4.21
  1. Run runInstaller from the Oracle Database CD
  2. Welcome - Click on Next
  3. Specify File Locations - Verify the defaults and continue
  4. Specify Hardware Cluster Installation Mode - Select Cluster Installation and select other nodes in the cluster
  5. Select Installation Type - Enterprise Edition
  6. Product-specific Prerequisite Checks - All checks should pass (on SLES9, it is safe to ignore the openmotif-2.1.30-11 warning)
  7. Select Database Configuration - Choose "Do not create a starter database." (We will create the database in a separate step using the Database Configuration Assistant (DBCA).)
  8. Summary - Click on Install

While logged in as root, run root.sh in the Oracle Database home directory (ex: /u01/app/oracle/product/10.1.0/db_1/root.sh) on each node, one at a time, starting with the installation node. Do not run the scripts simultaneously; wait for one to finish before starting the next. On the installation node, root.sh launches the VIP Configuration Assistant (VIPCA):

  1. Welcome - Click on Next
  2. Network Interfaces - Select the interface for the public network only (eth0 in this example). The interface must be identical on all hosts in the cluster. (If it is eth0 on the installation host, it must be eth0 on all other hosts in the cluster.)
  3. Virtual IPs for cluster nodes - Enter the virtual IP alias (host name) and virtual IP address for each host that was configured in your DNS.

  4. Summary - Click on Finish. The VIP Configuration Assistant creates and starts the VIP, GSD, and ONS application resources.

  5. Configuration Results - Review the results and click on Exit. Run root.sh on other nodes, one at a time.
  6. End of Installation
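After the installation completes, you can confirm that the VIP came up on each node with srvctl. A sketch (vip_is_running is a hypothetical helper; it assumes the 10g status line "VIP is running on node: <node>"):

```shell
# vip_is_running NODE: check `srvctl status nodeapps -n NODE` for a
# running VIP. (Hypothetical helper; assumes the 10g output line
# "VIP is running on node: <node>".)
vip_is_running() {
    srvctl status nodeapps -n "$1" | grep -q "VIP is running"
}

# Example:
# vip_is_running ds1 && echo "VIP up on ds1"
```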

Create the Oracle RAC Database

DBCA—Cluster Filesystem
While logged in as oracle, set the environment and then run dbca.

Ex:
$ . oraenv
ORACLE_SID = [oracle] ? *
$ dbca
  1. Welcome - Select "Oracle Real Application Clusters database"
  2. Operations - Create a database
  3. Node Selection - Click on Select All (ds1 and ds2)
  4. Database Templates - General Purpose
  5. Database Identification - Global Database Name: gemni.orademo.org
  6. Management Options - Configure the Database with Enterprise Manager; Use Database Control for Database Management
  7. Database Credentials - Use the Same Password for All Accounts; enter password and again to confirm
  8. Storage Options - Cluster File system
  9. Database File Locations - Use Common Location for All Database Files (/u02/oradata)
  10. Recovery Configuration - Click on Next
  11. Database Content - Sample Schemas
  12. Initialization Parameters - Memory, Typical
  13. Database Storage - Click on Next
  14. Create Options - Select "Create Database"
  15. Summary - Review summary and click on OK
DBCA—Oracle ASM
Ex:
$ . oraenv
ORACLE_SID = [oracle] ? *
$ dbca
  1. Welcome - Select "Oracle Real Application Clusters database"
  2. Operations - Create a database
  3. Node Selection - Click on Select All (ds1 and ds2)
  4. Database Templates - General Purpose
  5. Database Identification - Global Database Name: gemni.orademo.org
  6. Management Options - Configure the Database with Enterprise Manager; Use Database Control for Database Management
  7. Database Credentials - Use the Same Password for All Accounts; enter password and again to confirm
  8. Storage Options - Automatic Storage Management (ASM)
  9. Create ASM Instance - SYS Password and confirm; create initialization parameter file (IFILE)
  10. ASM Disk Groups - Click on Create New; in the Create Disk Group dialog, enter the Disk Group Name (DATA) and select Normal redundancy. Click on Change Disk Discovery Path and enter ORCL:* (this must be entered even if the disks already show up as provisioned; otherwise an error is raised). Select the disks and enter the Failure Group names, then select the newly created disk group.
  11. Database File Locations - Use Oracle-Managed Files
  12. Recovery Configuration - Click on Next
  13. Database Content - Sample Schemas
  14. Initialization Parameters - Memory, Typical
  15. Database Storage - Click on Next
  16. Create Options - Select "Create Database"
  17. Summary - Review summary and click on OK
DBCA—Raw Devices
Ex:
$ . oraenv
ORACLE_SID = [oracle] ? *
$ dbca
  1. Welcome - Select "Oracle Real Application Clusters database"
  2. Operations - Create a database
  3. Node Selection - Click on Select All (ds1 and ds2)
  4. Database Templates - General Purpose
  5. Database Identification - Global Database Name: gemni.orademo.org
  6. Management Options - Configure the Database with Enterprise Manager; Use Database Control for Database Management
  7. Database Credentials - Use the Same Password for All Accounts; enter password and again to confirm
  8. Storage Options - Raw Devices; Specify Raw Devices Mapping File (/home/oracle/gemni_raw.conf)
  9. Recovery Configuration - Click on Next
  10. Database Content - Sample Schemas
  11. Initialization Parameters - Memory, Typical
  12. Database Storage - Click on Next
  13. Create Options - Select "Create Database"
  14. Summary - Review summary and click on OK
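Whichever storage option you chose, once DBCA finishes you can confirm that both instances are up. A sketch (rac_db_is_open is a hypothetical helper; it assumes the 10g srvctl output format "Instance <sid> is running on node <host>"):

```shell
# rac_db_is_open DBNAME: count running instances by grepping the output
# of `srvctl status database -d DBNAME`. (Hypothetical helper; assumes
# the 10g output format "Instance <sid> is running on node <host>".)
rac_db_is_open() {
    srvctl status database -d "$1" | grep -c "is running on node"
}

# Example: for the two-node gemni database created above,
# rac_db_is_open gemni should print 2 when both instances are up.
```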


Conclusion

Now that your database is up and running, you can begin exploring the many new features offered in Oracle Database 10g. A great place to start is Oracle Enterprise Manager, which has been completely re-written with a crisp new Web-based interface. If you're unsure where to begin, the Oracle Database 10g Concepts Guide and the 2-Day DBA Guide will help familiarize you with your new database. OTN also has a number of guides designed to help you get the most out of 10g. One of my favorites is the series by Arup Nanda, Oracle Database 10g: The Top 20 Features for DBAs.


smileyj@tusc.com