| DBA: Linux
Installing Oracle RAC 10g Release 2 on Linux x86
by John Smiley
Learn the basics of installing Oracle RAC 10g Release 2 on Red Hat Enterprise Linux or Novell SUSE Enterprise Linux, from the bare metal up (for evaluation purposes only)
This guide provides a walkthrough of installing an Oracle Database 10g Release 2 RAC database on commodity hardware for the purpose of evaluation. If you are new to Linux and/or Oracle, this guide is for you. It starts with the basics and walks you through an installation of Oracle Database 10g Release 2 RAC from the bare metal up.
This guide will take the approach of offering the easiest paths, with the fewest number of steps, for accomplishing a task. This approach often means making configuration choices that would be inappropriate for anything other than an evaluation. For that reason, this guide is not appropriate for building production-quality environments, nor does it reflect best practices.
The three Linux distributions certified for Oracle 10g Release 2 RAC are:
This guide is divided into four parts: Part I covers the installation of the Linux operating system, Part II covers configuring Linux for Oracle, Part III discusses the essentials of partitioning shared disk, and Part IV covers installation of the Oracle software.
A Release 1 version of this guide is also available.
The illustration below shows the major components of an Oracle RAC 10g Release 2 configuration. Nodes in the cluster are typically separate servers (hosts).
Shared Disk Storage
The private network is typically built with Gigabit Ethernet, but for high-volume environments, many vendors offer proprietary low-latency, high-bandwidth solutions specifically designed for Oracle RAC. Linux also offers a means of bonding multiple physical NICs into a single virtual NIC (not covered here) to provide increased bandwidth and availability.
Configuring the Cluster Hardware
Oracle Cluster Ready Services becomes Oracle Clusterware
Clusterware maintains two files: the Oracle Cluster Registry (OCR) and the Voting Disk. The OCR and the Voting Disk must reside on shared disks as either raw partitions or files in a cluster filesystem. This guide describes creating the OCR and Voting Disks using a cluster filesystem (OCFS2) and walks through the CRS installation.
Oracle RAC Software
Oracle Automatic Storage Management (ASM)
Oracle ASM is not a general-purpose filesystem and can be used only for Oracle data files, redo logs, control files, and the RMAN Flash Recovery Area. Files in ASM can be created and named automatically by the database (by use of the Oracle Managed Files feature) or manually by the DBA. Because the files stored in ASM are not accessible to the operating system, the only way to perform backup and recovery operations on databases that use ASM files is through Recovery Manager (RMAN).
ASM is implemented as a separate Oracle instance that must be up if other databases are to be able to access it. Memory requirements for ASM are light: only 64MB for most systems. In Oracle RAC environments, an ASM instance must be running on each cluster node.
Part I: Installing Linux
Install and Configure Linux as described in the first guide in this series. You will need three IP addresses for each server: one for the private network, one for the public network, and one for the virtual IP address. Use the operating system's network configuration tools to assign the private and public network addresses. Do not assign the virtual IP address using the operating system's network configuration tools; this will be done by the Oracle Virtual IP Configuration Assistant (VIPCA) during Oracle RAC software installation.
Red Hat Enterprise Linux 4 (RHEL4)
Verify kernel version:
# uname -r
Other required package versions (or higher):
Verify installed packages:
# rpm -q binutils compat-db control-center gcc gcc-c++ glibc glibc-common \
SUSE Linux Enterprise Server 9 (SLES9)
Required Package Sets:
Do not install:
Verify kernel version:
# uname -r
Other required package versions (or higher):
Verify installed packages:
# rpm -q gcc gcc-c++ glibc libaio libaio-devel make openmotif-libs
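To spot missing packages quickly, you can filter the rpm -q output, which reports "package NAME is not installed" for anything absent. A small helper sketch (hypothetical, shown with sample input so it runs anywhere):

```shell
# Hypothetical helper: feed it `rpm -q pkg...` output and it prints only
# the names of packages that rpm reported as missing.
missing_pkgs() {
  grep '^package .* is not installed' | awk '{print $2}'
}

# Sample input standing in for real rpm -q output:
printf '%s\n' \
  'gcc-3.4.3-9.EL4' \
  'package libaio is not installed' \
  'package openmotif-libs is not installed' | missing_pkgs
```

On a real system you would pipe the actual verification command into the helper, e.g. `rpm -q gcc gcc-c++ glibc libaio make | missing_pkgs`.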
Part II: Configure Linux for Oracle
Create the Oracle Groups and User Account
# /usr/sbin/groupadd oinstall
The User ID and Group IDs must be the same on all cluster nodes. Using the information from the id oracle command, create the Oracle Groups and User Account on the remaining cluster nodes:
/usr/sbin/groupadd -g 501 oinstall
Ex:
# /usr/sbin/groupadd -g 501 oinstall
Set the password on the oracle account:
# passwd oracle
Create Mount Points
Now create mount points to store the Oracle 10g Release 2 software. This guide will adhere to the Optimal Flexible Architecture (OFA) for the naming conventions used in creating the directory structure. For more information on OFA standards, see Appendix D of the Oracle Database 10g Release 2 Installation Guide.
Issue the following commands as root:
mkdir -p /u01/app/oracle
Ex:
# mkdir -p /u01/app/oracle
Configure Kernel Parameters
Login as root and configure the Linux kernel parameters on each node.
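The cat command that follows appends kernel parameter settings to /etc/sysctl.conf. For reference, the values typically recommended for Oracle Database 10g Release 2 look like the fragment below; treat these as a starting point and confirm against the Oracle installation guide for your exact release and memory size.

```
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
```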
cat >> /etc/sysctl.conf << EOF
On SUSE Linux Enterprise Server 9.0 only:
Set the kernel parameter
disable_cap_mlock = 1
Run the following command after completing the steps above:
/sbin/chkconfig boot.sysctl on
Setting Shell Limits for the oracle User
Oracle recommends setting limits on the number of processes and the number of open files each Linux account may use. To make these changes, cut and paste the following commands as root.
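The limits typically added to /etc/security/limits.conf for the oracle account are similar to the fragment below; the values reflect common 10g Release 2 guidance, so confirm them against the Oracle installation guide for your release.

```
oracle soft nproc  2047
oracle hard nproc  16384
oracle soft nofile 1024
oracle hard nofile 65536
```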
cat >> /etc/security/limits.conf << EOF
For Red Hat Enterprise Linux releases, use the following:
cat >> /etc/profile << EOF
For Novell SUSE releases, use the following:
cat >> /etc/profile.local << EOF
Configure the Hangcheck Timer
All RHEL releases:
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
All SLES releases:
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
Configure /etc/hosts
Some Linux distributions associate the host name with the loopback address (127.0.0.1). If this occurs, remove the host name from the loopback address.
/etc/hosts file used for this walkthrough:
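For reference, a complete /etc/hosts for a hypothetical two-node cluster might look like the fragment below. All host names and addresses here are examples (only ds2's public address matches the one seen later in this walkthrough); substitute your own, and remember that each node needs private, public, and virtual addresses.

```
127.0.0.1       localhost.localdomain localhost
192.168.100.51  ds1-priv.example.com  ds1-priv   # node 1 private interconnect
192.168.200.51  ds1.example.com       ds1        # node 1 public
192.168.200.61  ds1-vip.example.com   ds1-vip    # node 1 virtual IP
192.168.100.52  ds2-priv.example.com  ds2-priv   # node 2 private interconnect
192.168.200.52  ds2.example.com       ds2        # node 2 public
192.168.200.62  ds2-vip.example.com   ds2-vip    # node 2 virtual IP
```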
127.0.0.1 localhost.localdomain localhost
Configure SSH for User Equivalence
During the installation of Oracle RAC 10g Release 2, OUI needs to copy files to and execute programs on the other nodes in the cluster. In order to allow OUI to do that, you must configure SSH to allow user equivalence. Establishing user equivalence with SSH provides a secure means of copying files and executing programs on other nodes in the cluster without requiring password prompts.
The first step is to generate public and private keys for SSH. There are two versions of the SSH protocol; version 1 uses RSA and version 2 uses DSA, so we will create both types of keys to ensure that SSH can use either version. The ssh-keygen program will generate public and private keys of either type depending upon the parameters passed to it.
When you run ssh-keygen, you will be prompted for a location to save the keys. Just press Enter when prompted to accept the default. You will then be prompted for a passphrase. Enter a password that you will remember and then enter it again to confirm. When you have completed the steps below, you will have four files in the ~/.ssh directory: id_rsa, id_rsa.pub, id_dsa, and id_dsa.pub. The id_rsa and id_dsa files are your private keys and must not be shared with anyone. The id_rsa.pub and id_dsa.pub files are your public keys and must be copied to each of the other nodes in the cluster.
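As a sketch of the key-generation step, the commands below write the keys to a scratch directory with an empty passphrase so they can run unattended; on the cluster nodes, accept the default ~/.ssh location and use a real passphrase as described above.

```shell
# Demo only: keys go to a throwaway directory with no passphrase.
KEYDIR=$(mktemp -d)
/usr/bin/ssh-keygen -t rsa -N '' -f "$KEYDIR/id_rsa" -q
# On the 10g-era systems this guide targets, repeat for the DSA key:
#   /usr/bin/ssh-keygen -t dsa -N '' -f "$KEYDIR/id_dsa" -q
# (recent OpenSSH releases have dropped DSA key generation)
ls "$KEYDIR"
```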
From each node, logged in as oracle:
mkdir ~/.ssh
Cut and paste the following line separately:
/usr/bin/ssh-keygen -t dsa
Ex:
$ mkdir ~/.ssh
Now the contents of the public key files id_rsa.pub and id_dsa.pub on each node must be copied to the ~/.ssh/authorized_keys file on every other node. Use ssh to copy the contents of each file to the ~/.ssh/authorized_keys file. Note that the first time you access a remote node with ssh its RSA key will be unknown and you will be prompted to confirm that you wish to connect to the node. SSH will record the RSA key for the remote nodes and will not prompt for this on subsequent connections to that node.
From the first node ONLY, logged in as oracle (copy the local account's keys so that ssh to the local node will work):
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Now copy the keys to the other node so that we can ssh to the remote node without being prompted for a password.
ssh oracle@ds2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
(If you are cutting and pasting these commands, run each of them separately. SSH will prompt for the oracle password each time, and if the commands are pasted together, the later commands will be lost when the first one flushes the input buffer before prompting for the password.)
ssh oracle@ds2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Ex:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Now do the same for the second node. Notice that this time SSH prompts for the passphrase you used when creating the keys rather than the oracle password. This is because the first node (ds1) now knows the public keys for the second node, and SSH is using a different authentication method. (If you didn't enter a passphrase when creating the keys with ssh-keygen, you will not be prompted for one here.)
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Ex:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Establish User Equivalence
Finally, after all of the generating of keys, copying of files, and repeatedly entering passwords and passphrases (isn't security fun?), you're ready to establish user equivalence. When user equivalence is established, you won't be prompted for a password again.
As oracle on the node where the Oracle 10g Release 2 software will be installed (ds1):
exec /usr/bin/ssh-agent $SHELL
Ex:
$ exec /usr/bin/ssh-agent $SHELL
(Note that user equivalence is established for the current session only. If you switch to a different session or log out and back in, you will have to run ssh-agent and ssh-add again to re-establish it.)
If everything is set up correctly, you can now use ssh to log in, execute programs, and copy files on the other cluster nodes without having to enter a password. Verify user equivalence by running a simple command like date on a remote cluster node:
$ ssh ds2 date
It is crucial that you test connectivity in each direction from all servers. That will ensure that messages like the one below do not occur when the OUI attempts to copy files during CRS and database software installation. This message will only occur the first time an operation on a remote node is performed, so by testing the connectivity, you not only ensure that remote operations work properly, you also complete the initial security key exchange.
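One way to enumerate every direction that needs testing is a small loop; the node names here are the hypothetical ds1/ds2 pair used in this walkthrough, so adjust NODES to your cluster.

```shell
# Print the full matrix of connectivity checks to perform.
NODES="ds1 ds2"
for src in $NODES; do
  for dst in $NODES; do
    if [ "$src" = "$dst" ]; then continue; fi
    echo "while logged in on $src, run: ssh $dst date"
  done
done
```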
The authenticity of host 'ds2 (192.168.200.52)' can't be established.
Part III: Prepare the Shared Disks
Both Oracle Clusterware and Oracle RAC require access to disks that are shared by each node in the cluster. The shared disks must be configured using one of the following methods. Note that you cannot use a "standard" filesystem such as ext3 for shared disk volumes since such file systems are not cluster aware.
For RAC database storage:
This guide covers installations using OCFS2 and ASM. If you have a small number of shared disks, you may wish to use OCFS2 for both Oracle Clusterware and the Oracle RAC database files. If you have more than a few shared disks, consider using ASM for Oracle RAC database files for the performance benefits ASM provides. Note that ASM cannot be used to store Oracle Clusterware files since Clusterware must be installed before ASM (ASM depends upon the services of Oracle Clusterware). This guide uses OCFS2 for Oracle Clusterware files.
Partition the Disks
In order to use either OCFS2 or ASM, you must have unused disk partitions available. This section describes how to create the partitions that will be used for OCFS2 and for ASM.
WARNING: Improperly partitioning a disk is one of the surest and fastest ways to wipe out everything on your hard disk. If you are unsure how to proceed, stop and get help, or you will risk losing data.
This example uses /dev/sdb (an empty SCSI disk with no existing partitions) to create a single partition for the entire disk (36 GB).
Now verify the new partition:
Repeat the above steps for each disk to be partitioned. Disk partitioning should be done from one node only. When finished partitioning, run the 'partprobe' command as root on each of the remaining cluster nodes in order to assure that the new partitions are configured.
Oracle Cluster File System (OCFS) Release 2
OCFS2 is a general-purpose cluster file system that can be used to store Oracle Clusterware files, Oracle RAC database files, Oracle software, or any other types of files normally stored on a standard filesystem such as ext3. This is a significant change from OCFS Release 1, which only supported Oracle Clusterware files and Oracle RAC database files.
OCFS2 is available free of charge from Oracle as a set of three RPMs: a kernel module, support tools, and a console. There are different kernel module RPMs for each supported Linux kernel so be sure to get the OCFS2 kernel module for your Linux kernel. OCFS2 kernel modules may be downloaded from http://oss.oracle.com/projects/ocfs2/files/ and the tools and console may be downloaded from http://oss.oracle.com/projects/ocfs2-tools/files/.
To determine the kernel-specific module that you need, use uname -r.
# uname -r
For this example I downloaded:
Install OCFS2 as root on each cluster node
# rpm -ivh ocfs2console-1.0.3-1.i386.rpm \
Configure OCFS2
Run ocfs2console as root:
# ocfs2console
Select Cluster → Configure Nodes
Click on Add and enter the Name and IP Address of each node in the cluster
Once all of the nodes have been added, click on Cluster → Propagate Configuration. This will copy the OCFS2 configuration file to each node in the cluster. You may be prompted for root passwords, as ocfs2console uses ssh to propagate the configuration file. Leave the OCFS2 console by clicking on File → Quit. It is possible to format and mount the OCFS2 partitions using the ocfs2console GUI; however, this guide will use the command-line utilities.
Enable OCFS2 to start at system boot:
As root, execute the following command on each cluster node to allow the OCFS2 cluster stack to load at boot time:
# /etc/init.d/o2cb enable
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting cluster ocfs2: OK
Create a mount point for the OCFS filesystem
As root on each of the cluster nodes, create the mount point directory for the OCFS2 filesystem
Ex:
Create the OCFS2 filesystem on the unused disk partition
The example below creates an OCFS2 filesystem on the unused /dev/sdc1 partition with a volume label of "/u03" (-L /u03), a block size of 4K (-b 4K) and a cluster size of 32K (-C 32K) with 4 node slots (-N 4). See the OCFS2 Users Guide for more information on mkfs.ocfs2 command line options.
# mkfs.ocfs2 -b 4K -C 32K -N 4 -L /u03 /dev/sdc1
Mount the OCFS2 filesystem
Since this filesystem will contain the Oracle Clusterware files and Oracle RAC database files, we must ensure that all I/O to these files uses direct I/O (O_DIRECT). Use the "datavolume" option whenever mounting the OCFS2 filesystem to enable direct I/O. Failure to do this can lead to data loss in the event of system failure.
Ex:
Notice that the mount command uses the filesystem label (-L u03) used during the creation of the filesystem. This is a handy way to refer to the filesystem without having to remember the device name.
To verify that the OCFS2 filesystem is mounted, issue the mount command or run df:
# mount -t ocfs2
The OCFS2 filesystem can now be mounted on the other cluster nodes.
To automatically mount the OCFS2 filesystem at system boot, add a line similar to the one below to /etc/fstab on each cluster node:
LABEL=/u03 /u03 ocfs2 _netdev,datavolume,nointr 0 0
Create the directories for shared files
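A sketch of one possible directory layout for the shared files follows; the directory names (oracrs, oradata) are illustrative only, and a scratch directory stands in for the /u03 mount so the sketch can run unprivileged.

```shell
# On the real cluster, MNT is /u03 and ownership is oracle:oinstall.
MNT=$(mktemp -d)          # on the cluster: MNT=/u03
mkdir -p "$MNT/oracrs"    # Oracle Cluster Registry and Voting Disk
mkdir -p "$MNT/oradata"   # database files, if OCFS2 is used for storage
# As root on the cluster, also run:
#   chown -R oracle:oinstall "$MNT"/oracrs "$MNT"/oradata
ls "$MNT"
```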
CRS files
Automatic Storage Management (ASM)
ASM was a new storage option introduced with Oracle Database 10gR1 that provides the services of a filesystem, logical volume manager, and software RAID in a platform-independent manner. ASM can stripe and mirror your disks, allow disks to be added or removed while the database is under load, and automatically balance I/O to remove "hot spots." It also supports direct and asynchronous I/O and implements the Oracle Data Manager API (simplified I/O system call interface) introduced in Oracle9i.
ASM is not a general-purpose filesystem and can be used only for Oracle data files, redo logs, control files, and flash recovery area. Files in ASM can be created and named automatically by the database (by use of the Oracle Managed Files feature) or manually by the DBA. Because the files stored in ASM are not accessible to the operating system, the only way to perform backup and recovery operations on databases that use ASM files is through Recovery Manager (RMAN).
ASM is implemented as a separate Oracle instance that must be up if other databases are to be able to access it. Memory requirements for ASM are light: only 64 MB for most systems.
On Linux platforms, ASM can use raw devices or devices managed via the ASMLib interface. Oracle recommends ASMLib over raw devices for ease-of-use and performance reasons. ASMLib 2.0 is available for free download from OTN. This section walks through the process of configuring a simple ASM instance by using ASMLib 2.0 and building a database that uses ASM for disk storage.
Determine Which Version of ASMLib You Need
ASMLib 2.0 is delivered as a set of three Linux packages:
First, determine which kernel you are using by logging in as root and running the following command:
The example shows that this is a 2.6.9-22 kernel for an SMP (multiprocessor) box using Intel i686 CPUs.
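The mapping from a kernel version string to the ASMLib driver flavor can be sketched as follows; the KERNEL value is hard-coded to the example from the text (on a real system use `KERNEL=$(uname -r)`), and the package name printed is illustrative only, to be matched against the actual files listed on OTN.

```shell
# Pick the ASMLib kernel-module flavor from the kernel version string.
KERNEL="2.6.9-22.ELsmp"    # sample value; normally: KERNEL=$(uname -r)
case "$KERNEL" in
  *smp)     FLAVOR="smp"     ;;  # multiprocessor kernel
  *hugemem) FLAVOR="hugemem" ;;  # large-memory kernel
  *)        FLAVOR="up"      ;;  # uniprocessor kernel
esac
echo "kernel $KERNEL -> '$FLAVOR' driver (package like oracleasm-$KERNEL-*.rpm)"
```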
Use this information to find the correct ASMLib packages on OTN:
Before using ASMLib, you must run a configuration script to prepare the driver. Run the following command as root, and answer the prompts as shown in the example below. Run this on each node in the cluster.
# /etc/init.d/oracleasm configure
Next you tell the ASM driver which disks you want it to use. Oracle recommends that each disk contain a single partition for the entire disk. See Partition the Disks at the beginning of this section for an example of creating disk partitions.
You mark disks for use by ASMLib by running the following command as root from one of the cluster nodes:
/etc/init.d/oracleasm createdisk DISK_NAME device_name
Tip: Enter the DISK_NAME in UPPERCASE letters.
Verify that ASMLib has marked the disks:
# /etc/init.d/oracleasm listdisks
On all other cluster nodes, run the following command as root to scan for configured ASMLib disks:
# /etc/init.d/oracleasm scandisks
Part IV: Install Oracle Software
Oracle Database 10g Release 2 can be downloaded from OTN. Oracle offers a development and testing license free of charge. However, no support is provided and the license does not permit production use. A full description of the license agreement is available on OTN.
The easiest way to make the Oracle Database 10g Release 2 distribution media available on your server is to download them directly to the server.
Use the graphical login to log in as oracle.
Create a directory to contain the Oracle Database 10g Release 2 distribution:
To download Oracle Database 10g Release 2 from OTN, point your browser (Firefox works well) to http://www.oracle.com/technology/software/products/database/oracle10g/htdocs/10201linuxsoft.html. Fill out the Eligibility Export Restrictions page, and read the OTN License agreement. If you agree with the restrictions and the license agreement, click on I Accept.
Click on the 10201_database_linux32.zip link, and save the file in the directory you created for this purpose. (If you have not already logged in to OTN, you may be prompted to do so at this point.)
Since you will be creating a RAC database, you will also need to download and install Oracle Clusterware Release 2. Click on the 10201_clusterware_linux32.zip link and save the file.
Unzip and extract the files:
Establish User Equivalence and Set Environment Variables
If you have not already done so, log in as oracle and establish user equivalence between nodes:
exec /usr/bin/ssh-agent $SHELL
Set the ORACLE_BASE environment variable:
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
Install Oracle Clusterware
Before installing the Oracle RAC 10g Release 2 database software, you must first install Oracle Clusterware. Oracle Clusterware requires two files to be shared among all of the nodes in the cluster: the Oracle Cluster Registry (100MB) and the Voting Disk (20MB). These files may be stored on raw devices or on a cluster filesystem. (NFS is also supported for certified NAS systems, but that is beyond the scope of this guide.) Oracle ASM may not be used for these files because ASM is dependent upon services provided by Clusterware. This guide will use OCFS2 as a cluster filesystem to store the Oracle Cluster Registry and Voting Disk files.
Start the installation using "runInstaller" from the "clusterware" directory:
Verify that the installation succeeded by running olsnodes from the $ORACLE_BASE/product/crs/bin directory; for example:
$ /u01/app/oracle/product/crs/bin/olsnodes
Once Oracle Clusterware is installed and operating, it's time to install the rest of the Oracle RAC software.
Create the ASM Instance
If you are planning to use OCFS2 for database storage, skip this section and continue with Create the RAC Database. If you plan to use Automatic Storage Management (ASM) for database storage, follow the instructions below to create an ASM instance on each cluster node. Be sure you have installed the ASMLib software as described earlier in this guide before proceeding.
Start the installation using "runInstaller" from the "database" directory:
Create the RAC Database
Start the installation using "runInstaller" from the "database" directory:
Now that your database is up and running, you can begin exploring the many new features offered in Oracle Database 10g Release 2. A great place to start is Oracle Enterprise Manager, which has been completely re-written with a crisp new Web-based interface. If you're unsure where to begin, the Oracle Database Concepts 10g Release 2 and the 2-Day DBA Guide will help familiarize you with your new database. OTN also has a number of guides designed to help you get the most out of Oracle Database 10g Release 2.
John Smiley [ email@example.com] works as a senior database engineer for a major online retailer and is an Oracle Certified Master DBA with over 19 years of experience with Oracle databases running on all major platforms. He specializes in engineering high-volume Oracle databases, advanced performance tuning methods, and RAC, and enjoys developing with PL/SQL, C, and Perl.