
Lab: How to Deploy a Four-Node Oracle RAC 12c Cluster in Minutes 

Using Oracle VM Templates

by Olivier Canonge with contributions from Christophe Pauliat, Simon Coter, Saar Maoz, Doan Nguyen, Ludovic Sorriaux, Cecile Naud, and Robbie De Meyer

This hands-on lab demonstrates how to use Oracle VM Templates to virtualize and deploy complex Oracle Applications in minutes. Step-by-step instructions show how to download and import templates and deploy the applications, using an Oracle RAC template as an example.


Published January 2014


Table of Contents
Lab Objective
Preparation
What You Will Learn: Summary of Lab Steps
Global Architecture
Detailed Instructions for the Lab
   Install Oracle VM Manager in Oracle VM VirtualBox
   Install Oracle VM Server in Oracle VM VirtualBox
   Start Both Servers (Virtual Machines)
   Connect to the Oracle VM Manager Console
   Discover the Oracle VM Server
   Configure the Network
   Create Virtual Network Interface Cards
   Create a Server Pool
   Create a Storage Repository
   Import the Oracle VM Templates
   Clone Four Virtual Machines from the Template
   Create a Shared Disk for the Oracle ASM Configuration
   Use Deploycluster to Start the Virtual Machines as Oracle RAC Nodes
Summary
Appendix A: Oracle Flex ASM and Flex Cluster
See Also
About the Author
Acknowledgments

Lab Objective

This document is an adaptation of Hands-On Lab 9982, which was run during the Oracle OpenWorld 2013 sessions.

Note: During this lab at Oracle OpenWorld, a demo environment built on a single x86 laptop was used. However, you can run this lab at your home or office on an x86 server, x86 desktop, or x86 laptop.

This hands-on lab is for application architects or system administrators who need to deploy and manage Oracle Applications. The objective of this lab is to demonstrate how Oracle VM Templates provide an easy and fast way of deploying Oracle Applications. These templates are designed to build test or production clusters that consist of any number of nodes, but by default, a two-node cluster is created.

The templates include full support for single-instance, Oracle Restart (Single-Instance High Availability [SIHA]), and Oracle Real Application Clusters (Oracle RAC) database deployments for both Oracle Database 11g Release 2 and Oracle Database 12c. The templates also support the Flex Cluster feature of Oracle Database 12c and Oracle Flex Automatic Storage Management (ASM), as well as automation for container/pluggable databases in Oracle Database 12c.

During this lab, you are going to deploy a four-node Flex Cluster (three hubs and one leaf) with a dedicated network for Oracle Flex ASM traffic. For more information about Oracle Flex ASM and Flex Cluster, see Appendix A.

Preparation

To run this lab, you will need the following:

  • An x86 machine with 16 GB of RAM and four CPU cores
  • Any x86 operating system supported by Oracle VM VirtualBox, for example, Microsoft Windows, most Linux distributions, Oracle Solaris for x86 machines, and Apple Mac OS X (Oracle Linux 6 Update 4 was used when the lab was run at Oracle OpenWorld)

Here is a list of actions you should perform before starting the lab:

  1. Check that your OS is 64-bit.
  2. Install the latest Oracle VM VirtualBox version and the Oracle VM VirtualBox Extension Pack on your x86 machine.
  3. Install the latest Java Runtime Environment (JRE) 7 (the javaws binary is needed to get the VNC console). Download the version that is appropriate for the OS on your x86 machine from http://java.com/en/download/manual.jsp.
  4. Download the latest Oracle VM VirtualBox Template for Oracle VM Manager. (You'll need to accept the license agreement first.)
  5. Download the latest Oracle VM VirtualBox Template for Oracle VM Server. (You'll need to accept the license agreement first.)
  6. Download the latest Oracle VM Templates for a single-instance or Oracle RAC deployment of Oracle Database by using the following substeps.

    Note: The following templates are used in the example in this lab:

    [Aug 2013] Single Instance & Oracle RAC 12c Release 1, including Oracle Grid Infrastructure (12.1.0.1.0) & Oracle Linux 6 Update 4 - Available from Oracle Software Delivery Cloud for 64-Bit Linux (Media Pack B74026-01: Oracle VM Templates for Oracle Database Media Pack for x86 (64 bit))

    1. Go to the Oracle VM Templates for Oracle Database web page, click the Oracle Software Delivery Cloud link provided for downloading the templates, and then sign in to the Oracle Software Delivery Cloud.
    2. Once logged in, select the two checkboxes to acknowledge the license and export restriction agreements, and click Continue.
    3. In the Media Pack Search page, ensure Oracle VM Templates is selected in the Select a Product Pack list and x86 64 bit is selected in the Platform list, and then click Go.
    4. On the next page, select Oracle VM Templates for Oracle Database Media Pack for x86 (64 bit), which is part number B74026-01, and then click Continue.
    5. On the next page, download both files: the first (part 1 of 2) contains the OS files, and the second (part 2 of 2) contains the Oracle Database installation files. Each is a .zip file that contains a .tgz or .tbz file (which you will later import using Oracle VM Manager), for example:

      • OVM_OL6U4_X86_64_12101DBRAC_PVM-1of2.tbz
      • OVM_OL6U4_X86_64_12101DBRAC_PVM-2of2.tbz
  7. Download the latest Deploycluster tool. (You'll need to accept the license agreement first.)

What You Will Learn: Summary of Lab Steps

In this lab, you will learn how Oracle VM works and how to execute the following steps:

  1. Install Oracle VM Manager in Oracle VM VirtualBox.
  2. Install Oracle VM Server in Oracle VM VirtualBox.
  3. Start both servers (that is, the Oracle VM VirtualBox virtual machines).
  4. Connect to the Oracle VM Manager console.
  5. Discover the Oracle VM Server.
  6. Create and configure a virtual machine (VM) network.
  7. Create virtual network interface cards (VNICs).
  8. Create a server pool.
  9. Create a storage repository.
  10. Import the Oracle VM templates for Oracle Database and Oracle RAC.
  11. Create four VMs from an Oracle RAC 12c template.
  12. Create an Oracle ASM disk and map it to each VM using the Oracle VM command-line interface (CLI).
  13. Use the Deploycluster tool to start and configure all four Oracle VM virtual machines as Oracle RAC nodes.

Global Architecture

Figure 1 shows all the components used in this lab (Oracle VM VirtualBox and Oracle VM virtual machines) with their names and configuration (memory, IP addresses, networks, and so on).


Figure 1. Lab architecture diagram.

Detailed Instructions for the Lab

Install Oracle VM Manager in Oracle VM VirtualBox

The Oracle VM VirtualBox Template for Oracle VM Manager you downloaded contains the following software components:

  • Oracle Linux 5 update 9 with the Unbreakable Enterprise Kernel (2.6.39)
  • Oracle VM Manager 3.2.4
  • Oracle WebLogic Server 10.3
  • MySQL 5.5

Perform the following steps:

  1. In the Oracle VM VirtualBox console, import the VM from the Oracle VM Manager template:

    1. Select File->Import Appliance.
    2. Select the OracleVMManager.3.2.4-b524.ova file.
    3. Click Next.
    4. Change the name of the Virtual System 1 from Oracle VM Manager 3.2.4-b524 to HOL9982_ovm_mgr.
    5. Click Import.
  2. Modify the settings of the virtual machine HOL9982_ovm_mgr:

    1. Configure the network by going to Network and clicking the Adapter 1 tab. From the Attached to list, select Host-only Adapter.
    2. Leave Amount of Memory as 4096 MB (you need at least 3072 MB).
  3. Start the virtual machine HOL9982_ovm_mgr.
  4. Configure the virtual machine (in the VM console):

    1. Set the root password to ovsroot.
    2. Configure the network:

      • IP address: 192.168.56.3
      • Netmask: 255.255.255.0
      • Gateway: 192.168.56.1
      • DNS server: 192.168.56.1 (we will not use DNS, but we have to give an IP address here)
      • Hostname: ovm-mgr.oow.com
    3. Wait for the boot to complete.

Install Oracle VM Server in Oracle VM VirtualBox

Perform the following steps:

  1. In the Oracle VM VirtualBox console, import the VM from the Oracle VM Server template you downloaded:

    1. Select File->Import Appliance.
    2. Select the OracleVMServer.3.2.4-b525.ova file.
    3. Click Next.
    4. Change the name of the Virtual System 1 from Oracle VM Server 3.2.4-b525 to HOL9982_ovm_srv.
    5. Click Import.
  2. Modify the settings of the virtual machine HOL9982_ovm_srv:

    1. Set Amount of Memory to 10240 MB (System).
    2. Configure the network by going to Network and clicking the Adapter 1 tab. From the Attached to list, select Host-only Adapter.
    3. Configure the next network by going to Network and clicking the Adapter 2 tab. Click Enable and from the Attached to list, select Host-only Network.
    4. Configure the next network by going to Network and clicking the Adapter 3 tab. Click Enable and from the Attached to list, select Host-only Network.
    5. Select the VM named HOL9982_ovm_srv.
    6. Click the gear icon.
    7. Go to Storage, and under SATA Controller, remove the existing disk OracleVMServer3.2.4-b525-disk2.vmdk.
    8. Click the add disk icon to add a new disk.
    9. Click Create new disk, select VMDK, select Dynamically allocated, name the disk OVMRepo, and specify its size as 50 GB. Click Create and then click OK.
  3. Start the virtual machine HOL9982_ovm_srv.
  4. Configure the virtual machine (in the VM console):

    1. Configure the network:

      • IP address: 192.168.56.2
      • Netmask: 255.255.255.0
      • Gateway: 192.168.56.1
      • DNS server: 192.168.56.1 (we will not use DNS, but we have to give an IP address here)
      • Hostname: ovm-srv.oow.com
    2. Wait for the boot to complete.
  5. If your x86 machine runs a UNIX, Linux, or Mac operating system, open a terminal window and connect to the VM using ssh. (You can use PuTTY if your machine runs Microsoft Windows.)

    Note: The password is ovsroot.

    $ ssh root@192.168.56.2
    
  6. Add the following line to the /etc/hosts file:

    192.168.56.3       ovm-mgr.oow.com       ovm-mgr
    
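The /etc/hosts addition can also be scripted idempotently. Here is a minimal sketch; for safety it writes to a scratch copy of the file, whereas on ovm-srv you would point HOSTS at /etc/hosts and run it as root:

```shell
#!/bin/sh
# Sketch: idempotently add the Oracle VM Manager entry to a hosts file.
# This demo writes to a scratch copy; on ovm-srv you would set
# HOSTS=/etc/hosts and run it as root.
HOSTS=./hosts.demo
cp /etc/hosts "$HOSTS" 2>/dev/null || : > "$HOSTS"
# Append the entry only if it is not already present
grep -q 'ovm-mgr.oow.com' "$HOSTS" || \
  printf '192.168.56.3\tovm-mgr.oow.com\tovm-mgr\n' >> "$HOSTS"
grep 'ovm-mgr' "$HOSTS"
```

Running it a second time leaves the file unchanged, so it is safe to re-run after reboots or re-provisioning.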

Start Both Servers (Virtual Machines)

As previously explained, we will use Oracle VM VirtualBox to host the two servers (Oracle VM Server and Oracle VM Manager) on a single x86 machine.

Both VMs should have been started in the previous sections when you installed Oracle VM Server and Oracle VM Manager; if not, please start both VMs now.

Perform the following steps:

  1. Wait for the Oracle Linux screen on the HOL9982_ovm_mgr VM (see Figure 2).
  2. Wait for the Oracle VM Server screen on the HOL9982_ovm_srv VM (see Figure 2).
  3. Open a terminal window and check that you are able to ping both VMs using the following IP addresses:

    • HOL9982_ovm_mgr: 192.168.56.3
    • HOL9982_ovm_srv: 192.168.56.2

    Figure 2. Check that you can ping both VMs.

  4. Once both VMs are started and you have verified that you can ping them, do the following:

    1. Minimize the main Oracle VM VirtualBox window.
    2. Minimize the Oracle VM Manager window.
    3. Minimize the Oracle VM Server window.

All the next steps will be done from your x86 machine's native OS.

Connect to the Oracle VM Manager Console

Perform the following steps:

  1. On your x86 physical machine's desktop, open a Firefox browser and connect to the Oracle VM Manager console using the URL https://192.168.56.3:7002/ovm/console.

    You should get the following login screen:


    Figure 3. Oracle VM Manager login screen.

  2. Log in using the following credentials:

    • Login: admin (default Oracle VM Manager administrator)
    • Password: Welcome1

Discover the Oracle VM Server

Adding Oracle VM Servers to your Oracle VM Manager environment is known as discovering Oracle VM Servers. The first step in setting up your virtualization environment is to discover your Oracle VM Servers.

When an Oracle VM Server is discovered, Oracle VM Manager registers some basic information about it and about any immediate connectivity to a shared SAN, but the server is considered to be in an unconfigured state. Any storage attached to the Oracle VM Server is also discovered. Depending on your hardware and networking configuration, external storage might be automatically detected during discovery of the Oracle VM Servers.

In this lab, our Oracle VM Server does not have any shared storage; it has only local Oracle Cluster File System 2 (OCFS2) storage that is discovered during discovery of the server.

Perform the following steps:

  1. Click the Servers and VMs tab, if it is not already selected.
  2. Click the stoplight icon in the toolbar.
  3. Enter the Oracle VM Agent password (ovsroot) and the IP address (192.168.56.2) for the Oracle VM Server(s) to be discovered.
  4. Click OK.


    Figure 4. Discovering Oracle VM Servers.

    The Oracle VM Servers are discovered and added to the Unassigned Servers folder in the Servers and VMs tab. The name displayed for a discovered Oracle VM Server is the assigned DNS name, not the IP address:


    Figure 5. Discovered servers added to the Unassigned Servers folder.

The next step usually is the discovery of a storage array. However, in this lab, we have only a local OCFS2 disk attached to the Oracle VM Server, and it is discovered during the discovery of the Oracle VM Server. So our next step is to configure the network.

Configure the Network

Oracle VM has a number of network functions: Server Management, Live Migrate, Cluster Heartbeat, Virtual Machine, and Storage. The Server Management, Live Migrate, and Cluster Heartbeat roles were automatically assigned to the management network (192.168.56.0) when you discovered the Oracle VM Server. The Virtual Machine and Storage roles are not automatically created, and you must manually create these. The Storage role is required only for iSCSI-based storage, so for the purposes of the local storage used in this lab, it is not required.

In this lab, you will assign the default management network (PubNet) the Virtual Machine role, and you will then create two new networks:

  • PrivNet with the Virtual Machine role, which will be used for Oracle RAC private network traffic
  • ASMNet with the Virtual Machine role, which will be used for ASM traffic (the Oracle Flex ASM feature of Oracle Database 12c)

Configure the Default Management Network

Perform the following steps:

  1. Click the Networking tab, and then click the Networks subtab.
  2. Select the existing Management Network, 192.168.56.0.
  3. To edit the management network, click the pencil icon and then do the following:

    1. Change the network's name to PubNet.
    2. Add the Virtual Machine role to PubNet and click Next.

    Figure 6. Editing the default management network.

  4. Check that ovm-srv.oow.com is in the Selected Server(s) column and click Next.
  5. Check that ovm-srv.oow.com bond0 is in the Selected Ports column and click Next.
  6. Select None for both VLAN Group and VLAN segment and click Next.
  7. Make no changes to the IP addresses, and then click Finish.

Now we are going to create a new network, PrivNet, which is going to be used for Oracle RAC traffic.

Create the Private Network

Perform the following steps:

  1. Click the Networking tab, and then click the Networks subtab.
  2. Click the plus icon to start the Create Network wizard.
  3. Select Create a network with bonds/ports only and click Next.


    Figure 7. Creating the private network.

  4. Specify the new network's name as PrivNet.
  5. Assign the Virtual Machine role to this new network.


    Figure 8. Specifying the network's name and role.

  6. Click Next.
  7. At the Select Servers step of the wizard, select ovm-srv.oow.com to be included in the new network by moving it to the Selected Server(s) list on the right.


    Figure 9. Selecting servers for the new network.

  8. Click Next.
  9. At the Select Ports step of the wizard, select ovm-srv.oow eth1 and move it to the Selected Ports list on the right.


    Figure 10. Selecting ports for the new network.

  10. Click Next.
  11. At the Configure IP Addresses step of the wizard, set up the network bonding.

    You can use static IP addresses or DHCP, or you can have no IP addresses assigned to the network. In this lab, we do not need to use IP addresses because we are creating a network for use only by virtual machines, so leave Addressing set to None.

  12. Click Finish to create the PrivNet network.


    Figure 11. Finishing the configuration of the private network.

Create the ASM Traffic Network (for Oracle Flex ASM)

To create this network, repeat the steps you performed in the previous section to create the private network, except use the following values instead:

  • In Step 4, specify the new network's name as ASMNet.
  • In Step 9, select ovm-srv.oow eth2 and move it to the Selected Ports list on the right.

Now we are going to create some VNICs (virtual network interface cards).

Create Virtual Network Interface Cards

The VNIC Manager creates VNICs, which are used by VMs as network cards. You create virtual network interfaces by defining a range of MAC addresses to use for each VNIC. Each MAC address corresponds to a single VNIC, which is used by a VM. Before you can create a VM that has the ability to connect to the network, you should generate a set of VNICs. You need to perform this step only when you run out of VNICs, not each time you want to create a VM.

In this lab, 20 VNICs are already present, so you will create 20 additional VNICs.

Perform the following steps:

  1. Click the Networking tab, and then click the Virtual NICs subtab.

    The Create Virtual NICs page is displayed.

  2. Click Auto Fill to get the next available MAC addresses and click Create.


    Figure 12. Creating more VNICs.

The next step will be to create a server pool.

Create a Server Pool

A server pool contains a group of Oracle VM Servers, which as a group perform VM management tasks, such as ensuring high availability (HA), implementing resource and power management policies, and providing access to networks, storage, and repositories.

In this lab, we will create a server pool with a single Oracle VM Server inside.

Perform the following steps:

  1. Click the Servers and VMs tab.
  2. Click the stoplight icon in the toolbar.

    The Create a Server Pool wizard is displayed.

  3. Enter the following server pool information:

    1. For Server Pool Name, enter mypool.
    2. For Virtual IP Address for the Pool, enter 192.168.56.4.
    3. Deselect Clustered Server Pool.

    Figure 13. Creating and configuring a server pool.

  4. Click Next and add the Oracle VM Server to the server pool by moving ovm-srv.oow.com to the Selected Server(s) list on the right.


    Figure 14. Adding the Oracle VM Server to the server pool.

  5. Click Finish and verify that Oracle VM Server ovm-srv.oow.com is now part of server pool mypool.


    Figure 15. Verifying that the Oracle VM Server is part of the server pool.

You will now create a storage repository.

Create a Storage Repository

A storage repository is where Oracle VM resources may reside. Resources include VMs, templates for VM creation, VM assemblies, ISO files (DVD image files), shared virtual disks, and so on.

Perform the following steps:

  1. In the Repositories tab, click the plus icon, and then in the Create a Data Repository wizard, enter OVMRepo for Repository Name and mypool for Server Pool Name.
  2. Select the 50 GB hard disk (SATA_VBOX_HARDDISK).


    Figure 16. Creating a storage repository.

  3. Present repository OVMRepo to the Oracle VM Server by moving ovm-srv.oow.com to the Present to Server(s) list on the right:


    Figure 17. Presenting the repository to Oracle VM Server.

The repository has now been created and presented to the Oracle VM Server ovm-srv.oow.com.

Import the Oracle VM Templates You Downloaded

The Oracle VM templates can be used to build an Oracle Database 12c Release 1 or Oracle Database 11g Release 2 single-instance database or a cluster that has any number of nodes—and includes Oracle Clusterware, Oracle Database, and Oracle Automatic Storage Management (ASM), patched with the latest recommended patches. The environment comes loaded with Swingbench, Oracle Cluster Health Monitor, Oracle OS Watcher, ASMLib RPMs, and other tools.

During the installation process, a single-instance Oracle Database or an Oracle RAC database instance is created on all nodes by default. Any number of nodes or instances can be added or removed from the cluster using a single command.

Perform the following steps:

  1. Log in as root on Oracle VM Manager:

    ssh root@192.168.56.3
    
  2. Change to the /var/www/html directory and create a directory named Files.
  3. Copy the two Oracle Database template files you downloaded earlier (*.tgz or *.tbz) to /var/www/html/Files (you will need FileZilla or some other FTP client).
  4. From the Oracle VM Manager GUI, import the template files by providing both URLs for the same import session. The import process will take several minutes; be patient.


    Figure 18. Importing the template files.

  5. Verify that the template files are present in the repository:


    Figure 19. Verify that the template files are present in the repository.
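The two URLs supplied to the import dialog in step 4 follow mechanically from the web-root path used in step 3. A small sketch that prints them, assuming the manager's Apache document root /var/www/html is served over plain HTTP on the default port:

```shell
#!/bin/sh
# Sketch: build the two template URLs to paste into the Oracle VM Manager
# import dialog. The base assumes the files were copied to
# /var/www/html/Files on the manager (served as http://192.168.56.3/Files).
base=http://192.168.56.3/Files
for f in OVM_OL6U4_X86_64_12101DBRAC_PVM-1of2.tbz \
         OVM_OL6U4_X86_64_12101DBRAC_PVM-2of2.tbz; do
  echo "$base/$f"
done
```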

Now that you have a repository with an Oracle RAC 12c template inside, you are going to create four VMs from this template.

Clone Four Virtual Machines from the Template

The goal of this lab is to configure a four-node Oracle RAC cluster, so you need to create four virtual machines. Before creating those virtual machines, you will edit the template to match the network configuration you created in the "Configure the Network" section.

Perform the following steps:

  1. In the Repositories tab, select the OVM_OL6u4...DBRAC template and click the pencil icon to edit the template.
  2. In the Edit VM Template wizard, select the Networks tab and assign PubNet, PrivNet, and ASMNet to the Selected Ethernet Networks list (keep the order: PubNet first, PrivNet second, and ASMNet third). Any other network present in the Selected Ethernet Networks list can be removed.


    Figure 20. Selecting the networks.

  3. Click OK.
  4. Click the Servers and VMs tab.
  5. Click the monitor plus icon in the toolbar.
  6. From the Create Virtual Machine wizard, do the following to clone four VMs from the template:

    1. Select Clone from an existing VM Template.
    2. Set Clone Count to 4.
    3. Set VM Name to rac.

    Figure 21. Cloning the VMs from the template.

  7. Click Finish.
  8. In the Servers and VMs tab, in the Perspective list, select Virtual Machines; you should see four VMs: rac.0, rac.1, rac.2, and rac.3.


    Figure 22. Verifying the cloned VMs.

You can now create a shared disk for the future Oracle ASM configuration.

Create a Shared Disk for the Oracle ASM Configuration

Oracle ASM is a volume manager and a file system for Oracle Database files, which supports single-instance Oracle Database and Oracle RAC configurations. Oracle ASM is Oracle's recommended storage management solution that provides an alternative to conventional volume managers, file systems, and raw devices.

Oracle ASM uses disk groups to store datafiles; an Oracle ASM disk group is a collection of disks that Oracle ASM manages as a unit. Within a disk group, Oracle ASM exposes a file system interface for Oracle Database files. The content of files that are stored in a disk group is evenly distributed, or striped, to eliminate hot spots and to provide uniform performance across the disks. The performance is comparable to the performance of raw devices.

In this section, we will create only one Oracle ASM disk. In a real-world scenario, you would have more than one Oracle ASM disk. Although those disks could be created using the Oracle VM Manager GUI, the process would be very repetitive. Instead, you are going to use CLI commands to create one Oracle ASM disk and map it to the rac.0, rac.1, and rac.2 VMs. Because the rac.3 VM will be the leaf node, it does not need access to the shared disk.

Oracle VM CLI commands can be scripted, which is a convenient way to repeat commands automatically. The CLI is included in the Oracle VM Manager installation.
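For example, the three vmDiskMapping commands used below can be generated in a loop instead of being typed one by one, then fed to the CLI in a single ssh session. A sketch (the endpoint, port, and credentials are the ones used in step 1 of this section):

```shell
#!/bin/sh
# Sketch: generate the vmDiskMapping commands for the three hub nodes
# (rac.0, rac.1, rac.2) rather than typing them individually.
batch=$(for n in 0 1 2; do
  echo "create vmDiskMapping slot=2 storageDevice=racasm1 name=racasm1 on vm name=rac.$n"
done)
printf '%s\n' "$batch"
# To run the batch against the Oracle VM CLI (not executed here):
#   printf '%s\n' "$batch" | ssh admin@192.168.56.3 -p 10000
```

The same pattern scales to a real deployment with many Oracle ASM disks, which is exactly the repetitive case the CLI is meant to handle.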

Perform the following steps:

  1. If your x86 machine runs a UNIX or Linux OS, open a terminal window and connect to ovm-mgr.oow.com using ssh. (If your machine runs Microsoft Windows, use PuTTY instead, as shown in Figure 23.)

    For example, the following Linux command uses ssh to connect to ovm-mgr.oow.com using IP address 192.168.56.3 and the credentials admin/Welcome1 on port 10000:

    ssh admin@192.168.56.3 -p 10000
    


    Figure 23. Connecting to the VM.

  2. Create the shared disk using the create VirtualDisk command (see Figure 24):

    create VirtualDisk name=racasm1 size=5 sparse=yes shareable=yes on Repository name=OVMRepo
    
  3. Map the shared disk to each VM, as shown in Figure 24:

    create vmDiskMapping slot=2 storageDevice=racasm1 name=racasm1 on vm name=rac.0
    create vmDiskMapping slot=2 storageDevice=racasm1 name=racasm1 on vm name=rac.1
    create vmDiskMapping slot=2 storageDevice=racasm1 name=racasm1 on vm name=rac.2
    


    Figure 24. Mapping the shared disk to the VMs.

  4. Verify that the racasm1 disk is present and assigned to VMs rac.0, rac.1, and rac.2:


    Figure 25. Verifying the shared disk.

You will now be able to start and configure all the VMs using the Deploycluster tool.

Use Deploycluster to Start the Virtual Machines as Oracle RAC Nodes

Oracle VM 3 users can benefit from the Deploycluster tool, which now fully supports single-instance, Oracle Restart (SIHA), and Oracle RAC database deployments. The tool leverages the Oracle VM 3 API so that when given a set of VMs, it quickly boots them up, sends the needed configuration details, and automatically initiates a single-instance or cluster build, without requiring you to log in to Dom0, to any of the involved VMs, or to Oracle VM Manager.

In Oracle RAC deployments, there are two ways to deploy the templates: in a production environment or in a test environment (hence, there are two separate documents).

Production environments may not do the following:

  • Run more than one VM belonging to the same cluster on the same Oracle VM Server (Dom0).
  • Use files in Dom0 to emulate shared disks for the Oracle RAC nodes/VMs.

In this lab, you are going to deploy the template in test mode.

Create a netconfig.ini File for Deployment

We will now copy the Deploycluster tool (DBRACOVM-Deploycluster-tool.zip) to Oracle VM Manager and create a netconfig.ini file, which will summarize all the network information for our four Oracle RAC nodes (for example, public IP addresses, private IP addresses, and so on). This file will be used during the deployment process.

Note: The goal of this lab is to show how to create a four-node cluster using Oracle VM with Flex Cluster and Oracle Flex ASM, not to actually run a four-node cluster. Because of the limited resources we have on the x86 machine, the build for this four-node cluster will not finish. By comparison, a similar deployment on a bare-metal/Oracle VM environment with adequate resources would take around 30 to 40 minutes.

Perform the following steps:

  1. If your x86 machine runs a UNIX or Linux OS, open a terminal window and connect to ovm-mgr.oow.com using ssh. (If your machine runs Microsoft Windows, use PuTTY instead.)

    For example, the following Linux command uses ssh to connect to ovm-mgr.oow.com using the root/ovsroot credentials:

    ssh root@192.168.56.3
    
  2. Create a directory called /SoftOracle and change to this directory:

    mkdir /SoftOracle
    cd /SoftOracle
    
  3. Using an FTP client, copy DBRACOVM-Deploycluster-tool.zip to the /SoftOracle directory and then unzip the file.
  4. Change to the /SoftOracle/deploycluster/utils directory and create the netconfig12cRAC4node.ini file by copying the lines in Table 1 (the explanatory comments are shown inline) and pasting them into the file:

    Table 1. Content for the netconfig12cRAC4node.ini file.
    # Node specific information
    NODE1=rac0                 # Node 1 name
    NODE1IP=192.168.56.10      # Node 1 IP address
    NODE1PRIV=rac0-priv        # Private IP name for RAC
    NODE1PRIVIP=10.10.10.230   # Private IP for RAC
    NODE1VIP=rac0-vip          # Virtual IP name for RAC
    NODE1VIPIP=192.168.56.230  # Virtual IP for RAC
    NODE1ROLE=HUB              # Node role (HUB or LEAF)
    
    NODE2=rac1
    NODE2IP=192.168.56.11
    NODE2PRIV=rac1-priv
    NODE2PRIVIP=10.10.10.231
    NODE2VIP=rac1-vip
    NODE2VIPIP=192.168.56.231
    NODE2ROLE=HUB
    
    NODE3=rac2
    NODE3IP=192.168.56.12
    NODE3PRIV=rac2-priv
    NODE3PRIVIP=10.10.10.232
    NODE3VIP=rac2-vip
    NODE3VIPIP=192.168.56.232
    NODE3ROLE=HUB
    
    NODE4=rac3
    NODE4IP=192.168.56.13
    NODE4PRIV=rac3-priv
    NODE4PRIVIP=10.10.10.233
    #NODE4VIP=rac3-vip
    #NODE4VIPIP=192.168.56.233
    NODE4ROLE=LEAF
    
    # Common data
    PUBADAP=eth0  # Public interface is eth0
    PUBMASK=255.255.255.0
    PUBGW=192.168.56.1
    PRIVADAP=eth1  # Private interface is eth1
    PRIVMASK=255.255.255.0
    RACCLUSTERNAME=oow12c  # Cluster name
    DOMAINNAME=localdomain  # May be blank
    DNSIP=  # Starting from 2013 Templates allows multi value
    # Device used to transfer network information to second node
    # in interview mode
    NETCONFIG_DEV=/dev/xvdc
    # 11gR2 specific data
    SCANNAME=oow12c-scan  # SCAN name
    SCANIP=192.168.56.235  # SCAN IP address
    GNS_ADDRESS=192.168.56.236  # Grid Naming Service IP address
    
    # 12c Flex parameters (uncomment to take effect)
    FLEX_CLUSTER=yes  # Building a Flex Cluster; 'yes' implies Flex ASM as well
    FLEX_ASM=yes  # Flex ASM requires a dedicated network
    ASMADAP=eth2  # Must be different than private/public
    ASMMASK=255.255.255.0
    NODE1ASMIP=10.11.0.230
    NODE2ASMIP=10.11.0.231
    NODE3ASMIP=10.11.0.232
    NODE4ASMIP=10.11.0.233
    
    # Single Instance (description in params.ini)
    # CLONE_SINGLEINSTANCE=yes  # Setup Single Instance
    #CLONE_SINGLEINSTANCE_HA=yes  # Setup Single Instance/HA (Oracle Restart)
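Once the file is saved, a quick sanity check of the node-role lines helps catch copy-and-paste errors before deployment; this lab expects three HUB nodes and one LEAF node. A sketch (the helper function and scratch path are illustrative, not part of the Deploycluster tool):

```shell
#!/bin/sh
# Sketch: count HUB and LEAF roles in a netconfig file; for this lab's
# netconfig12cRAC4node.ini the result should be three hubs and one leaf.
count_roles() {
  echo "hubs=$(grep -c 'ROLE=HUB' "$1") leaves=$(grep -c 'ROLE=LEAF' "$1")"
}
# Demonstrated here on just the role lines from Table 1:
cat > /tmp/netconfig.check <<'EOF'
NODE1ROLE=HUB
NODE2ROLE=HUB
NODE3ROLE=HUB
NODE4ROLE=LEAF
EOF
count_roles /tmp/netconfig.check
```

Running count_roles against the real netconfig12cRAC4node.ini should print hubs=3 leaves=1 for this lab's topology.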

Create a params12c.ini File for Deployment

Now we will create a params12c.ini file, which will be sent to all VMs, allowing more control of cluster installation options, such as the Oracle ASM redundancy level, database name, Oracle system identifier (SID), ports, and so on. If this file is supplied, it overwrites the params.ini file inside the guests at the path /u01/racovm/params.ini; therefore, any settings such as disk names or any specified discovery string should match the settings that the VMs are configured with.

Perform the following steps:

  1. Change to the /SoftOracle/deploycluster/utils directory.
  2. Create a params12c.ini file by copying all the lines from Table 2 and pasting them into the file:

    Table 2. Content for the params12c.ini file.
    #
    #/* Copyright 2013,  Oracle. All rights reserved. */
    #
    #
    # WRITTEN BY: Oracle.
    #  v1.0: Jul-2013 Creation
    #
    #
    # Oracle DB/RAC 12c OneCommand for Oracle VM - Generic configuration file
    # For Single Instance, Single Instance HA (Oracle Restart) and Oracle RAC
    #
    #############################################
    #
    # Generic Parameters
    #
    # NOTE: The first section holds more advanced parameters that
    #       should be modified by advanced users or if instructed by Oracle.
    #
    # See further down this file for the basic user modifiable parameters.
    #
    ##############################################
    #
    # Temp directory (for OUI), optional
    # Default: /tmp
    TMPDIR="/tmp"
    #
    # Progress logfile location
    # Default: $TMPDIR/progress-racovm.out
    LOGFILE="$TMPDIR/progress-racovm.out"
    #
    # Must begin with a "+", see "man 1 date" for valid date formats, optional.
    # Default: "+%Y-%m-%d %T"
    LOGFILE_DATE_FORMAT=""
    #
    # Should 'clone.pl' be used (default no) or direct 'attach home' (default yes)
    # to activate the Grid and RAC homes.
    # Attach is possible in the VM since all relinking was done already
    # Certain changes may still trigger a clone/relink operation such as switching
    # from role to non-role separation.
    # Default: yes
    CLONE_ATTACH_DBHOME=yes
    CLONE_ATTACH_GIHOME=yes
    #
    # Should a re-link be done on the Grid and RAC homes. Default is no,
    # since the software was relinked in VM already. Setting it to yes
    # forces a relink on both homes, and overrides the clone/attach option
    # above by forcing clone operation (clone.pl)
    # Default: no
    CLONE_RELINK=no
    #
    # Should a re-link be done on the Grid and RAC homes in case of a major
    # OS change; Default is yes.  In case the homes are attached to a different
    # major OS than they were linked against, a relink will be automatically
    # performed.  For example, if the homes were linked on OL5 and then used
    # with an OL6 OS, or vice versa, a relink will be performed. To disable
    # this automated relinking during install (cloning step), set this
    # value to no (not recommended)
    # Default: yes
    CLONE_RELINK_ON_MAJOR_OS_CHANGE=yes
    #
    # The root of the oracle install must be an absolute path starting with a /
    # Default: /u01/app
    RACROOT="/u01/app"
    #
    # The location of the Oracle Inventory
    # Default: $RACROOT/oraInventory
    RACINVENTORYLOC="${RACROOT}/oraInventory"
    #
    # The location of the SOFTWARE base
    # In role separated configuration GIBASE may be defined to set the location
    # of the Grid home which defaults to $RACROOT/$GRIDOWNER.
    # Default: $RACROOT/$RACOWNER
    RACBASE="${RACROOT}/oracle"
    #
    # The location of the Grid home, must be set in RAC or Single Instance HA deployments
    # Default: $RACROOT/12.1.0/grid
    GIHOME="${RACROOT}/12.1.0/grid"
    #
    # The location of the DB RAC home, must be set in non-Clusterware only deployments
    # Default: ${RACBASE}/product/12.1.0/dbhome_1
    DBHOME="${RACBASE}/product/12.1.0/dbhome_1"
    #
    # The disk string used to discover ASM disks, it should cover all disks
    # on all nodes, even if their physical names differ. It can also hold
    # ASMLib syntax, e.g. ORCL:VOL*, and have as many elements as needed
    # separated by space, tab or comma.
    # Do not remove the "set -/+o noglob" options below, they are required
    # so that the discovery string doesn't expand on assignment.
    set -o noglob
    RACASMDISKSTRING="/dev/xvdc1"
    set +o noglob
    #
    # Provide list of devices or actual partitions to use. If actual
    # partition number is specified no partitioning will be done, otherwise specify
    # top level device name and the disk will automatically be partitioned with
    # one partition using 'parted'. For example, if /dev/xvdh4 is listed
    # below it will be used as is, if it does not exist an error will be raised.
    # However, if /dev/xvdh is listed it will be automatically partitioned
    # and /dev/xvdh1 will be used.
    # Minimum of 5 devices or partitions are recommended (see ASM_MIN_DISKS).
    ALLDISKS="/dev/xvdc"
    #
    # Provide list of ASMLib disks to use.  Can be either "diskname" or
    # "ORCL:diskname".  They must be manually configured in ASMLib by
    # mapping them to correct block device (this part is not yet automated).
    # If you include any disks here they should also be included
    # in RACASMDISKSTRING setting above (discovery string).
    ALLDISKS_ASMLIB=""
    #
    # By default 5 disks for ASM are recommended to provide higher redundancy
    # for OCR/Voting files. If for some reason you want to use less
    # disks, then uncomment ASM_MIN_DISKS below and set to the new minimum.
    # Make needed adjustments in ALLDISKS and/or ALLDISKS_ASMLIB above.
    # Default: 5
    ASM_MIN_DISKS=1
    #
    # By default, whole disks specified in ALLDISKS will be partitioned with
    # one partition. If you prefer not to partition and use whole disk, set
    # PARTITION_WHOLE_DISKS to no. Keep in mind that if at a later time
    # someone will repartition the disk, data may be lost. Probably better
    # to leave it as "yes" and signal it's used by having a partition created.
    # Default: yes
    PARTITION_WHOLE_DISKS=yes
    #
    # By default, disk *names* are assumed to exist with same name on all nodes, i.e
    # all nodes will have /dev/xvdc, /dev/xvdd, etc.  It doesn't mean that the *ordering*
    # is also identical, i.e. xvdc can really be xvdd on the other node.
    # If such persistent naming (not ordering) is not the case, i.e node1 has
    # xvdc,xvdd but node2 calls them: xvdn,xvdm then PERSISTENT_DISKNAMES should be
    # set to NO.  In the case where disks are named differently on each node, a
    # stamping operation should take place (writing to second sector on disk)
    # to verify if all nodes see all disks.
    # Stamping only happens on the node the build is running from, and backup
    # is taken to $TMPDIR/StampDisk-backup-diskname.dd. Remote nodes read the stamped
    # data and if all disks are discovered on all nodes the disk configuration continues.
    # Default: yes
    PERSISTENT_DISKNAMES=yes
    #
    # This parameter decides whether disk stamping takes place or not to discover and verify
    # that all nodes see all disks.  Stamping is the only way to know 100% that the disks
    # are actually the same ones on all nodes before installation begins.
    # The master node writes a unique uuid to each disk on the second sector of the disk,
    # then remote nodes read and discover all disks.
    # If you prefer not to stamp the disks, set DISCOVER_VERIFY_REMOTE_DISKS_BY_STAMPING to
    # no. However, in that case, PERSISTENT_DISKNAMES must be set to "yes", otherwise, with
    # both parameters set to "no" there is no way to calculate the remote disk names.
    # The default for stamping is "yes" since in Virtual machine environments, scsi_id(8)
    # doesn't return data for disks.
    # Default: yes
    DISCOVER_VERIFY_REMOTE_DISKS_BY_STAMPING=yes
    #
    # Permissions and ownership files, EL4 uses PERMISSIONFILE, EL5 uses UDEVFILE
    UDEVFILE="/etc/udev/rules.d/99-oracle.rules"
    PERMISSIONFILE="/etc/udev/permissions.d/10-oracle.permissions"
    #
    # Disk permissions to be set on ASM disks; use if you want to override the default below
    # Default: "660" (owner+group: read+write)
    #  It may be possible in Non-role separation to use "640" (owner: read+write, group: read)
    #  however, that is not recommended since if a new database OS user
    #  is added at a later time in the future, it will not be able to write to the disks.
    #DISKPERMISSIONS="660"
    #
    # ASM's minimum allocation unit (au_size) for objects/files/segments/extents of the first
    # diskgroup, in some cases increasing to higher values may help performance (at the
    # potential of a bit of space wasting). Legal values are 1,2,4,8,16,32 and 64 MB.
    # Not recommended to go over 8MB. Currently if initial diskgroup holds OCR/Voting then it's
    # maximum possible au_size is 16MB. Do not change unless you understand the topic.
    # Most releases default to 1MB (Exadata's default: 4MB)
    #RACASM_AU_SIZE=1
    #
    # Should we align the ASM disks to a 1MB boundary.
    # Default: yes
    ALIGN_PARTITIONS=yes
    #
    # Should partitioned disks use the GPT partition table
    # which supports devices larger than 2TB.
    # Default: msdos
    PARTITION_TABLE_GPT=no
    #
    # These are internal functions that check if a disk/partition is held
    # by any component. They are run in parallel on all nodes, but in sequence
    # within a node. Do not modify these unless explicitly instructed to by Oracle.
    HELDBY_FUNCTIONS=(HeldByRaid HeldByAsmlib HeldByPowerpath HeldByDeviceMapper HeldByUser HeldByFilesystem HeldBySwap)
    #
    ##### STORAGE: Filesystem: DB/RAC: (shared) filesystem
    #
    # NOTE1: To not configure ASM unset RACASMGROUPNAME
    # NOTE2: Not all operations/verification take place in a
    #        FS configuration.
    #  For example:
    #   - The mount points are not automatically created/mounted
    #   - Best effort verification is done that the correct
    #     mount options are used.
    #
    # The filesystem directory to hold Database files (control, logfile, etc.)
    # For RAC it must be a shared location (NFS, OCFS or in 12c ACFS),
    # otherwise it may be a local filesystem (e.g. ext4).
    # For NFS make sure mount options are correct as per docs
    # such as Note:359515.1
    # Default: None (Single Instance: $RACBASE/oradata)
    #FS_DATAFILE_LOCATION=/nfs/160
    #
    # Should the database be created in the FS location mentioned above.
    # If value is unset or set to no, the database is created in ASM.
    # Default: no (Single Instance: yes)
    #DATABASE_ON_FS=no
    #
    # Should the above directory be cleared from Clusterware and Database
    # files during a 'clean' or 'cleanlocal' operation.
    # Default: no
    #CLONE_CLEAN_FS_LOCATIONS=no
    #
    # Names of OCR/VOTE disks, could be in above FS Datafile location
    # or a different properly mounted (shared) filesystem location
    # Default: None
    #CLONE_OCR_DISKS=/nfs/160/ocr1,/nfs/160/ocr2,/nfs/160/ocr3
    #CLONE_VOTING_DISKS=/nfs/160/vote1,/nfs/160/vote2,/nfs/160/vote3
    #
    # Location of OCR/VOTE disks. Value of "yes" means inside ASM
    # whereas any other value means the OCR/Voting reside in CFS
    # (above locations must be supplied)
    # Default: yes
    #CLONE_OCRVOTE_IN_ASM=yes
    #
    # Should addnodes operation COPY the entire Oracle Homes to newly added
    # nodes. By default no copy is done to speed up the process, however
    # if existing cluster members have changed (patches applied) compared
    # to the newly created nodes (using the template), then a copy
    # of the Oracle Homes might be desired so that the newly added node will
    # get all the latest modifications from the current members.
    # Default: no
    CLONE_ADDNODES_COPY=no
    #
    # Should an add node operation fully clean the new node before adding
    # it to the cluster. Setting to yes means that any lingering running
    # Oracle processes on the new node are killed before the add node is
    # started as well as all logs/traces are cleared from that node.
    # Default: no
    CLONE_CLEAN_ON_ADDNODES=no
    #
    # Should a remove node operation fully clean the removed node after removing
    # it from the cluster. Setting to yes means that any lingering running
    # Oracle processes on the removed node are killed after the remove node is
    # completed as well as all logs/traces are cleared from that node.
    # Default: no
    CLONE_CLEAN_ON_REMNODES=no
    #
    # Should 'cleanlocal' request prompt for confirmation if processes are running
    # Note that a global 'clean' will fail if this is set to 'yes' and processes are running
    # this is a designed safeguard to protect environment from accidental removal.
    # Default: yes
    CLONE_CLEAN_CONFIRM_WHEN_RUNNING=yes
    #
    # Should the recommended oracle-validated or oracle-rdbms-server-*-preinstall
    # be checked for existence and dependencies during check step. If any missing
    # rpms are found user will need to use up2date or other methods to resolve dependencies
    # The RPM may be obtained from Unbreakable Linux Network or http://oss.oracle.com
    # Default: yes
    CLONE_ORACLE_PREREQ_RPM_REQD=yes
    #
    # Should the "verify" actions of the above RPM be run during buildcluster.
    # These adjust kernel parameters. In the VM everything is pre-configured hence
    # default is not to run.
    # Default: no
    CLONE_ORACLE_PREREQ_RPM_RUN=no
    #
    # By default after clusterware installation CVU (Cluster Verification Utility)
    # is executed to make sure all is well. Setting to 'yes' will skip this step.
    # Set CLONE_SKIP_CVU_POSTHAS for SIHA (Oracle Restart) environments
    # Default: no
    #CLONE_SKIP_CVU_POSTCRS=no
    #
    # Allows to skip minimum disk space checks on the
    # Oracle Homes (recommended not to skip)
    # Default: no
    CLONE_SKIP_DISKSPACE_CHECKS=no
    #
    # Allows to skip minimum memory checks (recommended not to skip)
    # Default: no
    CLONE_SKIP_MEMORYCHECKS=yes
    #
    # On systems with extreme memory limitations, e.g. VirtualBox, it may be needed
    # to disable some Clusterware components to release some memory. Workload
    # Management, Cluster Health Monitor and Cluster Verification Utility are
    # disabled if this option is set to yes.
    # This is only supported for production usage with Clusterware only installation.
    # Default: no
    CLONE_LOW_MEMORY_CONFIG=yes
    #
    # By default on systems with less than 4GB of RAM the /dev/shm will
    # automatically resize to fit the specified configuration (ASM, DB).
    # This is done because the default of 50% of RAM may not be enough. To
    # disable this functionality set CLONE_TMPFS_SHM_RESIZE_NEVER=yes.
    # Default: no
    CLONE_TMPFS_SHM_RESIZE_NEVER=no
    #
    # To disable the modification of /etc/fstab with the calculated size of
    # /dev/shm, set CLONE_TMPFS_SHM_RESIZE_MODIFY_FSTAB=no. This may mean that
    # some instances may not properly start following a system reboot.
    # Default: yes
    CLONE_TMPFS_SHM_RESIZE_MODIFY_FSTAB=yes
    #
    # Configures the Cluster Management DB (aka Cluster Health Monitor or CHM/OS)
    # Default: no
    CLONE_GRID_MANAGEMENT_DB=no
    #
    # Setting CLONE_CLUSTERWARE_ONLY to yes allows Clusterware only installation
    # any operation to create a database or reference the DB home are ignored.
    # Default: no
    #CLONE_CLUSTERWARE_ONLY=no
    #
    # As described in the 11.2.0.2 README as well as Note:1212703.1 multicasting
    # is required to run Oracle RAC starting with 11.2.0.2. If this check fails
    # review the note, and remove any firewall rules from Dom0, or re-configure
    # the switch servicing the private network to allow multicasting from all
    # nodes to all nodes.
    # Default: yes
    CLONE_MULTICAST_CHECK=yes
    #
    # Should a multicast check failure cause the build to stop. It's possible to
    # perform the multicast check, but not stop on failures.
    # Default: yes
    CLONE_MULTICAST_STOP_ON_FAILURE=yes
    #
    # List of multicast addresses to check. By default 11.2.0.2 supports
    # only 230.0.1.0, however with fix for bug 9974223 or bundle 1 and higher
    # the software also supports multicast address 224.0.0.251. If future
    # software releases will support more addresses, modify this list as needed.
    # Default: "230.0.1.0 224.0.0.251"
    CLONE_MULTICAST_ADDRESSLIST="230.0.1.0 224.0.0.251"
    #
    # The text specified in the NETCONFIG_RESOLVCONF_OPTIONS variable is written to
    # the "options" field in the /etc/resolv.conf file during initial network setup.
    # This variable can be set here in params.ini, or in netconfig.ini having the same
    # effect. It should be a space separated options as described in "man 5 resolv.conf"
    # under the "options" heading. Some useful options are:
    # "single-request-reopen attempts:x timeout:x"  x being a digit value.
    # The 'single-request-reopen' option may be helpful in some environments if
    # in-bound ssh slowness occur.
    # Note that minimal validation takes place to verify the options are correct.
    # Default: ""
    #NETCONFIG_RESOLVCONF_OPTIONS=""
    #
    ##################################################
    #
    # The second section below holds basic parameters
    #
    ##################################################
    #
    # Configures a Single Instance environment, including a database as
    # specified in BUILD_SI_DATABASE. In this mode, no Clusterware or ASM will be
    # configured, hence all related parameters (e.g. ALLDISKS) are not relevant.
    # The database must reside on a filesystem.
    # This parameter may be placed in netconfig.ini for simpler deployment.
    # Default: no
    #CLONE_SINGLEINSTANCE=no
    #
    # Configures a Single Instance/HA environment, aka Oracle Restart, including
    # a database as specified in BUILD_SI_DATABASE. The database may reside in
    # ASM (if RACASMGROUPNAME is defined), or on a filesystem.
    # This parameter may be placed in netconfig.ini for simpler deployment.
    # Default: no
    #CLONE_SINGLEINSTANCE_HA=no
    #
    # OS USERS AND GROUPS FOR ORACLE SOFTWARE
    #
    # SYNTAX for user/group are either (VAR denotes the variable names below):
    #   VAR=username:uid   OR:  VAR=username
    #                           VARID=uid
    #   VAR=groupname:gid  OR:  VAR=groupname
    #                           VARID=gid
    #
    #   If uid/gid are omitted no checks are made nor users created if need be.
    #   If uid/gid are supplied they should be numeric and not clash
    #   with existing uid/gids defined on the system already.
    #   NOTE: In RAC usernames and uid/gid must match on all cluster nodes,
    #         the verification process enforces that only if uid/gid's
    #         are given below.
    #
    # If incorrect configuration is detected, changes to users and groups are made to
    # correct them. If this is set to "no" then errors are reported
    # without an attempt to fix them.
    # (Users/groups are never dropped, only added or modified.)
    # Default: yes
    CREATE_MODIFY_USERS_GROUPS=yes
    #
    # NON-ROLE SEPARATED:
    #    No Grid user is defined and all roles are set to 'dba'
    RACOWNER=oracle:1101
    OINSTALLGROUP=oinstall:1000
    GIOSASM=dba:1031
    GIOSDBA=dba:1031
    #GIOSOPER=   # optional in 12c
    DBOSDBA=dba:1031
    #DBOSOPER=   # optional in 12c
    #
    # ROLE SEPARATION: (uncomment lines below)
    #    See Note:1092213.1
    #    (Numeric changes made to uid/gid to reduce the footprint and possible clashes
    #     with existing users/groups)
    #
    ##GRIDOWNER=grid:1100
    ##RACOWNER=oracle:1101
    ##OINSTALLGROUP=oinstall:1000
    ##GIOSASM=asmadmin:1020
    ##GIOSDBA=asmdba:1021
    ##GIOSOPER=   # optional in 12c
    ##DBOSDBA=dba:1031
    ##DBOSOPER=   # optional in 12c
    ## New in 12c are these 3 roles, if unset, they default to "DBOSDBA"
    ##DBOSBACKUPDBA=dba:1031
    ##DBOSDGDBA=dba:1031
    ##DBOSKMDBA=dba:1031
    #
    # The name for the Grid home in the inventory
    # Default: OraGrid12c
    #GIHOMENAME="OraGrid12c"
    #
    # The name for the DB/RAC home in the inventory
    # Default: OraRAC12c (Single Instance: OraDB12c)
    #DBHOMENAME="OraRAC12c"
    #
    # The name of the ASM diskgroup, default "DATA"
    # If unset ASM will not be configured (see filesystem section above)
    # Default: DATA
    RACASMGROUPNAME="DATA"
    #
    # The ASM Redundancy for the diskgroup above
    # Valid values are EXTERNAL, NORMAL or HIGH
    # Default: NORMAL (if unset)
    RACASMREDUNDANCY="EXTERNAL"
    #
    # Allows running the Clusterware with a different timezone than the system's timezone.
    # If CLONE_CLUSTERWARE_TIMEZONE is not set, the Clusterware Timezone will
    # be set to the system's timezone of the node running the build.  System timezone is
    # defined in /etc/sysconfig/clock (ZONE variable), if not defined or file missing
    # comparison of /etc/localtime file is made against the system's timezone database in
    # /usr/share/zoneinfo, if no match or /etc/localtime is missing GMT is used. If you
    # want to override the above logic, simply set CLONE_CLUSTERWARE_TIMEZONE to desired
    # timezone. Note that a complete timezone is needed, e.g. "PST" or "EDT" is not enough
    # needs to be full timezone spec, e.g. "PST8PDT" or "America/New_York".
    # This variable is only honored in 11.2.0.2 or above
    # Default: OS
    #CLONE_CLUSTERWARE_TIMEZONE="America/Los_Angeles"
    #
    # Create an ACFS volume?
    # Default: no
    ACFS_CREATE_FILESYSTEM=no
    #
    # If ACFS volume is to be created, this is the mount point.
    # It will automatically get created on all nodes.
    # Default: /myacfs
    ACFS_MOUNTPOINT="/myacfs"
    #
    # Name of ACFS volume to optionally create.
    # Default: MYACFS
    ACFS_VOLNAME="MYACFS"
    #
    # Size of ACFS volume in GigaBytes.
    # Default: 3
    ACFS_VOLSIZE_GB="3"
    #
    # NOTE: In the OVM3 enhanced RAC Templates when using deploycluster
    # tool (outside of the VMs). The correct and secure way to transfer/set the
    # passwords is to remove them from this file and use the -P (--params)
    # flag to transfer this params.ini during deploy operation, in which
    # case the passwords will be prompted, and sent to all VMs in a secure way.
    # The password that will be set for the ASM and RAC databases
    # as well as EM DB Console and the oracle OS user.
    # If not defined here they will be prompted for (only once)
    # at the start of the build. Required to be set here or environment
    # for silent mode.
    # Use single quote to prevent shell parsing of special characters.
    RACPASSWORD='oracle'
    GRIDPASSWORD='oracle'
    #
    # Password for 'root' user. If not defined here it will be prompted
    # for (only once) at the start of the build.
    # Assumed to be same on both nodes and required to be set here or
    # environment for silent mode.
    # Use single quote to prevent shell parsing of special characters.
    ROOTUSERPASSWORD='ovsroot'
    #
    # Build Database? The BUILD_RAC_DATABASE will build a RAC database and
    # BUILD_SI_DATABASE a single instance database (also in a RAC environment)
    # Default: yes
    BUILD_RAC_DATABASE=yes
    #BUILD_SI_DATABASE=yes
    #
    # Allows for database and listener to be started automatically at next
    # system boot. This option is only applicable in Single Instance mode.
    # In Single Instance/HA or RAC mode, the Clusterware starts up all
    # resources (listener, ASM, databases).
    # Default: yes
    CLONE_SI_DATABASE_AUTOSTART=yes
    #
    # Comma separated list of name value pairs for database initialization parameters
    # Use with care, no validation takes place.
    # For example: "sort_area_size=99999,control_file_record_keep_time=99"
    # Default: none
    #DBCA_INITORA_PARAMETERS=""
    #
    # Create a 12c Container Database allowing pluggable databases to be added
    # using options below, or at a later time.
    # Default: no
    DBCA_CONTAINER_DB=no
    #
    # Pluggable Database name. In 'createdb' operation a number is appended at the end
    # based on count (below). In 'deletepdb' exact name must be specified here or in
    # an environment variable.
    # Default: mypdb
    DBCA_PLUGGABLE_DB_NAME=mypdb
    #
    # Number of Pluggable Databases to create during a 'createdb' operation. A value
    # of zero (default) disables pluggable database creation.
    # Default: 0
    DBCA_PLUGGABLE_DB_COUNT=0
    #
    # Should a Policy Managed database be created taking into account the
    # options below. If set to 'no' an Admin Managed database is created.
    # Default: no
    DBCA_DATABASE_POLICY=no
    #
    # Create Server Pools (Policy Managed database).
    # Default: yes
    CLONE_CREATE_SERVERPOOLS=yes
    #
    # Recreate Server Pools; if already exist (Policy Managed database).
    # Default: no
    CLONE_RECREATE_SERVERPOOLS=no
    #
    # List of server pools to create (Policy Managed database).
    # Syntax is poolname:category:min:max
    # All except name can be omitted. Category can be Hub or Leaf.
    # Default: mypool
    CLONE_SERVERPOOLS="mypool"
    #
    # List of Server Pools to be used by the created database (Policy Managed database).
    # The server pools listed in DBCA_SERVERPOOLS must appear in CLONE_SERVERPOOLS
    # (and CLONE_CREATE_SERVERPOOLS set to yes), OR must be manually pre-created for
    # the create database to succeed.
    # Default: mypool
    DBCA_SERVERPOOLS="mypool"
    #
    # Database character set.
    # Default: WE8MSWIN1252 (previous default was AL32UTF8)
    # DATABASE_CHARACTERSET="WE8MSWIN1252"
    #
    # Use this DBCA template name, file must exist under $DBHOME/assistants/dbca/templates
    # Default: "General_Purpose.dbc"
    DBCA_TEMPLATE_NAME="General_Purpose.dbc"
    #
    # Should the database include the sample schema
    # Default: no
    DBCA_SAMPLE_SCHEMA=no
    #
    # Registers newly created database to be periodically monitored by Cluster Verification
    # Utility (CVU) on a continuous basis.
    # Default: no
    DBCA_RUN_CVU_PERIODICALLY=no
    #
    # Certain patches applied to the Oracle home require execution of some SQL post
    # database creation for the fix to be applied completely. These files are located
    # under patches/postsql subdirectory. It is possible to run them serially (adds
    # to overall build time), or in the background which is the default.
    # Note that when running in background these scripts may run a little longer after
    # the RAC Cluster + Database are finished building, however that should not cause
    # any issues. If overall build time is not a concern change this to NO and have
    # the scripts run as part of the actual build in serial.
    # Default: yes
    DBCA_POST_SQL_BG=yes
    #
    # An optional user custom SQL may be executed post database creation, default name of
    # script is user_custom_postsql.sql, it is located under patches/postsql subdirectory.
    # Default: user_custom_postsql.sql
    DBCA_POST_SQL_CUSTOM=user_custom_postsql.sql
    #
    # The Database Name
    # Default: ORCL
    DBNAME="ORCL"
    #
    # The Instance name, may be different than database name. Limited in length of
    # 1 to 8 for a RAC DB and 1 to 12 for Single Instance DB of alphanumeric characters.
    # Ignored for Policy Managed DB.
    # Default: ORCL
    SIDNAME="ORCL"
    #
    # Configure EM DB Express
    # Default: no
    CONFIGURE_DBEXPRESS=no
    #
    # DB Express port number. If left at the default, a free port will be assigned at
    # runtime, otherwise the port should be unused on all network adapters.
    # Default: 5500
    #DBEXPRESS_HTTPS_PORT=5500
    #
    # SCAN (Single Client Access Name) port number
    # Default: 1521
    SCANPORT=1521
    #
    # Local Listener port number
    # Default: 1521
    LISTENERPORT=1521
    #
    # Allows color coding of log messages, errors (red), warning (yellow),
    # info (green). By default no colors are used.
    # Default: NO
    CLONE_LOGWITH_COLORS=no
    #
    # END OF FILE
    #
    

Run the Deploycluster Tool

The Deploycluster tool can be run with several parameters; here we will use the following parameters:

  • -u specifies an Oracle VM Manager user.
  • -M provides the list of VMs to deploy.
  • -N specifies the network configuration file to use during deployment (netconfig12cRAC4node.ini).
  • -P specifies the parameters file to use when building the cluster (params12c.ini).
  • -D enables dry-run mode, which shows a simulation of the operations that would be performed.

Also, because of limited memory resources on the x86 machine, we will tell Deploycluster not to check the memory size of our VMs.

Perform the following steps:

  1. Change to the /SoftOracle/deploycluster directory.
  2. Edit the deploycluster.ini file by changing the DEPLOYCLUSTER_SKIP_VM_MEMORY_CHECK=no line to DEPLOYCLUSTER_SKIP_VM_MEMORY_CHECK=yes.
  3. Run the following command to run Deploycluster in Dryrun mode:

    ./deploycluster.py -u admin -M rac.? -N utils/netconfig12cRAC4node.ini -P utils/params12c.ini -D
    

    You will be asked for a password; use Welcome1.


    Figure 26. Running Deploycluster in Dryrun mode.

  4. Check for any errors. All steps should be green, as shown in Figure 26; if not, correct any issues.
  5. When all issues have been resolved, run the same command as before except do not use the -D parameter:

    ./deploycluster.py -u admin -M rac.? -N utils/netconfig12cRAC4node.ini -P utils/params12c.ini
    

    You will be asked for a password; use Welcome1.

  6. Change the default timeout for VNC consoles (setting it to 300 seconds instead of 30 seconds) by running the following commands.

    Note: When opening a VNC console for an Oracle VM guest for the first time, there are several warnings about security. It can take more than 30 seconds to read them and close the windows.

    # ssh root@192.168.56.3
    # cd /u01/app/oracle/ovm-manager-3/ovm_utils
    # ./ovm_managercontrol -u admin -p Welcome1 -h localhost -T 300 -c setsessiontimeout 300
    
  7. In the Oracle VM Manager GUI, check that all VMs are running and open a console on VM rac.0 by selecting the VM and clicking the monitor icon.


    Figure 27. Checking the VMs and opening a console.

    It is possible to monitor the progress of the cluster installation by using ssh to connect to the first VM (rac.0) and looking at /u01/racovm/buildcluster.log. This log file will contain information for all commands executed in verbose mode, so you can see the various tools—such as clone.pl, netca, and emca—that are executed and their output. To do this, perform the following steps.

  8. Wait for the login prompt on VM rac.0.
  9. Connect to VM rac.0 using ssh (as defined in the netconfig12cRAC4node.ini file, the IP address of that VM is 192.168.56.10):

    Note: The password is ovsroot.

    ssh root@192.168.56.10
    
  10. Check the progress of the buildcluster operation in the /u01/racovm/buildcluster.log log file:

    tail -f /u01/racovm/buildcluster.log
    

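As an aside, the one-line deploycluster.ini edit from step 2 can be applied non-interactively. A minimal sketch — the path is the one used in this lab; the snippet falls back to a temporary sample file when that path is absent, so it can be tried on any machine:

```shell
# Flip the memory-check flag in deploycluster.ini without opening an editor.
INI=/SoftOracle/deploycluster/deploycluster.ini
# Fall back to a sample file so the sketch is runnable outside the lab machine.
[ -f "$INI" ] || { INI=$(mktemp); echo 'DEPLOYCLUSTER_SKIP_VM_MEMORY_CHECK=no' > "$INI"; }
sed -i 's/^DEPLOYCLUSTER_SKIP_VM_MEMORY_CHECK=no$/DEPLOYCLUSTER_SKIP_VM_MEMORY_CHECK=yes/' "$INI"
grep DEPLOYCLUSTER_SKIP_VM_MEMORY_CHECK "$INI"
# → DEPLOYCLUSTER_SKIP_VM_MEMORY_CHECK=yes
```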
Summary

Congratulations! You have completed this lab.

As you can see, we are pretty much at the limit of what we can achieve with a "small" x86 machine. As long as the buildcluster operation is progressing on each node, access to the VMs will be quite slow. Because of several resource limitations on the x86 machine (CPU, disk access, and network bandwidth), you will not be able to see the end of the deployment during this lab session. However, using similar instructions, you can create a two-node cluster or even a single-instance deployment, which needs fewer resources than a four-node Oracle RAC cluster.

Appendix A: Oracle Flex ASM and Flex Cluster

About Oracle Flex ASM

In a typical Oracle Grid Infrastructure installation, each node runs its own ASM instance, which acts as the storage container for the databases running on that node. This setup carries a single-point-of-failure threat: if the ASM instance on a node fails, all the databases and instances running on that node are impacted.

To avoid a single point of failure for the ASM instance, Oracle Database 12c provides the Oracle Flex ASM feature. Oracle Flex ASM is a different concept and architecture altogether: only a small number of ASM instances run on a subset of the servers in the cluster. When an ASM instance fails on a node, Oracle Clusterware automatically restarts an ASM instance on a different node to maintain availability. This setup also provides load-balancing capabilities for the database instances that connect to ASM. Another advantage of Oracle Flex ASM is that it can be configured to run on a separate node.
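On a running Oracle Database 12c cluster, you can inspect and adjust the Flex ASM configuration from the command line. The fragment below is an illustrative sketch (run as the grid infrastructure owner, with the grid home environment set); the cardinality value of 3 is just an example, and the output depends on your cluster, so it is not reproduced here.

    # Report whether ASM runs in standard or Flex mode:
    asmcmd showclustermode

    # List the nodes currently hosting an ASM instance:
    srvctl status asm -detail

    # Set the ASM cardinality (target number of ASM instances) to 3:
    srvctl modify asm -count 3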

Oracle Flex ASM

Figure 28. Oracle Flex ASM.

About Flex Cluster

Oracle Database 12c supports two types of cluster configurations at the time of Oracle Clusterware installation: a traditional, standard cluster and a Flex Cluster.

In a traditional, standard cluster, all nodes in the cluster are tightly integrated with each other, interact through a private network, and can access the storage directly. A Flex Cluster, on the other hand, introduces two types of nodes arranged in a hub-and-leaf architecture. The hub nodes are arranged like the nodes in a traditional, standard cluster; that is, they are interconnected through a private network and have direct read/write access to the storage. The leaf nodes are different: they do not need direct access to the underlying storage; rather, they access the storage and data through the hub nodes.

You can configure up to 64 hub nodes and a much larger number of leaf nodes. In a Flex Cluster, hub nodes can exist without any leaf nodes configured, but leaf nodes cannot exist without hub nodes, and multiple leaf nodes can be attached to a single hub node. Only the hub nodes have direct access to the Oracle Cluster Registry and voting disks. When you plan large-scale cluster environments, a Flex Cluster is a great feature to use: this sort of setup greatly reduces interconnect traffic and provides room to scale the cluster well beyond what a traditional, standard cluster allows.
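The cluster mode and node roles described above can be queried and changed with crsctl. The fragment below is an illustrative sketch (run as root on a cluster node); the output depends on your cluster, so it is not reproduced here, and changing a node's role takes effect only after Oracle Clusterware is restarted on that node.

    # Report whether the cluster is configured in standard or flex mode:
    crsctl get cluster mode status

    # Show the configured role (hub or leaf) of the local node:
    crsctl get node role config

    # Change the local node's role to leaf:
    crsctl set node role leaf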

See Also

About the Author

Olivier Canonge is a systems sales consultant for Oracle in France.

Acknowledgments

Special thanks to Christophe Pauliat, Simon Coter, Saar Maoz, Doan Nguyen, Ludovic Sorriaux, Cecile Naud, and Robbie De Meyer for their contributions.

Revision 1.0, 01/10/2014
