Installing the Target Cluster

Part III of How to Upgrade to Oracle Solaris Cluster 4.0

by Tim Read

Published May 2012

Part I - Overview of the Example Configuration
Part II - Configuring the Oracle Database for Clustering
Part III - Installing the Target Cluster
Part IV - Creating the Zone Cluster
Part V - Installing the New Application Software Stack
Part VI - Creating the Standby Database
Part VII - Creating the Oracle Solaris Cluster Geographic Edition Configuration
Part VIII - How Oracle Solaris Cluster Geographic Edition Simplifies the Upgrade Process

After you assemble the target cluster hardware and connect it to the network and storage infrastructure, perform a standard Oracle Solaris 11 installation on both nodes. For more information about installing Oracle Solaris 11, see Installing Oracle Solaris 11 Systems.

When both nodes have rebooted successfully, begin the process of installing the Oracle Solaris Cluster 4.0 software, as described below.

Caution: If you choose to use this article as a guide for performing a similar process, pay close attention to the nodes on which the individual commands are run. For that reason, the system prompts shown in the example steps include the node name and the user name to indicate where, and as whom, each command must be run.

Before you can install the Oracle Solaris Cluster packages, you must ensure that the URI for the Oracle Solaris Cluster (ha-cluster) repository is configured as a publisher on both cluster nodes.

The example in Listing 1 assumes that you have already downloaded and saved the key and certificate files required to access the ha-cluster repository (stored here in /var/pkg/ssl). Be sure to repeat the pkg set-publisher and pkg install commands on both cluster nodes.

ppyrus1 (root) # pkg set-publisher \
-k /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.key.pem \
-c /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.certificate.pem \
-O https://pkg.oracle.com/ha-cluster/release ha-cluster
ppyrus1 (root) # pkg publisher
PUBLISHER                   TYPE     STATUS   URI
solaris                     origin   online   http://pkg.oracle.com/solaris/release/
ha-cluster                  origin   online   https://pkg.oracle.com/ha-cluster/release/
ppyrus1 (root) # pkg install ha-cluster-full
               Packages to install: 63
           Create boot environment: No
    Create backup boot environment: Yes
                Services to change:  9

DOWNLOAD                                PKGS       FILES    XFER (MB)
Completed                              63/63   8795/8795    71.1/71.1

PHASE                                        ACTIONS
Install Phase                            11516/11516

PHASE                                          ITEMS
Package State Update Phase                     63/63
Image State Update Phase                         2/2
ppyrus1 (root) # pkg info ha-cluster-full
          Name: ha-cluster/group-package/ha-cluster-full
       Summary: Oracle Solaris Cluster full installation group package
   Description: Oracle Solaris Cluster full installation group package
      Category: Meta Packages/Group Packages
         State: Installed
     Publisher: ha-cluster
       Version: 4.0.0
 Build Release: 5.11
        Branch: 0.22.1
Packaging Date: Tue Nov 15 01:10:10 2011
          Size: 5.88 kB
          FMRI: pkg://ha-cluster/ha-cluster/group-package/ha-cluster-full@4.0.0,5.11-0.22.1:20111115T011010Z

Listing 1. Configuring the Repository on Both Nodes
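
Because the publisher configuration and package installation must be identical on both nodes, it is worth verifying the result on the second node before continuing. The following commands are a minimal sketch of such a check (output omitted); they assume the key and certificate files were also copied to /var/pkg/ssl on ppyrus2 and that the commands from Listing 1 were repeated there.

# Confirm the ha-cluster publisher and the installed group package on the second node.
ppyrus2 (root) # pkg publisher ha-cluster
ppyrus2 (root) # pkg list ha-cluster-full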

After the Oracle Solaris Cluster software is installed, you can begin the configuration process. To ensure that the cluster does not depend on an external name service for host name resolution, the /etc/hosts file on both cluster nodes contains the host name mappings shown in Listing 2.

ppyrus1 (root) # cat /etc/hosts
::1 localhost 
127.0.0.1 localhost loghost 
10.134.108.94   ppyrus1 ppyrus1a
10.134.108.95   ppyrus2 ppyrus2a

# Virtual IP for Oracle DB on gyruss 
10.134.108.108  vzgyruss1b oracle-gyruss-lh 
    
# Virtual IP for Geo Edition on gyruss
10.134.108.109  vzgyruss2a gyruss  
     
# Virtual IP for Oracle DB on pyrus
10.134.108.111  vzpyrus1a oracle-pyrus-lh

# Virtual IP for Geo Edition in zone cluster
10.134.108.112  vzpyrus1b oracle-zc 

10.134.108.122  vzpyrus3a	# Virtual IP for zone cluster node
10.134.108.123  vzpyrus3b	# Virtual IP for zone cluster node
10.134.33.88    cheetah-c3	# The quorum server system

Listing 2. Host Name Mappings
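
To confirm that both nodes resolve these names from the local hosts file rather than from an external name service, you can run a quick lookup on each node. The commands below are a simple sketch using the names from this example configuration:

ppyrus1 (root) # getent hosts ppyrus2
ppyrus1 (root) # getent hosts cheetah-c3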

As shown in Listing 3, both cluster nodes have four bge network interfaces, where net0 (bge0) is connected to the public network and net1 (bge1) is connected to the private cluster network.

ppyrus1 (root) # dladm show-phys
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net1              Ethernet             unknown    0      unknown   bge1
net3              Ethernet             unknown    0      unknown   bge3
net0              Ethernet             up         1000   full      bge0
net2              Ethernet             unknown    0      unknown   bge2
ppyrus1 (root) # ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
net0/v4           static   ok           10.134.108.94/24
lo0/v6            static   ok           ::1/128

Listing 3. Network Interfaces
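
Before running scinstall, it is also worth confirming that the public interface on each node can reach the other node and the quorum server system. The following commands are a minimal sketch using the host names from Listing 2:

ppyrus1 (root) # ping ppyrus2
ppyrus1 (root) # ping cheetah-c3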

To install the Oracle Solaris Cluster software, run the scinstall program on the first cluster node (ppyrus1), as shown in Listing 4.

ppyrus1 (root) # export PATH=$PATH:/usr/cluster/bin
ppyrus1 (root) # scinstall


  *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Create a new cluster or add a cluster node
      * 2) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  1


  *** New Cluster and Cluster Node Menu ***

    Please select from any one of the following options:

        1) Create a new cluster
        2) Create just the first node of a new cluster on this machine
        3) Add this machine as a node in an existing cluster

        ?) Help with menu options
        q) Return to the Main Menu

    Option:  2


  *** Establish Just the First Node of a New Cluster ***


    This option is used to establish a new cluster using this machine as 
    the first node in that cluster.

    Before you select this option, the Oracle Solaris Cluster framework 
    software must already be installed. Use the Oracle Solaris Cluster 
    installation media or the IPS packaging system to install Oracle 
    Solaris Cluster software.

    Press Control-D at any time to return to the Main Menu.


    Do you want to continue (yes/no) [yes]?  yes


    Checking the value of property "local_only" of service svc:/network/rpc/bind ...
    Property "local_only" of service svc:/network/rpc/bind is already correctly set to "false" 
    on this node.
    
Press Enter to continue:  
    Checking whether NWAM is enabled on local node ... 


  >>> Typical or Custom Mode <<<

    This tool supports two modes of operation, Typical mode and Custom 
    mode. For most clusters, you can use Typical mode. However, you might 
    need to select the Custom mode option if not all of the Typical mode 
    defaults can be applied to your cluster.

    For more information about the differences between Typical and Custom 
    modes, select the Help option from the menu.

    Please select from one of the following options:

        1) Typical
        2) Custom

        ?) Help
        q) Return to the Main Menu

    Option [1]:  2


  >>> Cluster Name <<<

    Each cluster has a name assigned to it. The name can be made up of any
    characters other than whitespace. Each cluster name should be unique 
    within the namespace of your enterprise.

    What is the name of the cluster you want to establish?  pyrus


  >>> Check <<<

    This step allows you to run cluster check to verify that certain basic
    hardware and software pre-configuration requirements have been met. If
    cluster check detects potential problems with configuring this machine
    as a cluster node, a report of violated checks is prepared and 
    available for display on the screen.

    Do you want to run cluster check (yes/no) [yes]?  no


  >>> Cluster Nodes <<<

    This Oracle Solaris Cluster release supports a total of up to 16 
    nodes.

    List the names of the other nodes planned for the initial cluster 
    configuration. List one node name per line. When finished, type 
    Control-D:

    Node name (Control-D to finish):  ppyrus2
    Node name (Control-D to finish):  ^D


    This is the complete list of nodes:

        ppyrus1
        ppyrus2

    Is it correct (yes/no) [yes]?  yes


  >>> Authenticating Requests to Add Nodes <<<

    Once the first node establishes itself as a single node cluster, other
    nodes attempting to add themselves to the cluster configuration must 
    be found on the list of nodes you just provided. You can modify this 
    list by using claccess(1CL) or other tools once the cluster has been 
    established.

    By default, nodes are not securely authenticated as they attempt to 
    add themselves to the cluster configuration. This is generally 
    considered adequate, since nodes which are not physically connected to
    the private cluster interconnect will never be able to actually join 
    the cluster. However, DES authentication is available. If DES 
    authentication is selected, you must configure all necessary 
    encryption keys before any node will be allowed to join the cluster 
    (see keyserv(1M), publickey(4)).

    Do you need to use DES authentication (yes/no) [no]?  no


  >>> Minimum Number of Private Networks <<<

    Each cluster is typically configured with at least two private 
    networks. Configuring a cluster with just one private interconnect 
    provides less availability and will require the cluster to spend more 
    time in automatic recovery if that private interconnect fails.

    Should this cluster use at least two private networks (yes/no) [yes]?  no


  >>> Point-to-Point Cables <<<

    The two nodes of a two-node cluster may use a directly-connected 
    interconnect. That is, no cluster switches are configured. However, 
    when there are greater than two nodes, this interactive form of 
    scinstall assumes that there will be exactly one switch for each 
    private network.

    Does this two-node cluster use a switch (yes/no) [yes]?  yes


  >>> Cluster Switches <<<

    All cluster transport adapters in this cluster must be cabled to a 
    "switch". And, each adapter on a given node must be cabled to a 
    different switch. Interactive scinstall requires that you identify one
    switch for each private network in the cluster.

    What is the name of the switch in the cluster [switch1]?  switch1


  >>> Cluster Transport Adapters and Cables <<<

    Transport adapters are the adapters that attach to the private cluster
    interconnect.

    Select the cluster transport adapter:

        1) net1
        2) net2
        3) net3
        4) Other

    Option:  1

    Adapter "net1" is an Ethernet adapter.

    Searching for any unexpected network traffic on "net1" ... done
    Verification completed. No traffic was detected over a 10 second 
    sample period.

    The "dlpi" transport type will be set for this cluster.

    Name of the switch to which "net1" is connected [switch1]?  switch1

    Each adapter is cabled to a particular port on a switch. And, each 
    port is assigned a name. You can explicitly assign a name to each 
    port. Or, for Ethernet and Infiniband switches, you can choose to 
    allow scinstall to assign a default name for you. The default port 
    name assignment sets the name to the node number of the node hosting 
    the transport adapter at the other end of the cable.

    Use the default port name for the "net1" connection (yes/no) [yes]?  yes


  >>> Network Address for the Cluster Transport <<<

    The cluster transport uses a default network address of 172.16.0.0. If
    this IP address is already in use elsewhere within your enterprise, 
    specify another address from the range of recommended private 
    addresses (see RFC 1918 for details).

    The default netmask is 255.255.240.0. You can select another netmask, 
    as long as it minimally masks all bits that are given in the network 
    address.

    The default private netmask and network address result in an IP 
    address range that supports a cluster with a maximum of 64 nodes, 10 
    private networks, and 12 virtual clusters.

    Is it okay to accept the default network address (yes/no) [yes]?  yes

    Is it okay to accept the default netmask (yes/no) [yes]?  yes

    Plumbing network address 172.16.0.0 on adapter net1 >> NOT DUPLICATE ... done


  >>> Global Devices File System <<<

    Each node in the cluster must have a local file system mounted on 
    /global/.devices/node@<nodeID> before it can successfully participate 
    as a cluster member. Since the "nodeID" is not assigned until 
    scinstall is run, scinstall will set this up for you.

    You must supply the name of either an already-mounted file system or a
    raw disk partition which scinstall can use to create the global 
    devices file system. This file system or partition should be at least 
    512 MB in size.

    Alternatively, you can use a loopback file (lofi), with a new file 
    system, and mount it on /global/.devices/node@<nodeid>.

    If an already-mounted file system is used, the file system must be 
    empty. If a raw disk partition is used, a new file system will be 
    created for you.

    If the lofi method is used, scinstall creates a new 100 MB file system
    from a lofi device by using the file /.globaldevices. The lofi method 
    is typically preferred, since it does not require the allocation of a 
    dedicated disk slice.

    The default is to use lofi.


  >>> Set Global Fencing <<<

    Fencing is a mechanism that a cluster uses to protect data integrity 
    when the cluster interconnect between nodes is lost. By default, 
    fencing is turned on for global fencing, and each disk uses the global
    fencing setting. This screen allows you to turn off the global 
    fencing.

    Most of the time, leave fencing turned on. However, turn off fencing 
    when at least one of the following conditions is true: 1) Your shared 
    storage devices, such as Serial Advanced Technology Attachment (SATA) 
    disks, do not support SCSI; 2) You want to allow systems outside your 
    cluster to access storage devices attached to your cluster; 3) Oracle 
    Corporation has not qualified the SCSI persistent group reservation 
    (PGR) support for your shared storage devices.

    If you choose to turn off global fencing now, after your cluster 
    starts you can still use the cluster(1CL) command to turn on global 
    fencing.

    Do you want to turn off global fencing (yes/no) [no]?  no


  >>> Quorum Configuration <<<

    Every two-node cluster requires at least one quorum device. By 
    default, scinstall selects and configures a shared disk quorum device 
    for you.

    This screen allows you to disable the automatic selection and 
    configuration of a quorum device.

    You have chosen to turn on the global fencing. If your shared storage 
    devices do not support SCSI, such as Serial Advanced Technology 
    Attachment (SATA) disks, or if your shared disks do not support 
    SCSI-2, you must disable this feature.

    If you disable automatic quorum device selection now, or if you intend
    to use a quorum device that is not a shared disk, you must instead use
    clsetup(1M) to manually configure quorum once both nodes have joined 
    the cluster for the first time.

    Do you want to disable automatic quorum device selection (yes/no) [no]?  yes


  >>> Automatic Reboot <<<

    Once scinstall has successfully initialized the Oracle Solaris Cluster
    software for this machine, the machine must be rebooted. After the 
    reboot, this machine will be established as the first node in the new 
    cluster.

    Do you want scinstall to reboot for you (yes/no) [yes]?  no

    You will need to manually reboot this node in "cluster mode" after 
    scinstall successfully completes.

    
Press Enter to continue: 


  >>> Confirmation <<<

    Your responses indicate the following options to scinstall:

      scinstall -i \ 
           -C pyrus \ 
           -F \ 
           -G lofi \ 
           -T node=ppyrus1,node=ppyrus2,authtype=sys \ 
           -w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=64,maxprivatenets=10,numvirtualclusters=12 \ 
           -A trtype=dlpi,name=net1 \ 
           -B type=switch,name=switch1 \ 
           -m endpoint=:net1,endpoint=switch1

    Are these the options you want to use (yes/no) [yes]?  yes

    Do you want to continue with this configuration step (yes/no) [yes]?  yes


Initializing cluster name to "pyrus" ... done
Initializing authentication options ... done
Initializing configuration for adapter "net1" ... done
Initializing configuration for switch "switch1" ... done
Initializing configuration for cable ... done
Initializing private network address options ... done


Setting the node ID for "ppyrus1" ... done (id=1)



Initializing NTP configuration ... done

Updating nsswitch.conf ... done

Adding cluster node entries to /etc/inet/hosts ... done


Configuring IP multipathing groups ...done

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done

Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Oracle Solaris Cluster.
Please do not re-enable network routing.
    
Press Enter to continue:  



  *** New Cluster and Cluster Node Menu ***

    Please select from any one of the following options:

        1) Create a new cluster
        2) Create just the first node of a new cluster on this machine
        3) Add this machine as a node in an existing cluster

        ?) Help with menu options
        q) Return to the Main Menu

    Option:  q


  *** Main Menu ***

    Please select from one of the following (*) options:

        1) Create a new cluster or add a cluster node
      * 2) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  q



Log file - /var/cluster/logs/install/scinstall.log.1867

Listing 4. Installing the Oracle Solaris Cluster Software

Although Listing 4 shows the cluster check option being declined, you should run the check when you create your production cluster because it helps identify potential problems with the configuration.
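
If you skip the check during scinstall, you can still run it after the Oracle Solaris Cluster framework is installed. The following command is a minimal sketch; the options you need might differ, so see the cluster(1CL) man page for details.

ppyrus1 (root) # cluster check -v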

If no network IP multipathing (IPMP) groups exist, the scinstall program creates one for you. After the first cluster node is rebooted, you can see this addition, together with the interface configured for the private cluster network, by running the following command:

ppyrus1 (root) # ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
sc_ipmp0/static1  static   ok           10.134.108.94/24
clprivnet0/?      static   ok           172.16.4.1/23
lo0/v6            static   ok           ::1/128
net0/_a           static   ok           fe80::214:4fff:fe4d:9e59/10
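
Because the automatic reboot was declined in Listing 4, the node must be rebooted manually (for example, with init 6) before the output above appears. Once ppyrus1 is back up in cluster mode, a membership check such as the following, a minimal sketch, confirms that it has formed the single-node cluster:

ppyrus1 (root) # clnode status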

Next, configure the second cluster node (ppyrus2), as shown in Listing 5.

ppyrus2 (root) # export PATH=$PATH:/usr/cluster/bin
ppyrus2 (root) # scinstall


  *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Create a new cluster or add a cluster node
      * 2) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  1


  *** New Cluster and Cluster Node Menu ***

    Please select from any one of the following options:

        1) Create a new cluster
        2) Create just the first node of a new cluster on this machine
        3) Add this machine as a node in an existing cluster

        ?) Help with menu options
        q) Return to the Main Menu

    Option:  3


  *** Add a Node to an Existing Cluster ***


    This option is used to add this machine as a node in an already 
    established cluster. If this is a new cluster, there may only be a 
    single node which has established itself in the new cluster.

    Before you select this option, the Oracle Solaris Cluster framework 
    software must already be installed. Use the Oracle Solaris Cluster 
    installation media or the IPS packaging system to install Oracle 
    Solaris Cluster software.

    Press Control-D at any time to return to the Main Menu.


    Do you want to continue (yes/no) [yes]?  


    Checking the value of property "local_only" of service svc:/network/rpc/bind ...
    Property "local_only" of service svc:/network/rpc/bind is already correctly set to "false" on this node.
    
Press Enter to continue:  
    Checking whether NWAM is enabled on local node ... 


  >>> Typical or Custom Mode <<<

    This tool supports two modes of operation, Typical mode and Custom 
    mode. For most clusters, you can use Typical mode. However, you might 
    need to select the Custom mode option if not all of the Typical mode 
    defaults can be applied to your cluster.

    For more information about the differences between Typical and Custom 
    modes, select the Help option from the menu.

    Please select from one of the following options:

        1) Typical
        2) Custom

        ?) Help
        q) Return to the Main Menu

    Option [1]:  2



  >>> Sponsoring Node <<<

    For any machine to join a cluster, it must identify a node in that 
    cluster willing to "sponsor" its membership in the cluster. When 
    configuring a new cluster, this "sponsor" node is typically the first 
    node used to build the new cluster. However, if the cluster is already
    established, the "sponsoring" node can be any node in that cluster.

    Already established clusters can keep a list of hosts which are able 
    to configure themselves as new cluster members. This machine should be
    in the join list of any cluster which it tries to join. If the list 
    does not include this machine, you may need to add it by using 
    claccess(1CL) or other tools.

    And, if the target cluster uses DES to authenticate new machines 
    attempting to configure themselves as new cluster members, the 
    necessary encryption keys must be configured before any attempt to 
    join.

    What is the name of the sponsoring node?  ppyrus1


  >>> Cluster Name <<<

    Each cluster has a name assigned to it. When adding a node to the 
    cluster, you must identify the name of the cluster you are attempting 
    to join. A sanity check is performed to verify that the "sponsoring" 
    node is a member of that cluster.

    What is the name of the cluster you want to join?  pyrus

    Attempting to contact "ppyrus1" ... done

    Cluster name "pyrus" is correct.
    
Press Enter to continue:  


  >>> Check <<<

    This step allows you to run cluster check to verify that certain basic
    hardware and software pre-configuration requirements have been met. If
    cluster check detects potential problems with configuring this machine
    as a cluster node, a report of violated checks is prepared and 
    available for display on the screen.

    Do you want to run cluster check (yes/no) [yes]?  no



  >>> Autodiscovery of Cluster Transport <<<

    If you are using Ethernet or Infiniband adapters as the cluster 
    transport adapters, autodiscovery is the best method for configuring 
    the cluster transport.

    Do you want to use autodiscovery (yes/no) [yes]?  yes


    Probing .......................

    The following connection was discovered:

        ppyrus1:net1  switch1  ppyrus2:net1

    Is it okay to configure this connection (yes/no) [yes]?  yes


  >>> Global Devices File System <<<

    Each node in the cluster must have a local file system mounted on 
    /global/.devices/node@<nodeID> before it can successfully participate 
    as a cluster member. Since the "nodeID" is not assigned until 
    scinstall is run, scinstall will set this up for you.

    You must supply the name of either an already-mounted file system or a
    raw disk partition which scinstall can use to create the global 
    devices file system. This file system or partition should be at least 
    512 MB in size.

    Alternatively, you can use a loopback file (lofi), with a new file 
    system, and mount it on /global/.devices/node@<nodeid>.

    If an already-mounted file system is used, the file system must be 
    empty. If a raw disk partition is used, a new file system will be 
    created for you.

    If the lofi method is used, scinstall creates a new 100 MB file system
    from a lofi device by using the file /.globaldevices. The lofi method 
    is typically preferred, since it does not require the allocation of a 
    dedicated disk slice.

    The default is to use lofi.


  >>> Automatic Reboot <<<

    Once scinstall has successfully initialized the Oracle Solaris Cluster
    software for this machine, the machine must be rebooted. The reboot 
    will cause this machine to join the cluster for the first time.

    Do you want scinstall to reboot for you (yes/no) [yes]?  no

    You will need to manually reboot this node in "cluster mode" after 
    scinstall successfully completes.

    
Press Enter to continue: 



  >>> Confirmation <<<

    Your responses indicate the following options to scinstall:

      scinstall -i \ 
           -C pyrus \ 
           -N ppyrus1 \ 
           -G lofi \ 
           -A trtype=dlpi,name=net1 \ 
           -m endpoint=:net1,endpoint=switch1

    Are these the options you want to use (yes/no) [yes]?  yes

    Do you want to continue with this configuration step (yes/no) [yes]?  yes


Adding node "ppyrus2" to the cluster configuration ... done
Adding adapter "net1" to the cluster configuration ... done
Adding cable to the cluster configuration ... done

Copying the config from "ppyrus1" ... done

Copying the postconfig file from "ppyrus1" if it exists ... done
No postconfig file found on "ppyrus1", continuing


Setting the node ID for "ppyrus2" ... done (id=2)

Verifying the major number for the "did" driver with "ppyrus1" ... done


Initializing NTP configuration ... done

Updating nsswitch.conf ... done

Adding cluster node entries to /etc/inet/hosts ... done


Configuring IP multipathing groups ...done

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done

Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Oracle Solaris Cluster.
Please do not re-enable network routing.

Updating file ("ntp.conf.cluster") on node ppyrus1 ... done
Updating file ("hosts") on node ppyrus1 ... done
    
Press Enter to continue: 


  *** New Cluster and Cluster Node Menu ***

    Please select from any one of the following options:

        1) Create a new cluster
        2) Create just the first node of a new cluster on this machine
        3) Add this machine as a node in an existing cluster

        ?) Help with menu options
        q) Return to the Main Menu

    Option:  q


  *** Main Menu ***

    Please select from one of the following (*) options:

        1) Create a new cluster or add a cluster node
      * 2) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  q



Log file - /var/cluster/logs/install/scinstall.log.1931

Listing 5. Configuring the Second Cluster Node

After the second node has rebooted successfully, configure a quorum device to complete the installation process. Although this action can be performed from either node, Listing 6 shows it being performed from ppyrus1.
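
Before adding the quorum server as a quorum device, the quorum server process must already be installed and running on the quorum server system (cheetah-c3 in this example) and listening on the chosen port (9001 here). As a minimal sketch, and assuming the Oracle Solaris Cluster quorum server software is installed on that machine, you could verify this with a command such as the following; see the clquorumserver(1CL) man page for the exact syntax:

# Run on the quorum server system (assumes clquorumserver is in the path there).
cheetah-c3 (root) # clquorumserver show +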

ppyrus1 (root) # export PATH=$PATH:/usr/cluster/bin
ppyrus1 (root) # clsetup


  >>> Initial Cluster Setup <<<

    This program has detected that the cluster "installmode" attribute is 
    still enabled. As such, certain initial cluster setup steps will be 
    performed at this time. This includes adding any necessary quorum 
    devices, then resetting both the quorum vote counts and the 
    "installmode" property.

    Please do not proceed if any additional nodes have yet to join the 
    cluster.

    Is it okay to continue (yes/no) [yes]?  yes

    Do you want to add any quorum devices (yes/no) [yes]?  yes

    Following are supported Quorum Devices types in Oracle Solaris 
    Cluster. Please refer to Oracle Solaris Cluster documentation for 
    detailed information on these supported quorum device topologies.

    What is the type of device you want to use?

        1) Directly attached shared disk
        2) Network Attached Storage (NAS) from Network Appliance
        3) Quorum Server

        q) Return to the quorum menu

    Option:  3


  >>> Add a Quorum Server Quorum Device <<<

    A Quorum Server process runs on a machine outside Oracle Solaris 
    Cluster and serves the cluster as a quorum device. Before configuring 
    the quorum server as a quorum device into the cluster, you will need 
    to setup the quorum server machine and start the quorum server 
    process. For detailed information on setting up a quorum server, refer
    to Oracle Solaris Cluster system administration guide.

    You will need to specify a device name for the quorum server quorum 
    device, which must be unique across all quorum devices, the IP address
    of the quorum server machine, or hostname if the machine is added into
    /etc/hosts, and a port number on the quorum server machine used to 
    communicate with the cluster nodes. Please refer to the clquorum(1M) 
    man page and other Oracle Solaris Cluster documentation for details.

    Is it okay to continue (yes/no) [yes]?  

    What name do you want to use for this quorum device?  cheetah_c3_qs_9001

    What is the IP address of the quorum server machine?  10.134.33.88

    What is the port number on the quorum server machine?  9001

    Is it okay to proceed with the update (yes/no) [yes]?  yes

/usr/cluster/bin/clquorum add -t quorum_server -p qshost=10.134.33.88 -p port=9001 cheetah_c3_qs_9001

    Command completed successfully.

    
Press Enter to continue:  
 

    Do you want to add another quorum device (yes/no) [yes]?  no

    Once the "installmode" property has been reset, this program will skip
    "Initial Cluster Setup" each time it is run again in the future. 
    However, quorum devices can always be added to the cluster using the 
    regular menu options. Resetting this property fully activates quorum 
    settings and is necessary for the normal and safe operation of the 
    cluster.

    Is it okay to reset "installmode" (yes/no) [yes]?  yes


/usr/cluster/bin/clquorum reset
/usr/cluster/bin/claccess deny-all

    Cluster initialization is complete.


    Type ENTER to proceed to the main menu:  


  *** Main Menu ***

    Please select from one of the following options:

        1) Quorum
        2) Resource groups
        3) Data Services
        4) Cluster interconnect
        5) Device groups and volumes
        6) Private hostnames
        7) New nodes
        8) Other cluster tasks

        ?) Help with menu options
        q) Quit

    Option:  q

Listing 6. Configuring a Quorum Device
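
After clsetup completes, you can confirm that the quorum device is online and that both nodes hold their expected votes. A minimal sketch of such a verification, run from either node, is:

ppyrus1 (root) # clquorum status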

For more information about configuring and installing the Oracle Solaris Cluster 4.0 software, see the Oracle Solaris Cluster Software Installation Guide.

Revision 1.0, 05/01/2012
