Article
Server and Storage Administration
By Tim Read
Published May 2012
Part 1 — Example Configuration Overview

After assembling the cluster hardware and connecting it to the network and storage infrastructure, perform a standard Oracle Solaris 11 installation on both nodes. For details on how to install Oracle Solaris 11, see Installing Oracle Solaris 11 Systems.

When both nodes have rebooted successfully, begin installing the Oracle Solaris Cluster 4.0 software, as follows.

Note: If you choose to use this article as a guide for performing a similar procedure, pay close attention to which node each command is run on. For that reason, the system prompts shown in the example steps include the node name and user name to indicate where each command must be run and by whom.

To be able to install the Oracle Solaris Cluster package consolidation, you must ensure that the URI for the Oracle Solaris Cluster repository is configured on both cluster nodes.

The example in Listing 1 shows the procedure after the key and certificate files required to access the ha-cluster repository have been downloaded and saved. Be sure to repeat the pkg set-publisher and pkg install commands on both cluster nodes.
ppyrus1 (root) # pkg set-publisher \
-k /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.key.pem \
-c /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.certificate.pem \
-O https://pkg.oracle.com/ha-cluster/release ha-cluster
ppyrus1 (root) #
ppyrus1 (root) # pkg publisher
PUBLISHER                             TYPE     STATUS   URI
solaris                               origin   online   http://pkg.oracle.com/solaris/release/
ha-cluster                            origin   online   https://pkg.oracle.com/ha-cluster/release/
ppyrus1 (root) # pkg install ha-cluster-full
               Packages to install: 63
           Create boot environment: No
    Create backup boot environment: Yes
                Services to change: 9

DOWNLOAD                              PKGS       FILES    XFER (MB)
Completed                            63/63   8795/8795    71.1/71.1

PHASE                                    ACTIONS
Install Phase                        11516/11516

PHASE                                      ITEMS
Package State Update Phase                 63/63
Image State Update Phase                     2/2
ppyrus1 (root) #
ppyrus1 (root) # pkg info ha-cluster-full
          Name: ha-cluster/group-package/ha-cluster-full
       Summary: Oracle Solaris Cluster full installation group package
   Description: Oracle Solaris Cluster full installation group package
      Category: Meta Packages/Group Packages
         State: Installed
     Publisher: ha-cluster
       Version: 4.0.0
 Build Release: 5.11
        Branch: 0.22.1
Packaging Date: Tue Nov 15 01:10:10 2011
          Size: 5.88 kB
          FMRI: pkg://ha-cluster/ha-cluster/group-package/ha-cluster-full@4.0.0,5.11-0.22.1:20111115T011010Z
Listing 1. Configuring the repository on both nodes
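Because the publisher must be set up identically on both cluster nodes, it can be worth scripting the verification step. The following sketch parses `pkg publisher` output (column layout as shown in Listing 1) and reports the status of the ha-cluster origin; the function name and parsing rules are illustrative assumptions, not part of the pkg tooling.

```python
# Sketch: confirm the ha-cluster publisher is configured and online by
# parsing `pkg publisher` output. The column layout (PUBLISHER TYPE
# STATUS URI) matches Listing 1; the helper itself is hypothetical.

def publisher_status(pkg_publisher_output, publisher="ha-cluster"):
    """Return the STATUS column for `publisher`, or None if absent."""
    for line in pkg_publisher_output.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[0] == publisher:
            return fields[2]
    return None

sample = """PUBLISHER    TYPE     STATUS   URI
solaris      origin   online   http://pkg.oracle.com/solaris/release/
ha-cluster   origin   online   https://pkg.oracle.com/ha-cluster/release/"""

print(publisher_status(sample))  # online
```

Running the same check on each node (for example, over ssh) catches the common mistake of configuring the repository on only one of the two nodes.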
With the Oracle Solaris Cluster software installed, the configuration process can begin. To ensure that the cluster does not depend on an external name service for these entries, the hosts file on both cluster nodes contains the host name mappings shown in Listing 2.
ppyrus1 (root) # cat /etc/hosts
::1 localhost
127.0.0.1 localhost loghost
10.134.108.94 ppyrus1 ppyrus1a
10.134.108.95 ppyrus2 ppyrus2a
# Virtual IP for Oracle DB on gyruss
10.134.108.108 vzgyruss1b oracle-gyruss-lh
# Virtual IP for Geo Edition on gyruss
10.134.108.109 vzgyruss2a gyruss
# Virtual IP for Oracle DB on pyrus
10.134.108.111 vzpyrus1a oracle-pyrus-lh
# Virtual IP for Geo Edition in zone cluster
10.134.108.112 vzpyrus1b oracle-zc
10.134.108.122 vzpyrus3a # Virtual IP for zone cluster node
10.134.108.123 vzpyrus3b # Virtual IP for zone cluster node
10.134.33.88 cheetah-c3 # The quorum server system
Listing 2. Host name mappings
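Since both nodes must carry the same mappings for the cluster to avoid depending on an external name service, a small script can confirm the two copies agree. This is a minimal sketch that parses hosts-format text into a name-to-address mapping so the files from the two nodes can be compared; the comment handling and comparison are illustrative assumptions.

```python
# Sketch: parse /etc/hosts-style text into {hostname: address} so the
# copies on the two cluster nodes can be diffed. Hypothetical helper;
# real /etc/hosts parsing on Oracle Solaris is done by the system.

def parse_hosts(text):
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if not line:
            continue
        addr, *names = line.split()
        for name in names:
            mapping[name] = addr                # aliases map to same address
    return mapping

node1 = "10.134.108.94 ppyrus1 ppyrus1a\n10.134.108.95 ppyrus2 ppyrus2a"
node2 = "10.134.108.94 ppyrus1 ppyrus1a\n10.134.108.95 ppyrus2 ppyrus2a"
assert parse_hosts(node1) == parse_hosts(node2)
print(parse_hosts(node1)["ppyrus1"])  # 10.134.108.94
```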
As shown in Listing 3, both cluster nodes have four bge network interfaces, with net0 (bge0) connected to the public network and net1 (bge1) connected to the private cluster network.
ppyrus1 (root) # dladm show-phys
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net1              Ethernet             unknown    0      unknown   bge1
net3              Ethernet             unknown    0      unknown   bge3
net0              Ethernet             up         1000   full      bge0
net2              Ethernet             unknown    0      unknown   bge2
ppyrus1 (root) # ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
net0/v4           static   ok           10.134.108.94/24
lo0/v6            static   ok           ::1/128
Listing 3. Network interfaces
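When scripting checks across many nodes, the cabled interfaces can be identified mechanically rather than by eye. The sketch below picks out the datalinks that `dladm show-phys` reports as up, using the column layout from Listing 3; the helper and its parsing rules are assumptions for illustration.

```python
# Sketch: list the datalinks reported "up" by `dladm show-phys`.
# Columns assumed (as in Listing 3): LINK MEDIA STATE SPEED DUPLEX DEVICE.

def links_up(dladm_output):
    up = []
    for line in dladm_output.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if len(fields) >= 6 and fields[2] == "up":
            up.append(fields[0])
    return up

sample = """LINK   MEDIA      STATE     SPEED  DUPLEX   DEVICE
net1   Ethernet   unknown   0      unknown  bge1
net0   Ethernet   up        1000   full     bge0"""

print(links_up(sample))  # ['net0']
```

In this example configuration only net0, the public-network interface, shows up before the cluster is configured; net1 comes into use once the private interconnect is established.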
To configure the Oracle Solaris Cluster software, run the scinstall program on the first cluster node (ppyrus1), as shown in Listing 4.
ppyrus1 (root) # export PATH=$PATH:/usr/cluster/bin
ppyrus1 (root) # scinstall
*** Main Menu ***
Please select from one of the following (*) options:
* 1) Create a new cluster or add a cluster node
* 2) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: 1
*** New Cluster and Cluster Node Menu ***
Please select from any one of the following options:
1) Create a new cluster
2) Create just the first node of a new cluster on this machine
3) Add this machine as a node in an existing cluster
?) Help with menu options
q) Return to the Main Menu
Option: 2
*** Establish Just the First Node of a New Cluster ***
This option is used to establish a new cluster using this machine as
the first node in that cluster.
Before you select this option, the Oracle Solaris Cluster framework
software must already be installed. Use the Oracle Solaris Cluster
installation media or the IPS packaging system to install Oracle
Solaris Cluster software.
Press Control-D at any time to return to the Main Menu.
Do you want to continue (yes/no) [yes]? yes
Checking the value of property "local_only" of service svc:/network/rpc/bind ...
Property "local_only" of service svc:/network/rpc/bind is already correctly set to "false"
on this node.
Press Enter to continue:
Checking whether NWAM is enabled on local node ...
>>> Typical or Custom Mode <<<
This tool supports two modes of operation, Typical mode and Custom
mode. For most clusters, you can use Typical mode. However, you might
need to select the Custom mode option if not all of the Typical mode
defaults can be applied to your cluster.
For more information about the differences between Typical and Custom
modes, select the Help option from the menu.
Please select from one of the following options:
1) Typical
2) Custom
?) Help
q) Return to the Main Menu
Option [1]: 2
>>> Cluster Name <<<
Each cluster has a name assigned to it. The name can be made up of any
characters other than whitespace. Each cluster name should be unique
within the namespace of your enterprise.
What is the name of the cluster you want to establish? pyrus
>>> Check <<<
This step allows you to run cluster check to verify that certain basic
hardware and software pre-configuration requirements have been met. If
cluster check detects potential problems with configuring this machine
as a cluster node, a report of violated checks is prepared and
available for display on the screen.
Do you want to run cluster check (yes/no) [yes]? no
>>> Cluster Nodes <<<
This Oracle Solaris Cluster release supports a total of up to 16
nodes.
List the names of the other nodes planned for the initial cluster
configuration. List one node name per line. When finished, type
Control-D:
Node name (Control-D to finish): ppyrus2
Node name (Control-D to finish): ^D
This is the complete list of nodes:
ppyrus1
ppyrus2
Is it correct (yes/no) [yes]? yes
>>> Authenticating Requests to Add Nodes <<<
Once the first node establishes itself as a single node cluster, other
nodes attempting to add themselves to the cluster configuration must
be found on the list of nodes you just provided. You can modify this
list by using claccess(1CL) or other tools once the cluster has been
established.
By default, nodes are not securely authenticated as they attempt to
add themselves to the cluster configuration. This is generally
considered adequate, since nodes which are not physically connected to
the private cluster interconnect will never be able to actually join
the cluster. However, DES authentication is available. If DES
authentication is selected, you must configure all necessary
encryption keys before any node will be allowed to join the cluster
(see keyserv(1M), publickey(4)).
Do you need to use DES authentication (yes/no) [no]? no
>>> Minimum Number of Private Networks <<<
Each cluster is typically configured with at least two private
networks. Configuring a cluster with just one private interconnect
provides less availability and will require the cluster to spend more
time in automatic recovery if that private interconnect fails.
Should this cluster use at least two private networks (yes/no) [yes]? no
>>> Point-to-Point Cables <<<
The two nodes of a two-node cluster may use a directly-connected
interconnect. That is, no cluster switches are configured. However,
when there are greater than two nodes, this interactive form of
scinstall assumes that there will be exactly one switch for each
private network.
Does this two-node cluster use a switch (yes/no) [yes]? yes
>>> Cluster Switches <<<
All cluster transport adapters in this cluster must be cabled to a
"switch". And, each adapter on a given node must be cabled to a
different switch. Interactive scinstall requires that you identify one
switch for each private network in the cluster.
What is the name of the switch in the cluster [switch1]? switch1
>>> Cluster Transport Adapters and Cables <<<
Transport adapters are the adapters that attach to the private cluster
interconnect.
Select the cluster transport adapter:
1) net1
2) net2
3) net3
4) Other
Option: 1
Adapter "net1" is an Ethernet adapter.
Searching for any unexpected network traffic on "net1" ... done
Verification completed. No traffic was detected over a 10 second
sample period.
The "dlpi" transport type will be set for this cluster.
Name of the switch to which "net1" is connected [switch1]? switch1
Each adapter is cabled to a particular port on a switch. And, each
port is assigned a name. You can explicitly assign a name to each
port. Or, for Ethernet and Infiniband switches, you can choose to
allow scinstall to assign a default name for you. The default port
name assignment sets the name to the node number of the node hosting
the transport adapter at the other end of the cable.
Use the default port name for the "net1" connection (yes/no) [yes]? yes
>>> Network Address for the Cluster Transport <<<
The cluster transport uses a default network address of 172.16.0.0. If
this IP address is already in use elsewhere within your enterprise,
specify another address from the range of recommended private
addresses (see RFC 1918 for details).
The default netmask is 255.255.240.0. You can select another netmask,
as long as it minimally masks all bits that are given in the network
address.
The default private netmask and network address result in an IP
address range that supports a cluster with a maximum of 64 nodes, 10
private networks, and 12 virtual clusters.
Is it okay to accept the default network address (yes/no) [yes]? yes
Is it okay to accept the default netmask (yes/no) [yes]? yes
Plumbing network address 172.16.0.0 on adapter net1 >> NOT DUPLICATE ... done
>>> Global Devices File System <<<
Each node in the cluster must have a local file system mounted on
/global/.devices/node@<nodeID> before it can successfully participate
as a cluster member. Since the "nodeID" is not assigned until
scinstall is run, scinstall will set this up for you.
You must supply the name of either an already-mounted file system or a
raw disk partition which scinstall can use to create the global
devices file system. This file system or partition should be at least
512 MB in size.
Alternatively, you can use a loopback file (lofi), with a new file
system, and mount it on /global/.devices/node@<nodeid>.
If an already-mounted file system is used, the file system must be
empty. If a raw disk partition is used, a new file system will be
created for you.
If the lofi method is used, scinstall creates a new 100 MB file system
from a lofi device by using the file /.globaldevices. The lofi method
is typically preferred, since it does not require the allocation of a
dedicated disk slice.
The default is to use lofi.
>>> Set Global Fencing <<<
Fencing is a mechanism that a cluster uses to protect data integrity
when the cluster interconnect between nodes is lost. By default,
fencing is turned on for global fencing, and each disk uses the global
fencing setting. This screen allows you to turn off the global
fencing.
Most of the time, leave fencing turned on. However, turn off fencing
when at least one of the following conditions is true: 1) Your shared
storage devices, such as Serial Advanced Technology Attachment (SATA)
disks, do not support SCSI; 2) You want to allow systems outside your
cluster to access storage devices attached to your cluster; 3) Oracle
Corporation has not qualified the SCSI persistent group reservation
(PGR) support for your shared storage devices.
If you choose to turn off global fencing now, after your cluster
starts you can still use the cluster(1CL) command to turn on global
fencing.
Do you want to turn off global fencing (yes/no) [no]? no
>>> Quorum Configuration <<<
Every two-node cluster requires at least one quorum device. By
default, scinstall selects and configures a shared disk quorum device
for you.
This screen allows you to disable the automatic selection and
configuration of a quorum device.
You have chosen to turn on the global fencing. If your shared storage
devices do not support SCSI, such as Serial Advanced Technology
Attachment (SATA) disks, or if your shared disks do not support
SCSI-2, you must disable this feature.
If you disable automatic quorum device selection now, or if you intend
to use a quorum device that is not a shared disk, you must instead use
clsetup(1M) to manually configure quorum once both nodes have joined
the cluster for the first time.
Do you want to disable automatic quorum device selection (yes/no) [no]? yes
>>> Automatic Reboot <<<
Once scinstall has successfully initialized the Oracle Solaris Cluster
software for this machine, the machine must be rebooted. After the
reboot, this machine will be established as the first node in the new
cluster.
Do you want scinstall to reboot for you (yes/no) [yes]? no
You will need to manually reboot this node in "cluster mode" after
scinstall successfully completes.
Press Enter to continue:
>>> Confirmation <<<
Your responses indicate the following options to scinstall:
scinstall -i \
-C pyrus \
-F \
-G lofi \
-T node=ppyrus1,node=ppyrus2,authtype=sys \
-w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=64,maxprivatenets=10,numvirtualclusters=12 \
-A trtype=dlpi,name=net1 \
-B type=switch,name=switch1 \
-m endpoint=:net1,endpoint=switch1
Are these the options you want to use (yes/no) [yes]? yes
Do you want to continue with this configuration step (yes/no) [yes]? yes
Initializing cluster name to "pyrus" ... done
Initializing authentication options ... done
Initializing configuration for adapter "net1" ... done
Initializing configuration for switch "switch1" ... done
Initializing configuration for cable ... done
Initializing private network address options ... done
Setting the node ID for "ppyrus1" ... done (id=1)
Initializing NTP configuration ... done
Updating nsswitch.conf ... done
Adding cluster node entries to /etc/inet/hosts ... done
Configuring IP multipathing groups ...done
Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Oracle Solaris Cluster.
Please do not re-enable network routing.
Press Enter to continue:
*** New Cluster and Cluster Node Menu ***
Please select from any one of the following options:
1) Create a new cluster
2) Create just the first node of a new cluster on this machine
3) Add this machine as a node in an existing cluster
?) Help with menu options
q) Return to the Main Menu
Option: q
*** Main Menu ***
Please select from one of the following (*) options:
1) Create a new cluster or add a cluster node
* 2) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: q
Log file - /var/cluster/logs/install/scinstall.log.1867
Listing 4. Configuring the Oracle Solaris Cluster software
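The scinstall dialog accepts the default transport network address of 172.16.0.0 with netmask 255.255.240.0, which it says supports up to 64 nodes, 10 private networks, and 12 virtual clusters. How scinstall carves the range up is internal to the tool, but the raw size of the range, and the fact that the clprivnet0 address assigned after the first reboot (172.16.4.1) falls inside it, can be checked with Python's standard ipaddress module:

```python
# The default cluster transport network from the scinstall dialog:
# 172.16.0.0 with netmask 255.255.240.0, i.e. a /20 of 4096 addresses.
# How scinstall partitions this into nodes, private networks, and
# virtual clusters is internal to the tool and not reproduced here.
import ipaddress

net = ipaddress.ip_network("172.16.0.0/255.255.240.0")
print(net.prefixlen)      # 20
print(net.num_addresses)  # 4096

# The clprivnet0 address seen after the first node reboots lies
# within this transport range, as expected.
print(ipaddress.ip_address("172.16.4.1") in net)  # True
```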
Although Listing 4 shows the cluster check option being declined, you should run this option when creating a production cluster, because it helps identify potential problems with the configuration.

If a network IP multipathing (IPMP) group does not already exist, the scinstall program creates one for you. After the first cluster node reboots, you can see this addition, together with the interface configured for the private cluster network, by running the following command:
ppyrus1 (root) # ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
sc_ipmp0/static1  static   ok           10.134.108.94/24
clprivnet0/?      static   ok           172.16.4.1/23
lo0/v6            static   ok           ::1/128
net0/_a           static   ok           fe80::214:4fff:fe4d:9e59/10
Next, configure the second cluster node (ppyrus2), as shown in Listing 5.
ppyrus2 (root) # export PATH=$PATH:/usr/cluster/bin
ppyrus2 (root) # scinstall
*** Main Menu ***
Please select from one of the following (*) options:
* 1) Create a new cluster or add a cluster node
* 2) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: 1
*** New Cluster and Cluster Node Menu ***
Please select from any one of the following options:
1) Create a new cluster
2) Create just the first node of a new cluster on this machine
3) Add this machine as a node in an existing cluster
?) Help with menu options
q) Return to the Main Menu
Option: 3
*** Add a Node to an Existing Cluster ***
This option is used to add this machine as a node in an already
established cluster. If this is a new cluster, there may only be a
single node which has established itself in the new cluster.
Before you select this option, the Oracle Solaris Cluster framework
software must already be installed. Use the Oracle Solaris Cluster
installation media or the IPS packaging system to install Oracle
Solaris Cluster software.
Press Control-D at any time to return to the Main Menu.
Do you want to continue (yes/no) [yes]?
Checking the value of property "local_only" of service svc:/network/rpc/bind ...
Property "local_only" of service svc:/network/rpc/bind is already correctly set to "false" on this node.
Press Enter to continue:
Checking whether NWAM is enabled on local node ...
>>> Typical or Custom Mode <<<
This tool supports two modes of operation, Typical mode and Custom
mode. For most clusters, you can use Typical mode. However, you might
need to select the Custom mode option if not all of the Typical mode
defaults can be applied to your cluster.
For more information about the differences between Typical and Custom
modes, select the Help option from the menu.
Please select from one of the following options:
1) Typical
2) Custom
?) Help
q) Return to the Main Menu
Option [1]: 2
>>> Sponsoring Node <<<
For any machine to join a cluster, it must identify a node in that
cluster willing to "sponsor" its membership in the cluster. When
configuring a new cluster, this "sponsor" node is typically the first
node used to build the new cluster. However, if the cluster is already
established, the "sponsoring" node can be any node in that cluster.
Already established clusters can keep a list of hosts which are able
to configure themselves as new cluster members. This machine should be
in the join list of any cluster which it tries to join. If the list
does not include this machine, you may need to add it by using
claccess(1CL) or other tools.
And, if the target cluster uses DES to authenticate new machines
attempting to configure themselves as new cluster members, the
necessary encryption keys must be configured before any attempt to
join.
What is the name of the sponsoring node? ppyrus1
>>> Cluster Name <<<
Each cluster has a name assigned to it. When adding a node to the
cluster, you must identify the name of the cluster you are attempting
to join. A sanity check is performed to verify that the "sponsoring"
node is a member of that cluster.
What is the name of the cluster you want to join? pyrus
Attempting to contact "ppyrus1" ... done
Cluster name "pyrus" is correct.
Press Enter to continue:
>>> Check <<<
This step allows you to run cluster check to verify that certain basic
hardware and software pre-configuration requirements have been met. If
cluster check detects potential problems with configuring this machine
as a cluster node, a report of violated checks is prepared and
available for display on the screen.
Do you want to run cluster check (yes/no) [yes]? no
>>> Autodiscovery of Cluster Transport <<<
If you are using Ethernet or Infiniband adapters as the cluster
transport adapters, autodiscovery is the best method for configuring
the cluster transport.
Do you want to use autodiscovery (yes/no) [yes]? yes
Probing .......................
The following connection was discovered:
ppyrus1:net1 switch1 ppyrus2:net1
Is it okay to configure this connection (yes/no) [yes]? yes
>>> Global Devices File System <<<
Each node in the cluster must have a local file system mounted on
/global/.devices/node@<nodeID> before it can successfully participate
as a cluster member. Since the "nodeID" is not assigned until
scinstall is run, scinstall will set this up for you.
You must supply the name of either an already-mounted file system or a
raw disk partition which scinstall can use to create the global
devices file system. This file system or partition should be at least
512 MB in size.
Alternatively, you can use a loopback file (lofi), with a new file
system, and mount it on /global/.devices/node@<nodeid>.
If an already-mounted file system is used, the file system must be
empty. If a raw disk partition is used, a new file system will be
created for you.
If the lofi method is used, scinstall creates a new 100 MB file system
from a lofi device by using the file /.globaldevices. The lofi method
is typically preferred, since it does not require the allocation of a
dedicated disk slice.
The default is to use lofi.
>>> Automatic Reboot <<<
Once scinstall has successfully initialized the Oracle Solaris Cluster
software for this machine, the machine must be rebooted. The reboot
will cause this machine to join the cluster for the first time.
Do you want scinstall to reboot for you (yes/no) [yes]? no
You will need to manually reboot this node in "cluster mode" after
scinstall successfully completes.
Press Enter to continue:
>>> Confirmation <<<
Your responses indicate the following options to scinstall:
scinstall -i \
-C pyrus \
-N ppyrus1 \
-G lofi \
-A trtype=dlpi,name=net1 \
-m endpoint=:net1,endpoint=switch1
Are these the options you want to use (yes/no) [yes]? yes
Do you want to continue with this configuration step (yes/no) [yes]? yes
Adding node "ppyrus2" to the cluster configuration ... done
Adding adapter "net1" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Copying the config from "ppyrus1" ... done
Copying the postconfig file from "ppyrus1" if it exists ... done
No postconfig file found on "ppyrus1", continuing
Setting the node ID for "ppyrus2" ... done (id=2)
Verifying the major number for the "did" driver with "ppyrus1" ... done
Initializing NTP configuration ... done
Updating nsswitch.conf ... done
Adding cluster node entries to /etc/inet/hosts ... done
Configuring IP multipathing groups ...done
Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Oracle Solaris Cluster.
Please do not re-enable network routing.
Updating file ("ntp.conf.cluster") on node ppyrus1 ... done
Updating file ("hosts") on node ppyrus1 ... done
Press Enter to continue:
*** New Cluster and Cluster Node Menu ***
Please select from any one of the following options:
1) Create a new cluster
2) Create just the first node of a new cluster on this machine
3) Add this machine as a node in an existing cluster
?) Help with menu options
q) Return to the Main Menu
Option: q
*** Main Menu ***
Please select from one of the following (*) options:
1) Create a new cluster or add a cluster node
* 2) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: q
Log file - /var/cluster/logs/install/scinstall.log.1931
Listing 5. Configuring the second cluster node
After the second node has rebooted successfully, configure a quorum device to complete the installation process. Although this can be performed from either node, Listing 6 shows it being done from ppyrus1.
ppyrus1 (root) # export PATH=$PATH:/usr/cluster/bin
ppyrus1 (root) # clsetup
>>> Initial Cluster Setup <<<
This program has detected that the cluster "installmode" attribute is
still enabled. As such, certain initial cluster setup steps will be
performed at this time. This includes adding any necessary quorum
devices, then resetting both the quorum vote counts and the
"installmode" property.
Please do not proceed if any additional nodes have yet to join the
cluster.
Is it okay to continue (yes/no) [yes]? yes
Do you want to add any quorum devices (yes/no) [yes]? yes
Following are supported Quorum Devices types in Oracle Solaris
Cluster. Please refer to Oracle Solaris Cluster documentation for
detailed information on these supported quorum device topologies.
What is the type of device you want to use?
1) Directly attached shared disk
2) Network Attached Storage (NAS) from Network Appliance
3) Quorum Server
q) Return to the quorum menu
Option: 3
>>> Add a Quorum Server Quorum Device <<<
A Quorum Server process runs on a machine outside Oracle Solaris
Cluster and serves the cluster as a quorum device. Before configuring
the quorum server as a quorum device into the cluster, you will need
to setup the quorum server machine and start the quorum server
process. For detailed information on setting up a quorum server, refer
to Oracle Solaris Cluster system administration guide.
You will need to specify a device name for the quorum server quorum
device, which must be unique across all quorum devices, the IP address
of the quorum server machine, or hostname if the machine is added into
/etc/hosts, and a port number on the quorum server machine used to
communicate with the cluster nodes. Please refer to the clquorum(1M)
man page and other Oracle Solaris Cluster documentation for details.
Is it okay to continue (yes/no) [yes]?
What name do you want to use for this quorum device? cheetah_c3_qs_9001
What is the IP address of the quorum server machine? 10.134.33.88
What is the port number on the quorum server machine? 9001
Is it okay to proceed with the update (yes/no) [yes]? yes
/usr/cluster/bin/clquorum add -t quorum_server -p qshost=10.134.33.88 -p port=9001 cheetah_c3_qs_9001
Command completed successfully.
Press Enter to continue:
Do you want to add another quorum device (yes/no) [yes]? no
Once the "installmode" property has been reset, this program will skip
"Initial Cluster Setup" each time it is run again in the future.
However, quorum devices can always be added to the cluster using the
regular menu options. Resetting this property fully activates quorum
settings and is necessary for the normal and safe operation of the
cluster.
Is it okay to reset "installmode" (yes/no) [yes]? yes
/usr/cluster/bin/clquorum reset
/usr/cluster/bin/claccess deny-all
Cluster initialization is complete.
Type ENTER to proceed to the main menu:
*** Main Menu ***
Please select from one of the following options:
1) Quorum
2) Resource groups
3) Data Services
4) Cluster interconnect
5) Device groups and volumes
6) Private hostnames
7) New nodes
8) Other cluster tasks
?) Help with menu options
q) Quit
Option: q
Listing 6. Configuring the quorum device
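The reason a two-node cluster requires a quorum device comes down to vote counting: each node contributes one vote, a quorum device for a two-node cluster contributes one more, and the cluster stays up only while a majority of the total votes is present. The arithmetic below is a simplified sketch of that model (the vote counts assumed here follow the standard Oracle Solaris Cluster behavior for a two-node cluster, but consult the product documentation for the full rules):

```python
# Simplified quorum-vote arithmetic for a two-node cluster with one
# quorum-server quorum device. Vote assignments are assumptions based
# on standard Oracle Solaris Cluster behavior, simplified here.

def majority(total_votes):
    """Smallest vote count that constitutes a majority."""
    return total_votes // 2 + 1

node_votes = 2            # one vote per cluster node
quorum_device_votes = 1   # the quorum server adds one vote
total = node_votes + quorum_device_votes

print(total, majority(total))  # 3 2
```

With three total votes and a majority of two, one surviving node plus the quorum server keeps the cluster running after the other node fails. Without the device, losing either node would leave a single vote out of two, below majority, and the cluster would halt, which is why clsetup insists on configuring a quorum device before resetting installmode.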
For more information about how to configure and install the Oracle Solaris Cluster 4.0 software, see the Oracle Solaris Cluster Software Installation Guide.

Revision 1.0, May 1, 2012