Published April 2014
For the purposes of development and testing, an Oracle Cluster File System Version 2 (OCFS2) file system can be set up in a virtual environment using Oracle VM VirtualBox on desktop-class hardware. You can build the nodes the same way you would on a production system after making a minor command-line interface (CLI) modification to the virtual disk image (VDI) files. A three-node OCFS2 cluster is the most useful configuration for development and testing, because it demonstrates the interaction between the nodes.
This article is designed to assist with the setup of OCFS2 within Oracle VM VirtualBox and is not a complete reference on how to set up the OCFS2 file system. Please see the "See Also" section at the end of this article for additional information on the specifics of setting up OCFS2.
OCFS2 is a high-performance, high-availability, POSIX-compliant, general-purpose file system for Linux. It is a versatile clustered file system that can be used with applications that are cluster-aware as well as with those that are not. OCFS2 has been fully integrated into the mainline Linux kernel since 2006 and is available in most Linux distributions. Please see the "See Also" section of this article for more information on Oracle's OCFS2 project.
When testing OCFS2 within Oracle VM VirtualBox it's important to understand some of the strengths and weaknesses of testing in this manner. One of the major strengths is the speed and ease of setting up the testing environment. Testing failover and proof-of-concept for application suitability also works well in a virtualized environment.
However, true redundancy is not available because the nodes are running on a single machine. Also, it's important to keep in mind that a hardware failure on the host system will bring down the testing environment. Performance testing is also not practical due to the lack of physical hardware and networking in the virtual environment.
To set up OCFS2 within Oracle VM VirtualBox, you will need to create three Oracle Linux virtual machines (VMs) within Oracle VM VirtualBox. You will need an ISO image of the version of Oracle Linux you wish to install, which can be obtained from Oracle E-Delivery. Once you have downloaded the ISO image, you are ready to start creating VMs.
To use shared storage for OCFS2, first create a single VM with an additional VDI that will later be modified via the command line to be shareable. Once this first VM is created and the VDI has been modified, you can build the remaining two VMs and attach the shared storage to them. The instructions below guide you through the process.
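If you prefer to script these steps rather than use the graphical wizard shown in the figures below, the following sketch uses the VBoxManage CLI to create the first VM along with the additional fixed-size VDI that will later be shared. The VM name, OS type, memory size, disk sizes, and file names here are illustrative assumptions rather than values from this article; adjust them to match your environment.

# Create and register the first VM (names and sizes are examples only)
VBoxManage createvm --name OCFS2-1 --ostype Oracle_64 --register
VBoxManage modifyvm OCFS2-1 --memory 2048 --nic1 nat --nic2 intnet

# Create the OS disk plus the additional VDI that will later be made shareable
# (shareable disks must be fixed-size, so --variant Fixed is used for the LUN)
VBoxManage createhd --filename OCFS2-1.vdi --size 30720
VBoxManage createhd --filename LUN3.vdi --size 12288 --variant Fixed

# Attach both disks to a SATA controller
VBoxManage storagectl OCFS2-1 --name "SATA" --add sata
VBoxManage storageattach OCFS2-1 --storagectl "SATA" --port 0 --device 0 --type hdd --medium OCFS2-1.vdi
VBoxManage storageattach OCFS2-1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium LUN3.vdi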
Figure 1. Creating a VM in Oracle VM VirtualBox.
Figure 2. Enabling a dedicated network interface for OCFS2.
Figure 3. Selecting the ISO image for booting.
Figure 4. Creating a separate VDI file.
Once you have created the VDI file, you need to identify its universally unique identifier (UUID) for use with the VBoxManage utility.
On a Windows machine, select Start > Run, type cmd, and press Enter. This brings you to a command prompt where you can access the VBoxManage utility. Then type the following commands to identify the UUID of the VDI file.
Note: The VBoxManage list hdds command also works on Linux and Mac systems. However, on those systems it is not necessary to change directories as shown in the following Windows example; otherwise, the VBoxManage utility works the same on Linux and Mac.
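On a Linux or Mac system, assuming the VBoxManage binary is already on your PATH, you can simply run the listing command from any directory:

VBoxManage list hdds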
cd C:\Program Files\Oracle\VirtualBox
VBoxManage list hdds

UUID:           702d44e7-c234-421f-880b-335da09d8414
Parent UUID:    base
Format:         VDI
Location:       C:\Users\testenv\VirtualBox VMs\OCFS2-1\LUN3.vdi
State:          created
Type:           shareable
Once you have identified the UUID of the shared VDI, type the following command to make the disk shareable:

VBoxManage modifyhd 702d44e7-c234-421f-880b-335da09d8414 --type shareable
After you have modified the VDI file with the VBoxManage utility, create two additional VMs by repeating the same steps above, except that you do not need to create the shared VDI file again, nor do you need to modify the shared storage via the command line. When creating these two VMs, add the shared storage by selecting the Add Hard Disk option as before. Then select Choose Existing Disk and select the shared VDI file you created earlier.
Figure 5. Creating the other VMs.
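If you are scripting the setup, a rough CLI equivalent of attaching the existing shared disk is shown below. The VM and controller names are assumptions carried over from the earlier sketch; the --mtype option is redundant here because the disk has already been marked shareable, but it makes the intent explicit.

# Attach the existing shared VDI to the second VM; repeat for the third VM
VBoxManage storageattach OCFS2-2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium LUN3.vdi --mtype shareable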
At this point you should have three VMs within Oracle VM VirtualBox, all with a private network adapter configured and all with access to the shared storage.
Once the operating systems are installed on the nodes, there are a few configuration changes that need to be performed on each node to allow them to use the private network.
In this example, eth1 is being used as the private network interface, with the following IP addresses assigned to the nodes:

Node 1: 10.0.0.1
Node 2: 10.0.0.2
Node 3: 10.0.0.3
Edit the file /etc/sysconfig/network-scripts/ifcfg-eth1 so that the first node can communicate on the private network. Here is example content to put in the file:
DEVICE="eth1" BOOTPROTO="static" IPADDR="10.0.0.1" NM_CONTROLLED="no" ONBOOT="yes" TYPE="Ethernet"
Install the ocfs2-tools.x86_64 package on each node by typing the following command:
yum install ocfs2-tools.x86_64
In addition, you might want to update the OS at this point as well. You can do this by typing the following command:

yum update
In this example, a cluster named ocfs2demo will be used, and the cluster will have three nodes configured. We will use the o2cb cluster registration utility to add the cluster and the nodes, as well as to register the cluster and start the heartbeat.
On each of the nodes, run the following commands:
o2cb add-cluster ocfs2demo
o2cb add-node --ip 10.0.0.1 --port 7777 --number 1 ocfs2demo ocfs2-1
o2cb add-node --ip 10.0.0.2 --port 7777 --number 2 ocfs2demo ocfs2-2
o2cb add-node --ip 10.0.0.3 --port 7777 --number 3 ocfs2demo ocfs2-3
o2cb register-cluster ocfs2demo
o2cb start-heartbeat ocfs2demo
Next, configure the o2cb driver by using its interactive configuration utility, which has several default settings. Begin by typing the following command. You can accept the default settings and, when asked to provide the "Cluster to start on boot" information, enter ocfs2demo:
[root@ocfs2-1 ~]# service o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on boot.
The current values will be shown in brackets ('').
Hitting <ENTER> without typing an answer will keep that current value.
Ctrl-C will abort.

Load O2CB driver on boot (y/n) [y]:
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2demo]:
Specify heartbeat dead threshold (>=7) :
Specify network idle timeout in ms (>=5000) :
Specify network keepalive delay in ms (>=1000) :
Specify network reconnect delay in ms (>=2000) :
Writing O2CB configuration: OK
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading stack plugin "o2cb": OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Setting cluster stack "o2cb": OK
Registering O2CB cluster "ocfs2demo": OK
Setting O2CB cluster timeouts : OK
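After the configuration completes, a quick way to confirm that the O2CB stack is loaded and the ocfs2demo cluster is online is to check the service status on each node (the exact output varies by version):

service o2cb status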
After registering each node, the /etc/ocfs2/cluster.conf file will look similar to the following when the configuration is complete:
node:
        name = ocfs2-1
        cluster = ocfs2demo
        number = 1
        ip_address = 10.0.0.1
        ip_port = 7777

node:
        name = ocfs2-2
        cluster = ocfs2demo
        number = 2
        ip_address = 10.0.0.2
        ip_port = 7777

node:
        name = ocfs2-3
        cluster = ocfs2demo
        number = 3
        ip_address = 10.0.0.3
        ip_port = 7777

cluster:
        name = ocfs2demo
        heartbeat_mode = local
        node_count = 3
Note: Even though we are defining a three-node cluster in this example, the following command specifies that four node slots be added, which reserves one slot for future expansion. Node slots can be increased at any time, but they can't be removed once they are created. Node slots also consume disk space. For more information about choosing the appropriate number of node slots, see the "OCFS2 Best Practices Guide" (a link is provided in the "See Also" section).
To create the OCFS2 file system on the shared disk, run the following command on one of the nodes. This example assumes the shared VDI appears as /dev/sdb on each node and has been partitioned (for example, with fdisk) to create /dev/sdb1:

mkfs.ocfs2 -L ocfs2demo --cluster-name=ocfs2demo --fs-feature-level=max-features -N 4 /dev/sdb1
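Assuming the shared device is /dev/sdb1 on every node, you can confirm that each node detects the newly formatted volume with the mounted.ocfs2 utility from the ocfs2-tools package:

# Quick detect: lists the device, cluster stack, UUID, and label of the OCFS2 volume
mounted.ocfs2 -d /dev/sdb1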
So that the file system is mounted automatically at boot time, create the /ocfs2demo mount point (if it does not already exist) and add the following line to the /etc/fstab file on each node in the cluster:
/dev/sdb1 /ocfs2demo ocfs2 _netdev 0 0
You can then mount the file system defined in the /etc/fstab file by running the following command on each node:

mount -a
At this point, if the OCFS2 configuration is correct, the file system will mount and you will see the following output if you run df -h on each node in the cluster.
[user@ocfs2-1 ~]$ df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_ocfs21-lv_root   26G  2.5G   22G  11% /
tmpfs                         1004M     0 1004M   0% /dev/shm
/dev/sda1                      485M   98M  362M  22% /boot
/dev/sdb1                       12G  1.3G   11G  11% /ocfs2demo
You can also create a test file on the shared storage, and you will be able to see this test file on each node in the cluster.
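For example, a minimal check, assuming the file system is mounted at /ocfs2demo on all three nodes, is to write a file on one node and read it from another:

# On node 1: write a test file to the shared file system
echo "hello from ocfs2-1" > /ocfs2demo/testfile.txt

# On node 2 or node 3: the same file is immediately visible
cat /ocfs2demo/testfile.txt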
I hope you found this article useful.
Robert Chase is a member of the Oracle Linux product management team. He has been involved with Linux and open source software since 1996. He has worked with systems as small as embedded devices and with large supercomputer-class hardware.
Revision 1.0, 04/14/2014