Server and Storage Administration
by Orgad Kimchi
Published January 2013 (updated February 2013)
This article starts with a brief overview of Hadoop and then walks through an example of setting up a Hadoop cluster with a NameNode, a secondary NameNode, and three DataNodes. As a prerequisite, you should have a basic understanding of Oracle Solaris Zones and network administration.
The Apache Hadoop software is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models.
The following are benefits of using Oracle Solaris Zones for a Hadoop cluster:

- Fast provisioning of new cluster members by cloning an existing zone
- Very high network throughput between the zones for HDFS block replication
- Optimized disk I/O by dedicating a ZFS pool (with built-in compression) to each DataNode
- Secure data at rest using ZFS encryption
To store data, Hadoop uses the Hadoop Distributed File System (HDFS), which provides high-throughput access to application data and is suitable for applications that have large data sets. For more information about Hadoop and HDFS, see http://hadoop.apache.org/.
The Hadoop cluster building blocks are as follows:

- NameNode: manages the HDFS namespace and, in this example, also runs the MapReduce JobTracker.
- Secondary NameNode: performs periodic checkpoints of the NameNode metadata.
- DataNodes: store the HDFS blocks and run the TaskTracker daemons that execute MapReduce tasks.
In the example presented in this article, all the Hadoop cluster building blocks will be installed using Oracle Solaris Zones, ZFS, and network virtualization technologies. Figure 1 shows the architecture:

Figure 1. Architecture
For this article, I used the "12 October, 2012 Release 1.0.4" release. Download the hadoop-1.0.4.tar.gz archive from the Hadoop releases page at http://hadoop.apache.org/releases.html and place it in /tmp on the global zone.
Create the /usr/local directory if it doesn't exist; later, we will share this directory with the zones:
root@global_zone:~# mkdir -p /usr/local
Copy the Hadoop tarball to /usr/local:
root@global_zone:~# cp /tmp/hadoop-1.0.4.tar.gz /usr/local
Unpack the tarball:
root@global_zone:~# cd /usr/local
root@global_zone:~# gzip -dc /usr/local/hadoop-1.0.4.tar.gz | tar -xvf -
Create the hadoop group and user, and set the user's password:
root@global_zone:~# groupadd hadoop
root@global_zone:~# useradd -g hadoop hadoop
root@global_zone:~# passwd hadoop
Create the user's home directory:
root@global_zone:~# mkdir -p /export/home/hadoop
root@global_zone:~# chown hadoop:hadoop /export/home/hadoop/
Rename the unpacked directory to /usr/local/hadoop and give the hadoop user ownership of the Hadoop files:
root@global_zone:~# mv /usr/local/hadoop-1.0.4 /usr/local/hadoop
root@global_zone:~# chown -R hadoop:hadoop /usr/local/hadoop*
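As a quick sanity check, confirm that the distribution unpacked where you expect and is owned by the hadoop user (output omitted here):
root@global_zone:~# ls -ld /usr/local/hadoop
root@global_zone:~# ls /usr/local/hadoop/conf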
Table 1 describes the files in the Hadoop conf directory that control the configuration of the cluster.
Table 1. Hadoop Configuration Files
| Filename | Description |
|---|---|
| hadoop-env.sh | Specifies environment variable settings used by Hadoop. |
| core-site.xml | Specifies parameters relevant to all Hadoop daemons and clients. |
| hdfs-site.xml | Specifies parameters used by the HDFS daemons and clients. |
| mapred-site.xml | Specifies parameters used by the MapReduce daemons and clients. |
| masters | Contains a list of machines that run the Secondary NameNode. |
| slaves | Contains a list of machine names that run the DataNode and TaskTracker pair of daemons. |
To learn more about how the Hadoop framework is controlled by these configuration files, see http://hadoop.apache.org/docs/r1.0.4/api/org/apache/hadoop/conf/Configuration.html.
Change to the conf directory:
root@global_zone:~# cd /usr/local/hadoop/conf
Edit the hadoop-env.sh file to uncomment and change the export lines so they look like the following:
export JAVA_HOME=/usr/java
export HADOOP_LOG_DIR=/var/log/hadoop
Edit the masters file so that it looks like the following:
sec-name-node
Edit the slaves file so that it looks like the following:
data-node1
data-node2
data-node3
Edit the core-site.xml file so it looks like the following:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://name-node</value>
</property>
</configuration>
Edit the hdfs-site.xml file so it looks like Listing 1:
<configuration>
<property>
<name>dfs.data.dir</name>
<value>/hdfs/data/</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/hdfs/name/</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>
Listing 1. hdfs-site.xml File
Edit the mapred-site.xml file so it looks like this:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>name-node:8021</value>
</property>
</configuration>
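Optionally, confirm that the edited XML files are well formed before continuing. This quick check assumes the xmllint utility (from the libxml2 package) is available in the global zone:
root@global_zone:~# xmllint --noout /usr/local/hadoop/conf/core-site.xml
root@global_zone:~# xmllint --noout /usr/local/hadoop/conf/hdfs-site.xml
root@global_zone:~# xmllint --noout /usr/local/hadoop/conf/mapred-site.xml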
We should ensure that the system clocks of the Hadoop zones are synchronized by using the Network Time Protocol (NTP). In this example, the global zone is configured as an NTP server.
Note: It is best to select an NTP server that can be a dedicated time synchronization source so that other services are not negatively affected if the machine is brought down for planned maintenance.
The following example shows how to configure an NTP server.
root@global_zone:~# grep -v ^# /etc/inet/ntp.conf
server 127.127.1.0 prefer
broadcast 224.0.1.1 ttl 4
enable auth monitor
driftfile /var/ntp/ntp.drift
statsdir /var/ntp/ntpstats/
filegen peerstats file peerstats type day enable
filegen loopstats file loopstats type day enable
filegen clockstats file clockstats type day enable
keys /etc/inet/ntp.keys
trustedkey 0
requestkey 0
controlkey 0
root@global_zone:~# touch /var/ntp/ntp.drift
Listing 2. NTP Server Configuration File
Enable the NTP server service and verify that it is online:
root@global_zone:~# svcadm enable ntp
root@global_zone:~# svcs -a | grep ntp
online         16:04:15 svc:/network/ntp:default
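To confirm that the NTP daemon is up and serving time, you can query its peers (the exact output will vary):
root@global_zone:~# ntpq -p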
Oracle Solaris Zones on the same system can benefit from very high network I/O throughput (up to four times faster) with very low latency compared to systems connected by, say, 1 Gb physical network links. For a Hadoop cluster, this means that the DataNodes can replicate the HDFS blocks much faster.
For more information about network virtualization benchmarks, see "How to Control Your Application's Network Bandwidth."
Create a series of virtual network interfaces (VNICs) for the different zones:
root@global_zone:~# dladm create-vnic -l net0 name_node1
root@global_zone:~# dladm create-vnic -l net0 secondary_name1
root@global_zone:~# dladm create-vnic -l net0 data_node1
root@global_zone:~# dladm create-vnic -l net0 data_node2
root@global_zone:~# dladm create-vnic -l net0 data_node3
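You can list the VNICs to confirm that they were created over net0:
root@global_zone:~# dladm show-vnic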
Create a ZFS file system to hold the zones:
root@global_zone:~# zfs create -o mountpoint=/zones rpool/zones
Create the name-node zone, as shown in Listing 3:
root@global_zone:~# zonecfg -z name-node
Use 'create' to begin configuring a new zone.
zonecfg:name-node> create
create: Using system default template 'SYSdefault'
zonecfg:name-node> set autoboot=true
zonecfg:name-node> set zonepath=/zones/name-node
zonecfg:name-node> add fs
zonecfg:name-node:fs> set dir=/usr/local/hadoop
zonecfg:name-node:fs> set special=/usr/local/hadoop
zonecfg:name-node:fs> set type=lofs
zonecfg:name-node:fs> set options=[ro,nodevices]
zonecfg:name-node:fs> end
zonecfg:name-node> add net
zonecfg:name-node:net> set physical=name_node1
zonecfg:name-node:net> end
zonecfg:name-node> verify
zonecfg:name-node> exit
Listing 3. Creating the name-node Zone
Create the sec-name-node zone, as shown in Listing 4:
root@global_zone:~# zonecfg -z sec-name-node
Use 'create' to begin configuring a new zone.
zonecfg:sec-name-node> create
create: Using system default template 'SYSdefault'
zonecfg:sec-name-node> set autoboot=true
zonecfg:sec-name-node> set zonepath=/zones/sec-name-node
zonecfg:sec-name-node> add fs
zonecfg:sec-name-node:fs> set dir=/usr/local/hadoop
zonecfg:sec-name-node:fs> set special=/usr/local/hadoop
zonecfg:sec-name-node:fs> set type=lofs
zonecfg:sec-name-node:fs> set options=[ro,nodevices]
zonecfg:sec-name-node:fs> end
zonecfg:sec-name-node> add net
zonecfg:sec-name-node:net> set physical=secondary_name1
zonecfg:sec-name-node:net> end
zonecfg:sec-name-node> verify
zonecfg:sec-name-node> exit
Listing 4. Creating the sec-name-node Zone
In this step, we can leverage the integration between Oracle Solaris Zones virtualization technology and the ZFS file system that is built into Oracle Solaris.
Hadoop best practice is to use a separate hard disk for each DataNode. Therefore, in this example, every DataNode zone has its own hard disk in order to provide better I/O distribution, as shown in Figure 2.

Figure 2. Separate Disk for Each DataNode
Table 2 shows a summary of the Hadoop zones configuration we will create:
Table 2. Zone Summary
| Function | Zone Name | ZFS Mount Point | VNIC Name | IP Address |
|---|---|---|---|---|
| NameNode | name-node | /zones/name-node | name_node1 | 192.168.1.1 |
| Secondary NameNode | sec-name-node | /zones/sec-name-node | secondary_name1 | 192.168.1.2 |
| DataNode | data-node1 | /zones/data-node1 | data_node1 | 192.168.1.3 |
| DataNode | data-node2 | /zones/data-node2 | data_node2 | 192.168.1.4 |
| DataNode | data-node3 | /zones/data-node3 | data_node3 | 192.168.1.5 |
Identify the available disks using the format command:
root@global_zone:~# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c7t0d0 <LSI-MR9261-8i-2.50-135.97GB>
/pci@0,0/pci8086,340a@3/pci1000,9263@0/sd@0,0
1. c7t1d0 <LSI-MR9261-8i-2.50-135.97GB>
/pci@0,0/pci8086,340a@3/pci1000,9263@0/sd@1,0
2. c7t2d0 <LSI-MR9261-8i-2.50-135.97GB>
/pci@0,0/pci8086,340a@3/pci1000,9263@0/sd@2,0
Create a ZFS pool with compression enabled for each DataNode zone, and then create a ZFS file system for each zone path:
root@global_zone:~# zpool create -O compression=on data-node1-pool c7t0d0
root@global_zone:~# zpool create -O compression=on data-node2-pool c7t1d0
root@global_zone:~# zpool create -O compression=on data-node3-pool c7t2d0
root@global_zone:~# zfs create -o mountpoint=/zones/data-node1 data-node1-pool/data-node1
root@global_zone:~# zfs create -o mountpoint=/zones/data-node2 data-node2-pool/data-node2
root@global_zone:~# zfs create -o mountpoint=/zones/data-node3 data-node3-pool/data-node3
Restrict the permissions of the zone paths, as required for zone roots:
root@global_zone:~# chmod 700 /zones/data-node1
root@global_zone:~# chmod 700 /zones/data-node2
root@global_zone:~# chmod 700 /zones/data-node3
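At this point, you can verify that the three pools and their file systems were created and are mounted under /zones:
root@global_zone:~# zpool list
root@global_zone:~# zfs list -r data-node1-pool data-node2-pool data-node3-pool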
Create the three DataNode zones, setting the zonecfg zonepath property to the ZFS file systems that you created, as shown in Listing 5.
root@global_zone:~# zonecfg -z data-node1
Use 'create' to begin configuring a new zone.
zonecfg:data-node1> create
create: Using system default template 'SYSdefault'
zonecfg:data-node1> set autoboot=true
zonecfg:data-node1> set zonepath=/zones/data-node1
zonecfg:data-node1> add fs
zonecfg:data-node1:fs> set dir=/usr/local/hadoop
zonecfg:data-node1:fs> set special=/usr/local/hadoop
zonecfg:data-node1:fs> set type=lofs
zonecfg:data-node1:fs> set options=[ro,nodevices]
zonecfg:data-node1:fs> end
zonecfg:data-node1> add net
zonecfg:data-node1:net> set physical=data_node1
zonecfg:data-node1:net> end
zonecfg:data-node1> verify
zonecfg:data-node1> commit
zonecfg:data-node1> exit
root@global_zone:~# zonecfg -z data-node2
Use 'create' to begin configuring a new zone.
zonecfg:data-node2> create
create: Using system default template 'SYSdefault'
zonecfg:data-node2> set autoboot=true
zonecfg:data-node2> set zonepath=/zones/data-node2
zonecfg:data-node2> add fs
zonecfg:data-node2:fs> set dir=/usr/local/hadoop
zonecfg:data-node2:fs> set special=/usr/local/hadoop
zonecfg:data-node2:fs> set type=lofs
zonecfg:data-node2:fs> set options=[ro,nodevices]
zonecfg:data-node2:fs> end
zonecfg:data-node2> add net
zonecfg:data-node2:net> set physical=data_node2
zonecfg:data-node2:net> end
zonecfg:data-node2> verify
zonecfg:data-node2> commit
zonecfg:data-node2> exit
root@global_zone:~# zonecfg -z data-node3
Use 'create' to begin configuring a new zone.
zonecfg:data-node3> create
create: Using system default template 'SYSdefault'
zonecfg:data-node3> set autoboot=true
zonecfg:data-node3> set zonepath=/zones/data-node3
zonecfg:data-node3> add fs
zonecfg:data-node3:fs> set dir=/usr/local/hadoop
zonecfg:data-node3:fs> set special=/usr/local/hadoop
zonecfg:data-node3:fs> set type=lofs
zonecfg:data-node3:fs> set options=[ro,nodevices]
zonecfg:data-node3:fs> end
zonecfg:data-node3> add net
zonecfg:data-node3:net> set physical=data_node3
zonecfg:data-node3:net> end
zonecfg:data-node3> verify
zonecfg:data-node3> commit
zonecfg:data-node3> exit
Listing 5. Setting the zonecfg zonepath Property
Now install the name-node zone; later, we will clone it in order to accelerate the creation of the other zones.
root@global_zone:~# zoneadm -z name-node install
The following ZFS file system(s) have been created:
rpool/zones/name-node
Progress being logged to /var/log/zones/zoneadm.20130106T134835Z.name-node.install
Image: Preparing at /zones/name-node/root.
Boot the name-node zone, check the status of the zones we've created, and log in to the zone's console, as shown in Listing 6:
root@global_zone:~# zoneadm -z name-node boot
root@global_zone:~# zoneadm list -cv
  ID NAME             STATUS     PATH                     BRAND    IP
   0 global           running    /                        solaris  shared
   1 name-node        running    /zones/name-node         solaris  excl
   - sec-name-node    configured /zones/sec-name-node     solaris  excl
   - data-node1       configured /zones/data-node1        solaris  excl
   - data-node2       configured /zones/data-node2        solaris  excl
   - data-node3       configured /zones/data-node3        solaris  excl
root@global_zone:~# zlogin -C name-node
Listing 6. Booting the name-node Zone
When the system configuration tool appears on the console, configure the name-node zone as follows: set the host name to name-node, and give the network interface name_node1 an IP address of 192.168.1.1/24. In this article, we will use /etc/hosts for name resolution, so we won't set up DNS for host name resolution.
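After the zone completes its initial configuration and you log in to it, you can confirm the address assignment on the VNIC (it should match Table 2):
root@name-node:~# ipadm show-addr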
Install Java in the name-node zone:
root@name-node:~# pkg install jdk-6
Verify the Java installation:
root@name-node:~# which java
/usr/bin/java
root@name-node:~# java -version
java version "1.6.0_35"
Java(TM) SE Runtime Environment (build 1.6.0_35-b10)
Java HotSpot(TM) Server VM (build 20.10-b01, mixed mode)
Create the hadoop group and user in the name-node zone, along with the user's home directory:
root@name-node:~# groupadd hadoop
root@name-node:~# useradd -g hadoop hadoop
root@name-node:~# passwd hadoop
root@name-node:~# mkdir -p /export/home/hadoop
root@name-node:~# chown hadoop:hadoop /export/home/hadoop
Create a directory for the Hadoop log files:
root@name-node:~# mkdir /var/log/hadoop
root@name-node:~# chown hadoop:hadoop /var/log/hadoop
Install the NTP package:
root@name-node:~# pkg install ntp
Configure the zone as an NTP client of the global zone:
root@name-node:~# grep -v ^# /etc/inet/ntp.conf
server global_zone prefer
slewalways yes
disable pll
Enable the NTP service in the zone:
root@name-node:~# svcadm enable ntp
Add the Hadoop cluster host entries to /etc/hosts:
root@name-node:~# cat /etc/hosts
::1 localhost
127.0.0.1 localhost loghost
192.168.1.1 name-node
192.168.1.2 sec-name-node
192.168.1.3 data-node1
192.168.1.4 data-node2
192.168.1.5 data-node3
Note: If you are using the global zone as an NTP server, you must also add its host name and IP address to /etc/hosts.
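If you did add the global zone's host name to /etc/hosts as the NTP server, you can quickly confirm that the zone can reach it and is synchronizing time (output will vary):
root@name-node:~# ping global_zone
root@name-node:~# ntpq -p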
Create an SSH key pair for the hadoop user in the name-node zone in order to enable password-less login to the Secondary NameNode and the DataNodes:
root@name-node # su - hadoop
hadoop@name-node $ ssh-keygen -t dsa -P "" -f ~/.ssh/id_dsa
hadoop@name-node $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Append the following environment variables to the hadoop user's $HOME/.profile:
hadoop@name-node $ cat $HOME/.profile
# Set JAVA_HOME
export JAVA_HOME=/usr/java
# Add Hadoop bin/ directory to PATH
export PATH=$PATH:/usr/local/hadoop/bin
Verify the Hadoop installation by checking its version:
hadoop@name-node $ hadoop version
Hadoop 1.0.4
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290
Compiled by hortonfo on Wed Oct 3 05:13:58 UTC 2012
From source with checksum fe2baea87c4c81a2c505767f3f9b71f4
Shut down the name-node zone and create the sec-name-node zone as a clone of name-node:
root@global_zone:~# zoneadm -z name-node shutdown
root@global_zone:~# zoneadm -z sec-name-node clone name-node
Boot the sec-name-node zone and log in to its console:
root@global_zone:~# zoneadm -z sec-name-node boot
root@global_zone:~# zlogin -C sec-name-node
Configure the sec-name-node zone with the host name sec-name-node, the network interface secondary_name1 with IP address 192.168.1.2 (see Table 2), and no name service (none). Then create data-node1, data-node2, and data-node3 in the same way, starting with data-node1:
root@global_zone:~# zoneadm -z data-node1 clone name-node
root@global_zone:~# zoneadm -z data-node1 boot
root@global_zone:~# zlogin -C data-node1
Configure the data-node1 zone with the host name data-node1, the network interface data_node1 with IP address 192.168.1.3, and no name service. Then create data-node2:
root@global_zone:~# zoneadm -z data-node2 clone name-node
root@global_zone:~# zoneadm -z data-node2 boot
root@global_zone:~# zlogin -C data-node2
Configure the data-node2 zone with the host name data-node2, the network interface data_node2 with IP address 192.168.1.4, and no name service. Then create data-node3:
root@global_zone:~# zoneadm -z data-node3 clone name-node
root@global_zone:~# zoneadm -z data-node3 boot
root@global_zone:~# zlogin -C data-node3
Configure the data-node3 zone with the host name data-node3, the network interface data_node3 with IP address 192.168.1.5, and no name service. Finally, boot the name-node zone again:
root@global_zone:~# zoneadm -z name-node boot
Verify that all the zones are now running:
root@global_zone:~# zoneadm list -cv
  ID NAME             STATUS     PATH                     BRAND    IP
   0 global           running    /                        solaris  shared
  10 sec-name-node    running    /zones/sec-name-node     solaris  excl
  12 data-node1       running    /zones/data-node1        solaris  excl
  14 data-node2       running    /zones/data-node2        solaris  excl
  16 data-node3       running    /zones/data-node3        solaris  excl
  17 name-node        running    /zones/name-node         solaris  excl
To verify that the hadoop user has SSH access without a password, do the following.
From the name-node zone, as the hadoop user, log in via SSH to name-node (that is, to itself):
hadoop@name-node $ ssh name-node
The authenticity of host 'name-node (192.168.1.1)' can't be established.
RSA key fingerprint is 04:93:a9:e0:b7:8c:d7:8b:51:b8:42:d7:9f:e1:80:ca.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'name-node,192.168.1.1' (RSA) to the list of known hosts.
Repeat this step for sec-name-node and the DataNodes. When you log in with SSH a second time, you shouldn't be prompted to add the host to the known hosts list. Then verify that all the Hadoop zones have the following host entries in /etc/hosts:
# cat /etc/hosts
::1 localhost
127.0.0.1 localhost loghost
192.168.1.1 name-node
192.168.1.2 sec-name-node
192.168.1.3 data-node1
192.168.1.4 data-node2
192.168.1.5 data-node3
Note: If you are using the global zone as an NTP server, you must also add its host name and IP address to /etc/hosts.
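As a final check of password-less access, you can loop over all the cluster nodes from the name-node zone; this is just a sketch that uses the host names from Table 2:
hadoop@name-node $ for host in name-node sec-name-node data-node1 data-node2 data-node3; do ssh $host hostname; done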
From the name-node zone, create the NameNode metadata directory and format the HDFS file system, as shown in Listing 7:
root@name-node:~# mkdir -p /hdfs/name
root@name-node:~# chown -R hadoop:hadoop /hdfs/
root@name-node:~# su - hadoop
hadoop@name-node:$ /usr/local/hadoop/bin/hadoop namenode -format
13/01/06 20:00:32 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = name-node/192.168.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
Listing 7. Formatting the HDFS File System
On each DataNode (data-node1, data-node2, and data-node3), create a Hadoop data directory to store the HDFS blocks:
root@data-node1:~# mkdir -p /hdfs/data
root@data-node1:~# chown -R hadoop:hadoop /hdfs/
root@data-node2:~# mkdir -p /hdfs/data
root@data-node2:~# chown -R hadoop:hadoop /hdfs/
root@data-node3:~# mkdir -p /hdfs/data
root@data-node3:~# chown -R hadoop:hadoop /hdfs/
Table 3 describes the startup scripts.
Table 3. Startup Scripts
| Filename | Description |
|---|---|
| start-dfs.sh | Starts the Hadoop DFS daemons, the NameNode, and the DataNodes. Use this before start-mapred.sh. |
| stop-dfs.sh | Stops the Hadoop DFS daemons. |
| start-mapred.sh | Starts the Hadoop MapReduce daemons, the JobTracker, and TaskTrackers. |
| stop-mapred.sh | Stops the Hadoop MapReduce daemons. |
From the name-node zone, start the Hadoop DFS daemons (the NameNode and the DataNodes) using the following command:
hadoop@name-node:$ start-dfs.sh
starting namenode, logging to /var/log/hadoop/hadoop--namenode-name-node.out
data-node2: starting datanode, logging to /var/log/hadoop/hadoop-hadoop-datanode-data-node2.out
data-node1: starting datanode, logging to /var/log/hadoop/hadoop-hadoop-datanode-data-node1.out
data-node3: starting datanode, logging to /var/log/hadoop/hadoop-hadoop-datanode-data-node3.out
sec-name-node: starting secondarynamenode, logging to /var/log/hadoop/hadoop-hadoop-secondarynamenode-sec-name-node.out
Then start the Hadoop MapReduce daemons (the JobTracker and TaskTrackers):
hadoop@name-node:$ start-mapred.sh
starting jobtracker, logging to /var/log/hadoop/hadoop--jobtracker-name-node.out
data-node1: starting tasktracker, logging to /var/log/hadoop/hadoop-hadoop-tasktracker-data-node1.out
data-node3: starting tasktracker, logging to /var/log/hadoop/hadoop-hadoop-tasktracker-data-node3.out
data-node2: starting tasktracker, logging to /var/log/hadoop/hadoop-hadoop-tasktracker-data-node2.out
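When you later need to shut the cluster down, reverse the order: stop the MapReduce daemons first and then the DFS daemons, using the scripts described in Table 3:
hadoop@name-node:$ stop-mapred.sh
hadoop@name-node:$ stop-dfs.sh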
Check the status of the HDFS cluster, as shown in Listing 8:
hadoop@name-node:$ hadoop dfsadmin -report
Configured Capacity: 171455269888 (159.68 GB)
Present Capacity: 169711053357 (158.06 GB)
DFS Remaining: 169711028736 (158.06 GB)
DFS Used: 24621 (24.04 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)
...
Listing 8. Checking the Cluster Status
You can find the same information on the NameNode Web status page at http://<namenode IP address>:50070/dfshealth.jsp.

Figure 3. Cluster Summary
MapReduce is a framework for processing parallelizable problems across huge data sets using a cluster of computers. For more information about MapReduce, see http://en.wikipedia.org/wiki/MapReduce.
We will use the WordCount example, which reads text files and counts how often words occur. The input is a set of text files, and the output is another set of text files, each line of which contains a word and the number of times it occurred, separated by a tab. For more information about WordCount, see http://wiki.apache.org/hadoop/WordCount.
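The next step assumes the input text is already in /tmp/data in the name-node zone. As a sketch, you could fetch it with wget if that utility is installed; the URL below is an assumption based on the Project Gutenberg ebook number matching the 20417-8.txt file name shown later:
hadoop@name-node:$ mkdir -p /tmp/data
hadoop@name-node:$ cd /tmp/data
hadoop@name-node:$ wget http://www.gutenberg.org/files/20417/20417-8.txt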
Copy the input file into HDFS; in this example, the input in /tmp/data is The Outline of Science, Vol. 1 (of 4) by J. Arthur Thomson:
hadoop@name-node:$ hadoop dfs -copyFromLocal /tmp/data/ /hdfs/data
Verify that the file is in HDFS:
hadoop@name-node:$ hadoop dfs -ls /hdfs/data
Found 1 items
-rw-r--r--   3 hadoop supergroup     661664 2013-01-07 19:45 /hdfs/data/20417-8.txt
Run the WordCount MapReduce job, as shown in Listing 9:
hadoop@name-node:$ hadoop jar /usr/local/hadoop/hadoop-examples-1.0.4.jar wordcount /hdfs/data /hdfs/data/data-output
13/01/07 15:20:21 INFO input.FileInputFormat: Total input paths to process : 1
13/01/07 15:20:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/01/07 15:20:21 WARN snappy.LoadSnappy: Snappy native library not loaded
13/01/07 15:20:22 INFO mapred.JobClient: Running job: job_201301071307_0006
13/01/07 15:20:23 INFO mapred.JobClient:  map 0% reduce 0%
13/01/07 15:20:38 INFO mapred.JobClient:  map 100% reduce 0%
13/01/07 15:20:50 INFO mapred.JobClient:  map 100% reduce 100%
13/01/07 15:20:55 INFO mapred.JobClient: Job complete: job_201301071307_0006
13/01/07 15:20:55 INFO mapred.JobClient: Counters: 26
13/01/07 15:20:55 INFO mapred.JobClient:   Job Counters
Listing 9. Starting the MapReduce Job
After the job finishes, list the output directory:
hadoop@name-node:$ hadoop dfs -ls /hdfs/data/data-output
Found 3 items
-rw-r--r--   3 hadoop supergroup          0 2013-01-07 15:20 /hdfs/data/data-output/_SUCCESS
drwxr-xr-x   - hadoop supergroup          0 2013-01-07 15:20 /hdfs/data/data-output/_logs
-rw-r--r--   3 hadoop supergroup     196288 2013-01-07 15:20 /hdfs/data/data-output/part-r-00000
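You can also peek at the results directly in HDFS without copying them out of the cluster:
hadoop@name-node:$ hadoop dfs -cat /hdfs/data/data-output/part-r-00000 | head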
Create an encrypted ZFS file system to protect the job output at rest:
root@name-node:~# zfs create -o encryption=on rpool/export/output
Enter passphrase for 'rpool/export/output':
Enter again:
Give the hadoop user ownership of the new file system:
root@name-node:~# chown hadoop:hadoop /export/output/
As the hadoop user, merge the job output from HDFS into a single local file:
root@name-node:~# su - hadoop
Oracle Corporation      SunOS 5.11      11.1    September 2012
hadoop@name-node:$ hadoop dfs -getmerge /hdfs/data/data-output /export/output/
Examine the first lines of the merged output:
hadoop@name-node:$ head /export/output/data-output
"A          2
"Alpha      1
"Alpha,"    1
"An         2
"And        1
"BOILING"   2
"Batesian"  1
"Beta       2
Unload the encryption key for the file system:
root@name-node:~# zfs key -u rpool/export/output
If the command is successful, the data set is not accessible and it is unmounted.
If you want to mount this ZFS file system, you need to provide the passphrase:
root@name-node:~# zfs mount rpool/export/output
Enter passphrase for 'rpool/export/output':
By using a passphrase, you ensure that only those who know the passphrase can observe the output file. For more information about ZFS encryption, see "How to Manage ZFS Data Encryption."
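To confirm that encryption is enabled on the output file system and to see whether its key is currently loaded, you can check the relevant ZFS properties:
root@name-node:~# zfs get encryption,keystatus rpool/export/output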
In this article, we saw how we can leverage Oracle Solaris Zones, ZFS, and network virtualization technologies to build a multinode Hadoop cluster.
Orgad Kimchi is a principal software engineer on the ISV Engineering team at Oracle (formerly Sun Microsystems). For the past five years, he has specialized in virtualization and cloud computing technologies.
Revision 1.2, 02/01/2013