Server and Storage Administration
by Jeff Victor
Published January 2013 (reprinted from Jeff Victor's blog)
Oracle Solaris 11.1 has several new features; you can find a detailed list at oracle.com.
One of the significant new features, and the most significant new feature related to Oracle Solaris Zones, is casually called "Zones on Shared Storage" or simply ZOSS (rhymes with "moss"). ZOSS offers much more flexibility because you can store Oracle Solaris Zones on shared storage (surprise!) so that you can perform quick and easy migration of a zone from one system to another. This article describes and demonstrates the use of ZOSS.
ZOSS provides complete support for an Oracle Solaris Zone that is stored on "shared storage." In this case, "shared storage" refers to Fibre Channel (FC) or iSCSI devices, although there is one lone exception that I will demonstrate soon. The primary intent is to enable you to store a zone on FC or iSCSI storage so that it can be migrated from one host computer to another much more easily and safely than in the past.
In this article, I wanted to make it easy for you to try this yourself. I couldn't assume that you have a SAN available, which is a good thing, because neither do I! What could I use instead? [There he goes, foreshadowing again... -Ed.] Developing this article reinforced the lesson that the solution to every lab problem is Oracle VM VirtualBox, which helps here in a couple of important ways. It offers the ability to easily install multiple copies of Oracle Solaris as guests on top of any popular system (for example, Microsoft Windows, Mac OS, Oracle Solaris, Oracle Linux, and other versions of Linux). It also offers the ability to create a separate virtual disk image (VDI) that appears as a local hard disk to a guest. This virtual disk can be moved very easily from one guest to another. In other words, you can follow the steps below on a laptop or a larger x86 system.
Please note that the ability to use ZOSS to store a zone on a local disk is very useful for a lab environment, but not so useful for production. I do not suggest regularly moving disk drives among computers.
In the method I describe below, the virtual hard disk will contain the zone that will be migrated among the (virtual) hosts. In production, you would use FC or iSCSI LUNs instead. The zonecfg(1M) man page details the syntax for each of the three types of devices.
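For reference, here is a sketch of the three URI forms that the storage property accepts. The first device name is the one used in this article; the NAA identifiers and the IP address are hypothetical placeholders, so consult the man page for the authoritative syntax:

A local disk or FC device, by device name:
    add storage dev:dsk/c7t2d0
An FC LUN, by logical unit name:
    add storage lu:luname.naa.600144f03d70c80000004ea57e9d0001
An iSCSI LUN, by target host and logical unit name:
    add storage iscsi://192.168.1.50/luname.naa.600144f03d70c80000004ea57e9d0002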
Why is the migration of virtual servers important? Common reasons include balancing workloads across systems, evacuating a system for hardware maintenance or replacement, and moving a workload to a system with more resources.
For ZOSS, the important new concept is named rootzpool. You can read about it in the zonecfg(1M) man page, but here's the short version: it's the backing store (hard disks or LUNs) that will be used to make a ZFS zpool that will hold the zone. This zpool is created automatically when the zone is installed, is exported from the global zone when the zone is detached, and is imported on the new host when the zone is attached, as the listings below show.
Method Overview

Here is a brief overview of the steps to create a zone on shared storage and migrate it. The next section shows the detailed commands and output.

1. Label the shared disk so that it is ready for use.
2. On the first system, configure the zone, specifying the shared device in a rootzpool resource, and then install and boot the zone.
3. When you are ready to migrate the zone, shut it down and detach it.
4. Configure the zone on the destination system. The safest method is to copy the output of zonecfg export to the other system to be used as input to zonecfg; this method reduces the chances of pilot error. A sketch of those commands follows this list. (It is not necessary to configure the zone on both systems before creating it. You can configure this zone in multiple places, whenever you want, and migrate it to one of those places at any time, as long as those systems all have access to the shared storage.)
5. Attach and boot the zone on the destination system. The zone can then be used normally and even migrated back or to a different system.
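For example, a minimal sketch of that export-and-import method looks like this (the file path is hypothetical; any location visible to both systems will do):

root@sysA:~# zonecfg -z zone1 export -f /net/fileserver/zones/zone1.cfg
root@sysB:~# zonecfg -z zone1 -f /net/fileserver/zones/zone1.cfg

Remember that if the destination system sees the shared device under a different name, you must still update the storage property of the rootzpool resource on that system, as the example in this article does.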
The rest of this article shows the commands and output. The two host names are sysA and sysB.
Note: Each Oracle Solaris guest might use a different device name for the VDI that the guests share. I used the device names shown below, but you must discover the device names after booting each guest. In a production environment, you would also discover the device name first and then configure the zone with that name. Fortunately, you can use the zpool import or format command to discover the device on the "new" host for the zone.
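For example, once the zone's zpool exists and has been exported (which happens automatically during detach, as shown later), running zpool import with no arguments on the new host lists the pools that are available for import along with their local device names. The output below is only illustrative; the pool ID is a made-up placeholder:

root@sysB:~# zpool import
  pool: zone1_rpool
    id: 4276462287249949236
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        zone1_rpool  ONLINE
          c7t3d0     ONLINE

You do not need to import the pool yourself; as Step 11 shows, zoneadm attach does that automatically.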
The first steps create the Oracle VM VirtualBox guests and the shared disk drive. I describe the steps here without demonstrating them.
In short: install Oracle VM VirtualBox on your host system, create two guests named sysA and sysB, and install Oracle Solaris 11.1 into each of them. (If you want a desktop environment in a guest, pkg install solaris-desktop and take a break while it installs those important things.) Then create a virtual disk and attach it to both guests; a sketch of the VirtualBox commands is shown below.
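Although this article doesn't demonstrate those setup steps, here is one way to create and attach the shared disk with VBoxManage, assuming each guest has a SATA controller named "SATA" with port 2 free; the file name and the 2 GB size are also assumptions:

VBoxManage createhd --filename zoss-shared.vdi --size 2048 --variant Fixed
VBoxManage modifyhd zoss-shared.vdi --type shareable
VBoxManage storageattach sysA --storagectl "SATA" --port 2 --device 0 --type hdd --medium zoss-shared.vdi
VBoxManage storageattach sysB --storagectl "SATA" --port 2 --device 0 --type hdd --medium zoss-shared.vdi

(VirtualBox requires a fixed-size image for shareable disks, which is why --variant Fixed appears above.)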
In the example shown below, I make these assumptions:

- The zone is created on sysA.
- The zone is then migrated to sysB.
- On sysA, the shared disk is named /dev/dsk/c7t2d0.
- On sysB, the shared disk is named /dev/dsk/c7t3d0.

Step 1) Determine the name of the disk that will move back and forth between the systems, as shown in Listing 1.
root@sysA:~# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c7t0d0
/pci@0,0/pci8086,2829@d/disk@0,0
1. c7t2d0
/pci@0,0/pci8086,2829@d/disk@2,0
Specify disk (enter its number): ^D
Listing 1. Determining the Name of the Disk
Step 2) Partition and label the disk, as shown in Listing 2. The magic needed to write an EFI label is not overly complicated.
root@sysA:~# format -e c7t2d0
selecting c7t2d0
[disk formatted]

FORMAT MENU:
...
format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the
partition table. n

SELECT ONE OF THE FOLLOWING:
...
Enter Selection: 1
...
G=EFI_SYS    0=Exit? f
SELECT ONE...
...
6
format> label
...
Specify Label type[1]: 1
Ready to label disk, continue? y
format> quit
root@sysA:~# ls /dev/dsk/c7t2d0
/dev/dsk/c7t2d0
Listing 2. Partitioning and Labeling the Disk
Step 3) Configure zone1 on sysA, as shown in Listing 3.
root@sysA:~# zonecfg -z zone1
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
create: Using system default template 'SYSdefault'
zonecfg:zone1> set zonename=zone1
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> add rootzpool
zonecfg:zone1:rootzpool> add storage dev:dsk/c7t2d0
zonecfg:zone1:rootzpool> end
zonecfg:zone1> exit
root@sysA:~# zonecfg -z zone1 info
zonename: zone1
zonepath: /zones/zone1
brand: solaris
autoboot: false
bootargs:
file-mac-profile:
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
hostid:
fs-allowed:
anet:
...
rootzpool:
storage: dev:dsk/c7t2d0
Listing 3. Configuring zone1 on sysA
Step 4) Install the zone, as shown in Listing 4. This step takes the most time, but you can wander off for a snack or a few laps around the gym—or both! (Just not at the same time...)
root@sysA:~# zoneadm -z zone1 install
Created zone zpool: zone1_rpool
Progress being logged to /var/log/zones/zoneadm.20121022T163634Z.zone1.install
Image: Preparing at /zones/zone1/root.
AI Manifest: /tmp/manifest.xml.RXaycg
SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
Zonename: zone1
Installation: Starting ...
Creating IPS image
Startup linked: 1/1 done
Installing packages from:
solaris
origin: http://pkg.us.oracle.com/support/
DOWNLOAD PKGS FILES XFER (MB) SPEED
Completed 183/183 33556/33556 222.2/222.2 2.8M/s
PHASE ITEMS
Installing new actions 46825/46825
Updating package state database Done
Updating image state Done
Creating fast lookup database Done
Installation: Succeeded
Note: Man pages can be obtained by installing pkg:/system/manual
done.
Done: Installation completed in 1696.847 seconds.
Next Steps: Boot the zone, then log into the zone console (zlogin -C)
to complete the configuration process.
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T163634Z.zone1.install
Listing 4. Installing the Zone
Step 5) Boot the zone:
root@sysA:~# zoneadm -z zone1 boot
Step 6) Log in to the zone's console to complete the specification of system information:
root@sysA:~# zlogin -C zone1
Answer the usual questions and wait for a login prompt. Then you can end the console session with the usual "~." incantation.
Step 7) Shut down the zone so it can be "moved":
root@sysA:~# zoneadm -z zone1 shutdown
Step 8) Detach the zone, as shown in Listing 5, so that the original global zone can't use it.
root@sysA:~# zoneadm list -cv
  ID NAME      STATUS      PATH            BRAND     IP
   0 global    running     /               solaris   shared
   - zone1     installed   /zones/zone1    solaris   excl
root@sysA:~# zpool list
NAME          SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool        17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
zone1_rpool  1.98G   484M  1.51G  23%  1.00x  ONLINE  -
root@sysA:~# zoneadm -z zone1 detach
Exported zone zpool: zone1_rpool
Listing 5. Detaching the Zone
Step 9) Review the result and shut down sysA, as shown in Listing 6, so that sysB can use the shared disk.
root@sysA:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
root@sysA:~# zoneadm list -cv
  ID NAME      STATUS      PATH            BRAND     IP
   0 global    running     /               solaris   shared
   - zone1     configured  /zones/zone1    solaris   excl
root@sysA:~# init 0
Listing 6. Shutting Down sysA
Step 10) Now boot sysB and configure a zone with the parameters shown above in Step 3. (Again, the safest method is to use zonecfg ... export on sysA, as described in the "Method Overview" section above.) The one difference is the name of the rootzpool storage device, which was shown in the list of assumptions, and which you must determine by booting sysB and using the format or zpool import command.
When that is done, you should see the output shown in Listing 7. (I used the same zone name—zone1—in this example, but you can choose any valid zone name you want.)
root@sysB:~# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
- zone1 configured /zones/zone1 solaris excl
root@sysB:~# zonecfg -z zone1 info
zonename: zone1
zonepath: /zones/zone1
brand: solaris
autoboot: false
bootargs:
file-mac-profile:
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
hostid:
fs-allowed:
anet:
linkname: net0
...
rootzpool:
storage: dev:dsk/c7t3d0
Listing 7. Output After Booting sysB and Configuring a Zone
Step 11) Attach the zone, as shown in Listing 8, which automatically imports the zpool.
root@sysB:~# zoneadm -z zone1 attach
Imported zone zpool: zone1_rpool
Progress being logged to /var/log/zones/zoneadm.20121022T184034Z.zone1.attach
Installing: Using existing zone boot environment
Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris
Cache: Using /var/pkg/publisher.
Updating non-global zone: Linking to image /.
Processing linked: 1/1 done
Updating non-global zone: Auditing packages.
No updates necessary for this image.
Updating non-global zone: Zone updated.
Result: Attach Succeeded.
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T184034Z.zone1.attach
root@sysB:~# zoneadm -z zone1 boot
root@sysB:~# zlogin zone1
[Connected to zone 'zone1' pts/2]
Oracle Corporation SunOS 5.11 11.1 September 2012
Listing 8. Attaching the Zone
Step 12) Now let's migrate the zone back to sysA. As shown in Listing 9, create a file in zone1 so that we can verify it still exists after the migration, and then begin migrating the zone back.
root@zone1:~# ls /opt
root@zone1:~# touch /opt/fileA
root@zone1:~# ls -l /opt/fileA
-rw-r--r--   1 root     root           0 Oct 22 14:47 /opt/fileA
root@zone1:~# exit
logout

[Connection to zone 'zone1' pts/2 closed]
root@sysB:~# zoneadm -z zone1 shutdown
root@sysB:~# zoneadm -z zone1 detach
Exported zone zpool: zone1_rpool
root@sysB:~# init 0
Listing 9. Creating a File in zone1
Step 13) Back on sysA, check the status, as shown in Listing 10.
root@sysA:~# zoneadm list -cv
  ID NAME      STATUS      PATH            BRAND     IP
   0 global    running     /               solaris   shared
   - zone1     configured  /zones/zone1    solaris   excl
root@sysA:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
Listing 10. Checking the Status of sysA
Step 14) Reattach the zone back to sysA, as shown in Listing 11.
root@sysA:~# zoneadm -z zone1 attach
Imported zone zpool: zone1_rpool
Progress being logged to /var/log/zones/zoneadm.20121022T190441Z.zone1.attach
Installing: Using existing zone boot environment
Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris
Cache: Using /var/pkg/publisher.
Updating non-global zone: Linking to image /.
Processing linked: 1/1 done
Updating non-global zone: Auditing packages.
No updates necessary for this image.
Updating non-global zone: Zone updated.
Result: Attach Succeeded.
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T190441Z.zone1.attach
root@sysA:~# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE -
zone1_rpool 1.98G 491M 1.51G 24% 1.00x ONLINE -
root@sysA:~# zoneadm -z zone1 boot
root@sysA:~# zlogin zone1
[Connected to zone 'zone1' pts/2]
Oracle Corporation SunOS 5.11 11.1 September 2012
root@zone1:~# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 1.98G 538M 1.46G 26% 1.00x ONLINE -
Listing 11. Reattaching the Zone
Step 15) Check for the file created on sysB earlier.
root@zone1:~# ls -l /opt
total 1
-rw-r--r--   1 root     root           0 Oct 22 14:47 fileA
Here is one of the fun things you can try next: configure a zone with two storage devices in its rootzpool resource. zoneadm uses those two disks to create a mirrored pool. (Three disks will result in a three-way mirror, and so on.)
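A minimal sketch, using the same zonecfg syntax demonstrated in Listing 3 and two additional shared disks with hypothetical device names:

root@sysA:~# zonecfg -z zone2
Use 'create' to begin configuring a new zone.
zonecfg:zone2> create
create: Using system default template 'SYSdefault'
zonecfg:zone2> set zonepath=/zones/zone2
zonecfg:zone2> add rootzpool
zonecfg:zone2:rootzpool> add storage dev:dsk/c7t4d0
zonecfg:zone2:rootzpool> add storage dev:dsk/c7t5d0
zonecfg:zone2:rootzpool> end
zonecfg:zone2> exit

When you install zone2, zoneadm should create its zpool as a two-way mirror across those two devices.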
I hope you have seen the ease with which you can now move Oracle Solaris Zones from one system to another.
See also: the zonecfg(1M) man page.

About the Author

Jeff Victor is a Principal Sales Consultant at Oracle Corporation and was the principal author of the book Solaris 10 System Virtualization Essentials. His expertise in operating systems, virtualization, and resource management, based on almost 30 years of UNIX experience, is regularly requested by major corporations. He created the Solaris Zones FAQ and the zonestat open source program for Oracle Solaris 10.
Jeff is a regular author, contributor, and speaker at corporate and industry events. His blog can be found at http://blogs.oracle.com/jeffv. He received a Bachelor of Science degree from Rensselaer Polytechnic Institute in Troy, New York.
Revision 1.0, 01/08/2013