Getting Started with OpenStack on Oracle Solaris 11.2

by David Comay

From zero to a full private cloud in minutes.


Published April 2014 (updated June 2014)


Introduction


Oracle Solaris 11.2 provides a complete OpenStack distribution. OpenStack, the popular open source cloud computing software enjoying widespread industry involvement, provides comprehensive self-service environments for sharing and managing compute, network, and storage resources in the data center through a centralized web-based portal. It is integrated into all the core technology foundations of Oracle Solaris 11, so you can now set up an enterprise-ready private cloud infrastructure as a service (IaaS) environment in minutes.


Figure 1. The points of integration between Oracle Solaris and OpenStack

In Oracle Solaris 11.2, the OpenStack Grizzly 2013.1.4 release has been made available through the Oracle Solaris Image Packaging System's package repository. Using the available packages, you can deploy any of the following OpenStack services on the system, which are tightly integrated with the rest of Oracle Solaris:

  • Nova—compute virtualization using Oracle Solaris non-global zones as well as the new kernel zones.
  • Neutron—network virtualization through the use of the new Elastic Virtual Switch (EVS) capability in Oracle Solaris 11.2.
  • Cinder—block storage virtualization using ZFS. Block storage volumes can be made available to local compute nodes or they can be made available remotely via iSCSI.
  • Glance—image virtualization using the new Unified Archive feature of Oracle Solaris 11.2.
  • Horizon—the standard OpenStack dashboard where the cloud infrastructure can be managed.
  • Keystone—the standard OpenStack authentication service.

This document is not meant to be an exhaustive source of information on OpenStack but rather one focused on OpenStack with Oracle Solaris 11.2. Additional information can be found in the OpenStack documentation, which is available at openstack.org.

Additional information about OpenStack on Solaris can be found on the OpenStack Java.net project page and via the project's mailing lists.

The easiest way to start using OpenStack on Oracle Solaris is to download and install the Oracle Solaris 11.2 Beta with OpenStack Unified Archive, which provides a convenient way of getting started with OpenStack in about ten minutes. All six of the essential OpenStack services are preinstalled and preconfigured to make setting up OpenStack on a single system easy. After installation and a small amount of customization, virtual machines (VMs), otherwise known as Nova instances, can be created, assigned block storage, attached to virtual networks, and then managed through an easy-to-use web browser interface. The Unified Archive is preloaded with a pair of Glance images, one suitable for use with non-global zones and the other for kernel zones (solaris-kz branded zones). In addition, through the use of the new archiveadm(1M) command, new archives can be created from global, non-global, and kernel zones running Oracle Solaris 11.2 and then uploaded to the Glance repository for use with OpenStack.

An alternate method of installation, useful for doing multisystem configurations, is to install the OpenStack packages yourself. This installation method will also take roughly ten minutes to complete, although the time for configuration will vary depending on the services deployed. One advantage of this method is that it allows an administrator to install only the OpenStack services necessary on each specific node. Installation can be done manually using the pkg(1) command or by specifying the desired packages through the use of an Oracle Solaris Automated Installer manifest using a network-based installation, as outlined in "Installing Using an Install Server." Table 1 shows the packages that are included at this time:

Table 1. Included Packages
pkg:/cloud/openstack/nova
    OpenStack Nova provides a cloud computing fabric controller that supports a wide variety of virtualization technologies. In addition to its native API, it includes compatibility with the commonly encountered Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3) APIs.

pkg:/cloud/openstack/cinder
    OpenStack Cinder provides an infrastructure for managing block storage volumes in OpenStack. It allows block devices to be exposed and connected to compute instances for expanded storage, better performance, and integration with enterprise storage platforms.

pkg:/cloud/openstack/neutron
    OpenStack Neutron provides an API to dynamically request and configure virtual networks. These networks connect "interfaces" from other OpenStack services (for example, VNICs from Nova VMs). The Neutron API supports extensions to provide advanced network capabilities, for example, quality of service (QoS), access control lists (ACLs), network monitoring, and so on.

pkg:/cloud/openstack/glance
    OpenStack Glance provides services for discovering, registering, and retrieving virtual machine images. Glance has a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image. VM images made available through Glance can be stored in a variety of locations, from simple file systems to object-storage systems such as OpenStack Swift.

pkg:/cloud/openstack/keystone
    OpenStack Keystone is the OpenStack identity service used for authentication between the OpenStack services.

pkg:/cloud/openstack/horizon
    OpenStack Horizon is the canonical implementation of OpenStack's dashboard, which provides a web-based user interface to OpenStack services including Nova, Swift, Keystone, and so on.

In addition, as a convenience, the group package, pkg:/cloud/openstack, can be installed in a similar manner to automatically install all six components.

OpenStack on Oracle Solaris 11.2 does not have any special system requirements beyond those spelled out for Oracle Solaris itself. Additional CPUs, memory, and disk space might be required, however, to support more than a trivial number of Nova instances. See the Oracle Solaris 11.2 Beta System Requirements for details.

Using the Unified Archive Method for Installation

In order to use the Unified Archive method of installation, a suitable target is necessary. This is typically a bare-metal system that can be installed via the Automated Installer, or it can be a kernel zone. Although the Unified Archive can, in theory, be installed inside a non-global zone, the Nova compute virtualization in Oracle Solaris does not support nested non-global zones. As such, using the manual package-based installation method is recommended for those deployments. Services that would be suitable for making available within non-global zones include Keystone, Glance, and Horizon.

Detailed instructions for both methods of installation are included in the README file associated with the archive. Refer to that for more detailed information, but briefly, the Unified Archive can be deployed using a variety of methods:

  • Bare-metal installation using an Automated Installer network service
  • Bare-metal installation using a USB image generated from the Unified Archive using archiveadm(1M)
  • Indirect installation using the Oracle Solaris Automated Installer Boot Image combined with the Unified Archive
  • Direct installation into a kernel zone using the standard zonecfg(1M) and zoneadm(1M) commands

The first two methods are most useful for doing direct system installations, while the third method can be used to install the archive into an Oracle VM VirtualBox instance. Finally, the last method allows you to create a kernel zone installed with the archive using just two commands.

Using the last method, to install the Unified Archive within a kernel zone, simply create a new kernel zone and then supply the path to the downloaded archive as part of the zone installation command, for example:

global# zonecfg -z openstack create -t SYSsolaris-kz
global# zoneadm -z openstack install -a /path/to/downloaded/archive.uar

At this point, the archive will be installed inside a new kernel zone named openstack. To get started, the new zone should be booted and configured through the zone's console:

global# zoneadm -z openstack boot
global# zlogin -C openstack

If nothing appears on the console immediately, press either Enter or Control-L to redraw the screen.

Once the new system has been installed, booted, and then configured, the Elastic Virtual Switch should be configured. This primarily consists of creating a set of public SSH keys for the root, evsuser, and neutron UNIX users and then appending those public keys to the authorized_keys file for the evsuser UNIX user: /var/user/evsuser/.ssh/authorized_keys. Elastic Virtual Switch requires some additional configuration, such as which sort of virtual LAN technology to use (VLAN or VXLAN) and the corresponding IDs or segments. To ease the automation of creating these keys and performing the configuration, a script is supplied under /usr/demo/openstack/configure_evs.py that can be used to finalize the rest of the OpenStack and Elastic Virtual Switch configuration.
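The key setup described above follows a simple pattern: generate a passphrase-less key pair for each account and append the public halves to evsuser's authorized_keys file. The sketch below reproduces that pattern in a scratch directory; the layout is illustrative only, and on a real system the keys live in each account's home directory, as shown in the manual-installation steps later in this article.

```shell
# Illustrative sketch of the key setup pattern, using a scratch
# directory in place of the real evsuser, neutron, and root home
# directories (the paths here are placeholders, not the actual accounts).
set -e
DEMO=$(mktemp -d)

# Generate a passphrase-less RSA key pair for each account.
for user in evsuser neutron root; do
    mkdir -p "$DEMO/$user/.ssh"
    ssh-keygen -q -N '' -t rsa -f "$DEMO/$user/.ssh/id_rsa"
done

# Append every public key to evsuser's authorized_keys, which is what
# allows the EVS controller to be reached over SSH as evsuser.
cat "$DEMO/evsuser/.ssh/id_rsa.pub" \
    "$DEMO/neutron/.ssh/id_rsa.pub" \
    "$DEMO/root/.ssh/id_rsa.pub" \
    >> "$DEMO/evsuser/.ssh/authorized_keys"

wc -l < "$DEMO/evsuser/.ssh/authorized_keys"   # one line per key: 3
```

On a real single-node setup, /usr/demo/openstack/configure_evs.py performs these steps (plus the VLAN/VXLAN configuration) for you.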

Note that when using the Unified Archive installation method, the default Horizon instance is not enabled with Transport Layer Security (TLS). To enable TLS in this configuration, uncomment the following lines in /etc/openstack_dashboard/local_settings.py:

SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')

CSRF_COOKIE_SECURE = True

SESSION_COOKIE_SECURE = True

from horizon.utils import secret_key
SECRET_KEY = secret_key.generate_or_read_from_file(os.path.join(LOCAL_PATH, '.secret_key_store'))

In addition, X.509 certificates need to be installed and the Apache configuration for Horizon needs to be adjusted to account for that. The Horizon configuration step in the next section has additional details on the certificate configuration.
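For testing, a self-signed certificate can stand in for a CA-issued one. The sketch below generates a key and certificate with openssl; the host name and output paths are placeholders, and a production deployment should install certificates from a trusted CA instead.

```shell
# Generate a throwaway self-signed certificate for testing the
# TLS-enabled Horizon configuration (host name and paths are
# placeholders).
CERTDIR=$(mktemp -d)

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=mysystem.example.com" \
    -keyout "$CERTDIR/horizon.key" \
    -out "$CERTDIR/horizon.crt"

# Confirm the certificate parses, and display its subject.
openssl x509 -in "$CERTDIR/horizon.crt" -noout -subject
```

The resulting certificate and key files are what the edited Apache configuration for Horizon needs to point at.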

Using the Manual Package Installation Method

The Oracle Solaris Image Packaging System packages listed in Table 1 can be installed individually or as a group on one or more systems. Once installed, some configuration steps are required to get started. See Appendix A: Common Configuration Parameters for OpenStack for more information on the most common parameters that need to be set.

In general, for a manual installation, the following order of steps is recommended:

  1. Install and enable the RabbitMQ service:

    RabbitMQ provides support for the Advanced Message Queuing Protocol (AMQP), which is used for communication between all OpenStack services. Generally, a single node in the cloud is configured to run RabbitMQ.

    global# pkg install rabbitmq
    global# svcadm enable rabbitmq
    
  2. Customize the Keystone configuration, if desired:

    Edit /etc/keystone/keystone.conf and then enable the service.

    global# svcadm enable keystone
    global# su - keystone -c "keystone-manage pki_setup"
    
  3. Populate the Keystone database:

    This can be done manually or by using the supplied convenience script.

    global# su - keystone -c /usr/demo/openstack/keystone/sample_data.sh
    
  4. Customize the Cinder configuration, if desired:

    Edit /etc/cinder/api-paste.ini and /etc/cinder/cinder.conf and then enable the services. If you wish to use iSCSI for connectivity between your Nova instances and the back-end storage, change the volume_driver option in /etc/cinder/cinder.conf to cinder.volume.drivers.solaris.zfs.ZFSVolumeDriver.

    global# svcadm enable cinder-db
    global# svcadm enable cinder-volume:setup
    global# svcadm enable cinder-api cinder-scheduler cinder-volume:default
    
  5. Customize the Glance configuration, if desired:

    Edit /etc/glance/glance-api.conf, /etc/glance/glance-cache.conf, /etc/glance/glance-registry.conf, and /etc/glance/glance-scrubber.conf, and then enable the services.

    global# svcadm enable glance-db
    global# svcadm enable glance-api glance-registry glance-scrubber
    
  6. Create SSH public keys:

    Create keys for the evsuser, neutron, and root users and append them to the authorized_keys file for evsuser.

    global# su - evsuser -c "ssh-keygen -N '' -f /var/user/evsuser/.ssh/id_rsa -t rsa"
    global# su - neutron -c "ssh-keygen -N '' -f /var/lib/neutron/.ssh/id_rsa -t rsa"
    global# su - root -c "ssh-keygen -N '' -f /root/.ssh/id_rsa -t rsa"
    global# cat /var/user/evsuser/.ssh/id_rsa.pub /var/lib/neutron/.ssh/id_rsa.pub \
    /root/.ssh/id_rsa.pub >> /var/user/evsuser/.ssh/authorized_keys
    
  7. Verify SSH:

    For the same three accounts, verify that SSH connectivity is working correctly by using ssh(1) to connect as evsuser@localhost. For these initial three SSH connections, answer yes to the question about wanting to continue to connect.

    global# su - evsuser -c "ssh evsuser@localhost true"
    global# su - neutron -c "ssh evsuser@localhost true"
    global# su - root -c "ssh evsuser@localhost true"
    
  8. Customize the Neutron configuration, if desired:

    Edit /etc/neutron/quantum.conf, /etc/neutron/plugins/evs/evs_plugin.ini, and /etc/neutron/dhcp_agent.ini, setting the address of the Elastic Virtual Switch controller, and then enable the services.

    global# pkg install rad-evs-controller pip markupsafe
    global# svcadm restart rad:local
    global# evsadm set-prop -p controller=ssh://evsuser@localhost
    global# svcadm enable neutron-server neutron-dhcp-agent
    
  9. Customize the Nova configuration, if desired:

    Edit /etc/nova/api-paste.ini and /etc/nova/nova.conf and then enable the services.

    global# svcadm enable nova-conductor
    global# svcadm restart rad:local
    global# svcadm enable nova-api-ec2 nova-api-osapi-compute nova-cert nova-compute nova-objectstore nova-scheduler
    
  10. Customize Horizon:

    First, customize the Horizon configuration by copying either openstack-dashboard-http.conf or openstack-dashboard-tls.conf from /etc/apache2/2.2/samples-conf.d into the Apache /etc/apache2/2.2/conf.d directory. If TLS is going to be enabled, then the appropriate certificates need to be generated and installed and then the /etc/apache2/2.2/conf.d/openstack-dashboard-tls.conf file needs to be edited to reflect the location of the installed certificates.

    For more information, see the comment in the configuration file and the URLs referenced there. Finally, the default Apache instance should be enabled or restarted.

    global# cp /etc/apache2/2.2/samples-conf.d/openstack-dashboard-http.conf /etc/apache2/2.2/conf.d
    global# svcadm enable apache22
    

    or

    global# cp /etc/apache2/2.2/samples-conf.d/openstack-dashboard-tls.conf /etc/apache2/2.2/conf.d
    global# svcadm enable apache22
    

Booting Your First Nova Instance

After the OpenStack installation is complete and you have enabled the desired OpenStack services (this is mostly taken care of already if you used the Unified Archive), you can log in to the OpenStack dashboard (Horizon) to examine the system and get started with provisioning a trial virtual machine.

To log in to Horizon, point the browser to https://<mysystem>/horizon, where <mysystem> is the name of the system that is running the Horizon service under the Apache web server. If you used the Unified Archive installation method or if you used the supplied /usr/demo/openstack/keystone/sample_data.sh shell script for the manual installation method, the default cloud administrator login is admin with a password of secrete.


Figure 2. The OpenStack Horizon login screen

When you log in as the cloud administrator, there are two panels on the left side of the screen. The rightmost panel (Admin) is the default and is the administrator view. It provides an overall view of the Nova instances and Cinder volumes in use within the cloud. It also allows you to view and edit the Flavor definitions that specify virtual machine characteristics, such as the number of virtual CPUs, the amount of memory, and the disk space assigned to a VM. On Oracle Solaris, this is also where the brand of the underlying Oracle Solaris Zone is defined: solaris for non-global zones and solaris-kz for kernel zones. Finally, from a system provisioning perspective, this panel also allows you to create virtual networks and routers for use by cloud users.


Figure 3. The OpenStack Horizon dashboard showing the administration panel

The other primary elements that the cloud administrator can view and edit concern projects (also known as tenants) and users. Projects provide a mechanism to group and isolate ownership of virtual computing resources and users, while users are the persons or services that use those resources in the cloud.

The leftmost panel of the OpenStack dashboard (Project) shows the project the user is using. For the admin user, this would be the demo project. Clicking the panel provides a set of options a cloud user can perform as a user under this project. If the Unified Archive method of installation was used, clicking Images & Snapshots will reveal that the Glance service has been prepopulated with two images: one for non-global zone-based instances and the other for kernel zone-based instances. And under Access & Security, users can upload their own personal SSH public key to the Nova service. This public key is automatically placed in the root user authorized_keys file in the new instance, which allows a user to log in to the instance remotely.

To create a new instance, a cloud user (the admin user or any other user in a project) simply needs to click Instances under Manage Compute. Clicking Launch Instance on the right side opens a dialog box where the cloud user can specify the type of image (by default, non-global zone or kernel zone are the choices), the name of the new instance, and, finally, the flavor of the instance. The flavor should match the zone brand specified in the image, and the size chosen should reflect the requirements of the intended workload.

Under the Access & Security tab in the dialog box, you can choose which uploaded SSH keypair to install in the new instance to be created; and under the Network tab, you can choose which network the instance should be attached to. Finally, clicking Launch causes the instance to be created, installed, and then booted. The time required for a new instance to be made available depends on a number of factors including the size of the images, the resources provided in the flavor definition chosen, and where OpenStack has placed the root file system of the new instance.

In the Instances screen, you can click the name of the instance to see general information as well as view the instance's console log. By reloading this particular page, you can see updates that have taken place.

Note that by clicking the Volumes label on the left side of the screen, you can see the Cinder volumes that have been created. Generally, each instance will have at least one volume assigned to it and displayed here. In a multinode configuration, this volume might be remote from the instance, accessed using a protocol such as iSCSI or Fibre Channel. Instances based on non-global zones have a volume assigned only if the volume resides on a different node in the cloud.

Finally, by clicking the Network Topology label on the left side, you can see a visual representation of the cloud network including all subnet segments, virtual routers, and active instances.

Adding Images to Glance

If you used the OpenStack Unified Archive to get an OpenStack instance up and running, you will have noticed that it was preloaded with a pair of Glance images, one suitable for use with non-global zones and the other for kernel zones. If you set up OpenStack manually, you need to add images to Glance that can be used with your first Nova instance. Unified Archives are the image format used for OpenStack on Oracle Solaris. You can use the images Oracle has made available or create your own.

The following shows how to create and install a non-global zone called myzone, capture a Unified Archive of it while it is running, and then upload the archive to the Glance repository (adjust the commands for your configuration):

global# zonecfg -z myzone create
global# zoneadm -z myzone install
global# sed '/^PermitRootLogin/s/no$/without-password/' \
< /etc/ssh/sshd_config > /system/volatile/sed.$$
global# cp /system/volatile/sed.$$ /etc/ssh/sshd_config
global# archiveadm create -z myzone /var/tmp/myzone.uar
global# glance --os-username glance --os-password glance \
  --os-tenant-name service --os-auth-url http://localhost:5000/v2.0 \
  image-create --container-format bare --disk-format raw \
  --is-public true --name "Oracle Solaris 11.2 NGZ" \
  --property architecture=x86_64 \
  --property hypervisor_type=solariszones \
  --property vm_mode=zones < /var/tmp/myzone.uar

This image can then be seen in the Images & Snapshots screen.


Figure 4. The Oracle Solaris 11.2 non-global zone available in Images & Snapshots through Glance

Appendix A: Common Configuration Parameters for OpenStack

Each OpenStack service has many configuration options available through its configuration file. Some of these options are for features not supported on Oracle Solaris or for vendor-specific drivers. The OpenStack community documentation referred to earlier is the definitive source for the non-Oracle Solaris configuration parameters, but some of the most common parameters to adjust in either a single-node or multinode configuration are shown in Table 2.

Table 2. Common Parameters
Each entry below shows an option with its default value, followed by common alternate values.

/etc/cinder/api-paste.ini
  service_host = 127.0.0.1 (alternate: host name or IP address of the Keystone service)
  auth_host = 127.0.0.1 (alternate: host name or IP address of the Keystone service)
  admin_tenant_name = %SERVICE_TENANT_NAME% (alternate: service)
  admin_user = %SERVICE_USER% (alternate: cinder)
  admin_password = %SERVICE_PASSWORD% (alternate: cinder)

/etc/cinder/cinder.conf
  sql_connection = sqlite:///$state_path/$sqlite_db (alternate: URI for a remote MySQL database)
  glance_host = $my_ip (alternate: host name or IP address of the Glance service)
  auth_strategy = noauth (alternate: keystone)
  rabbit_host = localhost (alternate: host name or IP address of the RabbitMQ service)
  volume_driver = cinder.volume.drivers.solaris.zfs.ZFSVolumeDriver (alternate: cinder.volume.drivers.solaris.zfs.ZFSISCSIDriver)
  zfs_volume_base = rpool/cinder (alternate: a different ZFS pool/data set)

/etc/glance/glance-api.conf
  sql_connection = sqlite:////var/lib/glance/glance.sqlite (alternate: URI for a remote MySQL database)
  rabbit_host = localhost (alternate: host name or IP address of the RabbitMQ service)
  auth_host = 127.0.0.1 (alternate: host name or IP address of the Keystone service)
  admin_tenant_name = %SERVICE_TENANT_NAME% (alternate: service)
  admin_user = %SERVICE_USER% (alternate: glance)
  admin_password = %SERVICE_PASSWORD% (alternate: glance)

/etc/glance/glance-cache.conf
  auth_url = http://127.0.0.1:5000/v2.0/ (alternate: URI for the Keystone location)
  admin_tenant_name = %SERVICE_TENANT_NAME% (alternate: service)
  admin_user = %SERVICE_USER% (alternate: glance)
  admin_password = %SERVICE_PASSWORD% (alternate: glance)

/etc/glance/glance-registry.conf
  sql_connection = sqlite:///glance.sqlite (alternate: URI for a remote MySQL database)
  auth_host = 127.0.0.1 (alternate: host name or IP address of the Keystone service)
  admin_tenant_name = %SERVICE_TENANT_NAME% (alternate: service)
  admin_user = %SERVICE_USER% (alternate: glance)
  admin_password = %SERVICE_PASSWORD% (alternate: glance)

/etc/keystone/keystone.conf
  admin_token = ADMIN (alternate: a token created using # openssl rand -hex 10)
  connection = sqlite:////var/lib/keystone/keystone.sqlite (alternate: URI for a remote MySQL database)

/etc/neutron/dhcp_agent.ini
  evs_controller = ssh://evsuser@localhost (alternate: URI for the Elastic Virtual Switch controller)

/etc/neutron/l3_agent.ini
  router_id = (no default; a router UUID created using # neutron router-create)
  evs_controller = ssh://evsuser@localhost (alternate: URI for the Elastic Virtual Switch controller)

/etc/neutron/plugins/evs/evs_plugin.ini
  evs_controller = ssh://evsuser@localhost (alternate: URI for the Elastic Virtual Switch controller)

/etc/neutron/quantum.conf
  auth_strategy = keystone (alternate: noauth)
  rabbit_host = localhost (alternate: host name or IP address of the RabbitMQ service)
  auth_host = 127.0.0.1 (alternate: host name or IP address of the Keystone service)
  admin_tenant_name = %SERVICE_TENANT_NAME% (alternate: service)
  admin_user = %SERVICE_USER% (alternate: neutron)
  admin_password = %SERVICE_PASSWORD% (alternate: neutron)

/etc/nova/api-paste.ini
  auth_host = 127.0.0.1 (alternate: host name or IP address of the Keystone service)
  admin_tenant_name = %SERVICE_TENANT_NAME% (alternate: service)
  admin_user = %SERVICE_USER% (alternate: nova)
  admin_password = %SERVICE_PASSWORD% (alternate: nova)

/etc/nova/nova.conf
  auth_strategy = noauth (alternate: keystone)
  glance_host = $my_ip (alternate: host name or IP address of the Glance service)
  quantum_url = http://127.0.0.1:9696 (alternate: URI for the Neutron service location)
  quantum_admin_username = <None> (alternate: neutron)
  quantum_admin_password = <None> (alternate: neutron)
  quantum_admin_tenant_name = <None> (alternate: service)
  quantum_admin_auth_url = http://localhost:5000/v2.0 (alternate: URI for the Keystone service location)
  sql_connection = sqlite:////nova/openstack/common/db/$sqlite_db (alternate: URI for a remote MySQL database)
  rabbit_host = localhost (alternate: host name or IP address of the RabbitMQ service)
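All of these configuration files use a simple INI-style key=value syntax, so changes can be scripted. The sketch below sets two of the options from Table 2 on a scratch stand-in for cinder.conf; controller.example.com is a placeholder, and on a real node you would edit /etc/cinder/cinder.conf itself and then restart the Cinder services.

```shell
# Edit a scratch stand-in for /etc/cinder/cinder.conf; the host name
# controller.example.com is a placeholder.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[DEFAULT]
rabbit_host=localhost
auth_strategy=noauth
EOF

# Point the node at a central RabbitMQ host and switch on Keystone
# authentication, writing to a temporary file and moving it into place.
sed -e 's/^rabbit_host=.*/rabbit_host=controller.example.com/' \
    -e 's/^auth_strategy=.*/auth_strategy=keystone/' \
    "$CONF" > "$CONF.new" && mv "$CONF.new" "$CONF"

grep '^rabbit_host=' "$CONF"   # rabbit_host=controller.example.com
```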

Appendix B: Release Notes

The following are known issues with Oracle Solaris 11.2 Beta with respect to OpenStack:

  • 18562372 Failed to create a new project under Horizon

    When trying to create a new project using the OpenStack dashboard, a pop-up window saying Error: An error occurred. Please try again. might appear.

    Workaround: Edit the file /etc/openstack_dashboard/local_settings.py and change the value of the OPENSTACK_KEYSTONE_DEFAULT_ROLE parameter from Member to _member_. Then restart the svc:/network/http:apache22 service for the change to take effect.

  • 18610375 Terminating a VM instance doesn't release floating ip associated with it

    When an instance that has a floating IP address associated with it is terminated, future use of the OpenStack dashboard might produce a web page with the message An unexpected error has occurred. Try refreshing the page. If that doesn't help, contact your local administrator.

    Workaround: Manually disassociate the floating IP address for the recently terminated instance.

    First, find the UUID of the floating IP address by matching the fixed IP address that was assigned to the recently terminated instance:

    global$ neutron floatingip-list
    

    Then run the following command to disassociate the floating IP address:

    global$ neutron floatingip-disassociate <UUID>
    
  • 18658040 zfs.py can't handle terabyte pools

    If Cinder is assigned a ZFS data set that is larger than 1 TB, the svc:/application/openstack/cinder/cinder-volume:default service might end up in maintenance mode.

    Workaround: Set the zfs_volume_base parameter in /etc/cinder/cinder.conf to reference a data set in an alternate ZFS pool that is less than 1 TB and then clear the service in maintenance.
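The role change in the first workaround above (bug 18562372) can also be scripted. This sketch applies it with sed to a scratch stand-in for /etc/openstack_dashboard/local_settings.py; the exact quoting in the real file may differ, so adjust the pattern to match.

```shell
# Apply the OPENSTACK_KEYSTONE_DEFAULT_ROLE workaround to a scratch
# stand-in for local_settings.py (the quoting here is illustrative).
SETTINGS=$(mktemp)
echo 'OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"' > "$SETTINGS"

# Change the default role from Member to _member_.
sed 's/^\(OPENSTACK_KEYSTONE_DEFAULT_ROLE = \)"Member"/\1"_member_"/' \
    "$SETTINGS" > "$SETTINGS.new" && mv "$SETTINGS.new" "$SETTINGS"

cat "$SETTINGS"   # OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
```

Remember that on a real system the svc:/network/http:apache22 service must be restarted afterward for the change to take effect.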

Appendix C: Known Limitations

In the initial OpenStack release included with Oracle Solaris 11.2 Beta, there are several limitations.

  • The size of the ZFS pool containing the ZFS data set that is dedicated to Cinder (by default, rpool/cinder) must be less than 1 TB.
  • There is no remote console access to instances via the OpenStack dashboard. Instead, users should upload an SSH keypair using Horizon, which will be pushed into the new instance's authorized_keys file for root.
  • At the current time, the version of Neutron included with Oracle Solaris 11.2 Beta supports only a single plugin for network virtualization. As a result, only Nova nodes running Oracle Solaris are fully supported. Grizzly-based Nova nodes from other vendors can be used, but those nodes cannot participate in the virtual networks managed by Neutron.

See Also

See the OpenStack on Oracle Solaris Technology Spotlight web page.


About the Author

David Comay is a senior principal software engineer who has been at Sun and Oracle since 1996 when he began working in the networking area specializing in routing protocols and IPv6. He was the OS/Networking technical lead for the first two Oracle Solaris 8 update releases as well as for Oracle Solaris 9. He subsequently moved into the resource management area where he was a member of the original Oracle Solaris Zones project team. He led that team after its initial project integration through the end of Oracle Solaris 10 and for several of the subsequent Oracle Solaris 10 update releases. After driving the Oracle Solaris Modernization program and being the technical lead for the OpenSolaris binary releases as well as for Oracle Solaris 11, David is now the architect for the Oracle Solaris cloud strategy focusing initially on the integration of OpenStack with Oracle Solaris.

Revision 1.0, 04/28/2014
Revision 1.1, 06/20/2014
In the "Adding Images to Glance" section, changed this command:
global# zonecfg -z myzone -t create

to this:
global# zonecfg -z myzone create

And changed this command:
global# archiveadm -z create -z myzone /var/tmp/myzone.uar

to this:
global# archiveadm create -z myzone /var/tmp/myzone.uar
